Index of papers in Proc. ACL 2008 that mention
  • error rate
Bartlett, Susan and Kondrak, Grzegorz and Cherry, Colin
Abstract
In comparison with a state-of-the-art syllabification system, we reduce the syllabification word error rate for English by 33%.
Introduction
With this approach, we reduce the error rate for English by 33%, relative to the best existing system.
L2P Performance
In English, perfect syllabification produces a relative error reduction of 10.6%, and our model captures over half of the possible improvement, reducing the error rate by 6.0%.
L2P Performance
Although perfect syllabification reduces their L2P relative error rate by 18%, they find that their learned model actually increases the error rate.
L2P Performance
For Dutch, perfect syllabification reduces the relative L2P error rate by 17.5%; we realize over 70% of the available improvement with our syllabification model, reducing the relative error rate by 12.4%.
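The "over half" and "over 70%" claims above are just the ratio of achieved to available relative error reduction; a quick sanity check in Python (an illustration using the numbers quoted in the excerpts, not code from the paper):

```python
# Fraction of the available L2P improvement captured by the
# syllabification model, per the two excerpts above.
for language, achieved, available in [("English", 6.0, 10.6), ("Dutch", 12.4, 17.5)]:
    print(f"{language}: {achieved / available:.1%} of the possible improvement")
# English: 56.6% -> "over half"
# Dutch:   70.9% -> "over 70%"
```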
Syllabification Experiments
Syllable break error rate (SB ER) captures the incorrect tags that cause an error in syllabification.
Syllabification Experiments
Table 1 presents the word accuracy and syllable break error rate achieved by each of our tag sets on both the CELEX and NETtalk datasets.
Syllabification Experiments
Overall, our best tag set lowers the error rate by one-third, relative to SbA’s performance.
error rate is mentioned in 12 sentences in this paper.
Fleischman, Michael and Roy, Deb
Abstract
Results show that grounded language models improve perplexity and word error rate over text based language models, and further, support video information retrieval better than human generated speech transcriptions.
Evaluation
We evaluate our grounded language modeling approach using 3 metrics: perplexity, word error rate, and precision on an information retrieval task.
Evaluation
4.2 Word Accuracy and Error Rate
Evaluation
Word error rate (WER) is a normalized measure of the number of word insertions, substitutions, and deletions required to transform the output transcription of an ASR system to a human generated gold standard transcription of the same utterance.
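For concreteness, WER can be computed with a standard word-level Levenshtein (minimum edit distance) alignment; a minimal Python sketch (identifiers are illustrative, not from the paper):

```python
def word_error_rate(reference, hypothesis):
    """WER = (insertions + deletions + substitutions) / reference length,
    computed via a word-level Levenshtein alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match or substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution over four reference words -> WER = 0.25
print(word_error_rate("the cat sat down", "the cat sad down"))
```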
Introduction
Results indicate improved performance using three metrics: perplexity, word error rate, and precision on an information retrieval task.
error rate is mentioned in 13 sentences in this paper.
Goldwater, Sharon and Jurafsky, Dan and Manning, Christopher D.
Abstract
This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates.
Abstract
(3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.
Data
The standard measure of error used in ASR is word error rate (WER), computed as WER = 100(I + D + S) / R, where I, D, and S are the number of insertions, deletions, and substitutions found by aligning the ASR hypotheses with the reference transcriptions, and R is the number of reference words.
Data
Since we wish to know what features of a reference word increase the probability of an error, we need a way to measure the errors attributable to individual words — an individual word error rate (IWER).
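These excerpts do not reproduce the paper's exact IWER definition (in particular, how insertions are shared among neighbouring reference words), but the starting point is a per-word tagging of the same alignment used for WER; a hedged sketch:

```python
def per_word_errors(ref, hyp):
    """Tag each reference word as 'cor', 'sub', or 'del' by backtracking a
    word-level Levenshtein alignment; insertions are returned as a separate
    count, since attributing them to individual words is exactly the design
    choice behind a measure like IWER."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    tags, i, j, insertions = [], len(ref), len(hyp), 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            tags.append('cor' if ref[i - 1] == hyp[j - 1] else 'sub')
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            tags.append('del')
            i -= 1
        else:
            insertions += 1
            j -= 1
    tags.reverse()
    return list(zip(ref, tags)), insertions

# e.g. ([('the', 'cor'), ('cat', 'cor'), ('sat', 'del')], 0)
print(per_word_errors("the cat sat".split(), "the cat".split()))
```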
Introduction
Previous work on recognition of spontaneous monologues and dialogues has shown that infrequent words are more likely to be misrecognized (Fosler-Lussier and Morgan, 1999; Shinozaki and Furui, 2001) and that fast speech increases error rates (Siegler and Stern, 1995; Fosler-Lussier and Morgan, 1999; Shinozaki and Furui, 2001).
Introduction
Siegler and Stern (1995) and Shinozaki and Furui (2001) also found higher error rates in very slow speech.
Introduction
Word length (in phones) has also been found to be a useful predictor of higher error rates (Shinozaki and Furui, 2001).
error rate is mentioned in 36 sentences in this paper.
Kaufmann, Tobias and Pfister, Beat
Abstract
We report a significant reduction in word error rate compared to a state-of-the-art baseline system.
Experiments
For a given test set we could then compare the word error rate of the baseline system with that of the extended system employing the grammar-based language model.
Experiments
…an exceptionally high baseline word error rate.
Experiments
These classes are interviews (a word error rate of 36.1%), sports reports (28.4%) and press conferences (25.7%).
Language Model 2.1 The General Approach
The influence of N on the word error rate is discussed in the results section.
error rate is mentioned in 15 sentences in this paper.
Talbot, David and Brants, Thorsten
Conclusions
Experiments have shown that this randomized language model can be combined with entropy pruning to achieve further memory reductions; that error rates occurring in practice are much lower than those predicted by theoretical analysis due to the use of runtime sanity checks; and that the same translation quality as a lossless language model representation can be achieved when using 12 ‘error’ bits, resulting in approx.
Experiments
Section (3) analyzed the theoretical error rate; here, we measure error rates in practice when retrieving n-grams for approx.
Experiments
The error rates for bigrams are close to their expected values.
Perfect Hash-based Language Models
There is a tradeoff between space and error rate since the larger B is, the lower the probability of a false positive.
Perfect Hash-based Language Models
For example, if |V| is 128 then taking B = 1024 gives an error rate of e = 128/1024 = 0.125, with each entry in A using ⌈log2 1024⌉ = 10 bits.
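The general tradeoff implied by this example, assuming the error rate is |V|/B and each entry stores ⌈log2 B⌉ bits as in the excerpt, can be tabulated directly (a small illustration, not the paper's code):

```python
import math

def fingerprint_tradeoff(value_range, codomain_sizes):
    """Space/error tradeoff: a larger B lowers the false-positive
    rate |V|/B but costs more bits per entry (ceil(log2 B))."""
    for B in codomain_sizes:
        print(f"B={B:6d}  bits/entry={math.ceil(math.log2(B)):2d}  "
              f"error rate={value_range / B:.4f}")

# Reproduces the quoted example at B = 1024 (10 bits, error rate 0.125)
fingerprint_tradeoff(128, [256, 1024, 4096, 16384])
```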
Perfect Hash-based Language Models
Querying each of these arrays for each n-gram requested would be inefficient and inflate the error rate since a false positive could occur on each individual array.
Scaling Language Models
The space required in such a lossy encoding depends only on the range of values associated with the n-grams and the desired error rate, i.e.
error rate is mentioned in 7 sentences in this paper.
Bisani, Maximilian and Vozila, Paul and Divay, Olivier and Adams, Jeff
Experimental evaluation
It is hard to quote the verbatim word error rate of the recognizer, because this would require a careful and time-consuming manual transcription of the test set.
Experimental evaluation
Using the alignment we compute precision and recall for section headings and punctuation marks, as well as the overall token error rate.
Experimental evaluation
It should be noted that the error rate derived in this way is not comparable to word error rates usually reported in speech recognition research.
Probabilistic model
The decision rule (1) minimizes the document error rate .
Transformation based learning
This method iteratively improves the match (as measured by token error rate) of a collection of corresponding source and target token sequences by positing and applying a sequence of substitution rules.
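A minimal sketch of that greedy loop, with rules represented as callables on token sequences and the stopping criterion assumed (neither detail is given in the excerpt):

```python
def transformation_based_learning(sources, targets, candidate_rules, token_error):
    """Iteratively pick the substitution rule that most reduces the total
    token error over the corpus, apply it to every source sequence, and
    stop when no candidate rule yields an improvement."""
    learned = []
    while True:
        best_rule = None
        best_error = sum(token_error(s, t) for s, t in zip(sources, targets))
        for rule in candidate_rules:
            error = sum(token_error(rule(s), t) for s, t in zip(sources, targets))
            if error < best_error:
                best_rule, best_error = rule, error
        if best_rule is None:
            return learned
        learned.append(best_rule)
        sources = [best_rule(s) for s in sources]

# A candidate rule might rewrite a spoken token into its written form, e.g.
#   lambda toks: ["." if t == "period" else t for t in toks]
# with token_error a Levenshtein-based token error rate (purely illustrative).
```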
error rate is mentioned in 6 sentences in this paper.
Ganchev, Kuzman and Graça, João V. and Taskar, Ben
Introduction
Fraser and Marcu (2007) note that none of the tens of papers published over the last five years has shown that significant decreases in alignment error rate (AER) result in significant increases in translation performance.
Introduction
After presenting the models and the algorithm in Sections 2 and 3, in Section 4 we examine how the new alignments differ from standard models, and find that the new method consistently improves word alignment performance, measured either as alignment error rate or weighted F-score.
Word alignment results
(2008) show that alignment error rate (Och and Ney, 2003) can be improved with agreement constraints.
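For reference, AER as defined by Och and Ney (2003) scores a predicted alignment A against sure links S and possible links P (with S ⊆ P); a minimal sketch, representing alignments as sets of index pairs (an illustrative choice):

```python
def alignment_error_rate(predicted, sure, possible):
    """AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|)  (Och and Ney, 2003),
    where the sure links are a subset of the possible links."""
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# All predicted links are possible and two are sure:
# 1 - (2 + 3) / (3 + 2) = 0.0 (a perfect score)
print(alignment_error_rate({(0, 0), (1, 1), (2, 2)},
                           {(0, 0), (1, 1)},
                           {(0, 0), (1, 1), (2, 2)}))
```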
error rate is mentioned in 3 sentences in this paper.