Index of papers in Proc. ACL 2009 that mention
  • error rate
Liu, Yang and Mi, Haitao and Feng, Yang and Liu, Qun
Experiments
One possible reason is that we only used n-best derivations instead of all possible derivations for minimum error rate training.
Extended Minimum Error Rate Training
Minimum error rate training (Och, 2003) is widely used to optimize feature weights for a linear model (Och and Ney, 2002).
Extended Minimum Error Rate Training
The key idea of MERT is to tune one feature weight at a time to minimize the error rate while keeping the others fixed.
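As an illustration of this one-weight-at-a-time idea, the sketch below does coordinate descent over a fixed grid of candidate values. This is a simplification, not Och's (2003) algorithm: Och's line search exploits the piecewise-constant shape of the error surface over an n-best list to find the exact optimum along each coordinate, whereas this hypothetical version just tries grid points. The function name and the "error_rate" callable are illustrative, not from the paper.

    def coordinate_descent_mert(weights, error_rate, grid, passes=3):
        # Tune one feature weight at a time to minimize the error rate,
        # keeping all other weights fixed. `error_rate` maps a full weight
        # vector to a corpus-level error score (e.g. 1 - BLEU).
        best = list(weights)
        best_err = error_rate(best)
        for _ in range(passes):
            for i in range(len(best)):
                for v in grid:
                    trial = best[:i] + [v] + best[i + 1:]
                    err = error_rate(trial)
                    if err < best_err:
                        best_err, best = err, trial
        return best, best_err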
Extended Minimum Error Rate Training
Unfortunately, minimum error rate training cannot be directly used to optimize feature weights of max-translation decoding because Eq.
Introduction
As multiple derivations are used for finding optimal translations, we extend the minimum error rate training (MERT) algorithm (Och, 2003) to tune feature weights with respect to BLEU score for max-translation decoding (Section 4).
error rate is mentioned in 5 sentences in this paper.
Jeon, Je Hun and Liu, Yang
Experiments and results
Table 2: Percentage of positive samples, and average error rate for positive (P) and negative (N) samples for the first 20 iterations using the agreement-based and our confidence labeling methods.
Experiments and results
Table 2 shows the percentage of the positive samples added for the first 20 iterations, and the average labeling error rate of those samples for the self-labeled positive and negative classes for the two methods.
Experiments and results
The agreement-based random selection added more negative samples, which also had a higher error rate than the positive samples.
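For concreteness, the per-class labeling error rate reported in Table 2 can be computed as the fraction of self-labeled samples in a class whose gold label disagrees. A minimal sketch, with names of my own choosing rather than the paper's:

    def labeling_error_rate(self_labels, gold_labels, cls):
        # Error rate among samples that self-training labeled as `cls`:
        # the fraction of those samples whose gold label differs.
        picked = [(s, g) for s, g in zip(self_labels, gold_labels) if s == cls]
        if not picked:
            return 0.0
        return sum(s != g for s, g in picked) / len(picked)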
error rate is mentioned in 4 sentences in this paper.
Zhu, Xiaodan and Penn, Gerald and Rudzicz, Frank
Abstract
We compare the performance of this model with that achieved using manual and automatic transcripts, and find that this new approach is roughly equivalent to having access to ASR transcripts with word error rates in the 33–37% range, without actually having to do the ASR, and it better handles utterances with out-of-vocabulary words.
Experimental results
Since ASR performance can vary greatly as we discussed above, we compare our system against automatic transcripts having word error rates of 12.6%, 20.9%, 29.2%, and 35.5% on the same speech source.
Experimental results
[Figure: performance plotted against word error rate (0 to 0.5), in panels for summary lengths Len=20% (Rand=0.324, 0.340) and Len=30% (Rand=0.389, 0.402).]
Experimental setup
These transcripts have a word error rate of 12.6%, which is comparable to the best accuracies obtained in the literature on this data set.
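Word error rate, as used throughout this paper, is the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the reference length. A standard dynamic-programming sketch, assuming a non-empty reference:

    def word_error_rate(reference, hypothesis):
        # WER = (substitutions + insertions + deletions) / len(reference).
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j].
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

For example, word_error_rate("the cat sat", "the cat sat down") is 1/3: one insertion against a three-word reference.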
error rate is mentioned in 4 sentences in this paper.
Kumar, Shankar and Macherey, Wolfgang and Dyer, Chris and Och, Franz
Abstract
Minimum Error Rate Training (MERT) and Minimum Bayes-Risk (MBR) decoding are used in most current state-of-the-art Statistical Machine Translation (SMT) systems.
Introduction
Two popular techniques that incorporate the error criterion are Minimum Error Rate Training (MERT) (Och, 2003) and Minimum Bayes-Risk (MBR) decoding (Kumar and Byrne, 2004).
Minimum Bayes-Risk Decoding
This reranking can be done for any sentence-level loss function such as BLEU (Papineni et al., 2001), Word Error Rate, or Position-independent Error Rate.
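The MBR reranking of an n-best list can be sketched directly from its definition: choose the hypothesis with minimum expected loss, where the expectation is taken over the model's normalized posterior on the same list. A minimal sketch, assuming a generic sentence-level "loss" such as WER or 1 minus sentence-level BLEU; the names are illustrative:

    import math

    def mbr_rerank(nbest, loss):
        # nbest: list of (hypothesis, log_probability) pairs.
        # Select argmin over h of: sum over h' of P(h' | source) * loss(h, h').
        m = max(lp for _, lp in nbest)
        probs = [math.exp(lp - m) for _, lp in nbest]
        z = sum(probs)
        probs = [p / z for p in probs]

        def risk(h):
            return sum(p * loss(h, other) for (other, _), p in zip(nbest, probs))

        return min((h for h, _ in nbest), key=risk)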
error rate is mentioned in 3 sentences in this paper.
Merlo, Paola and van der Plas, Lonneke
Amount of Information in Semantic Roles Inventory
Table 2: Percent error rate reduction (ERR) across role labelling sets in three tasks in Zapirain et al. (2008).
Amount of Information in Semantic Roles Inventory
[…] Zapirain et al. (2008) and calculate the reduction in error rate based on this differential baseline for the two annotation schemes.
Amount of Information in Semantic Roles Inventory
VerbNet has better role-generalising ability overall, as its reduction in error rate is greater than PropBank's (first line of Table 2), but it is more degraded by the lack of verb information (second and third lines of Table 2).
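Error rate reduction, as used in Table 2, is conventionally the relative drop in error from a baseline to a system (this assumes the paper uses the standard definition):

    ERR = (E_baseline - E_system) / E_baseline

For example, cutting the error rate from 20% to 15% gives ERR = (0.20 - 0.15) / 0.20 = 25%.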
error rate is mentioned in 3 sentences in this paper.