Index of papers in Proc. ACL 2009 that mention
  • feature weights
Liu, Yang and Mi, Haitao and Feng, Yang and Liu, Qun
Background
where h_m is a feature function, λ_m is the associated feature weight, and Z(f) is a normalization constant:
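The log-linear model this excerpt refers to can be sketched as follows (a minimal illustration only; the feature functions and candidate set here are hypothetical, not the paper's):

```python
import math

def unnormalized_score(e, f, features, weights):
    """exp(sum_m lambda_m * h_m(e, f)) for one candidate translation e."""
    return math.exp(sum(w * h(e, f) for h, w in zip(features, weights)))

def normalize(candidates, f, features, weights):
    """Divide by Z(f), the sum of unnormalized scores over all candidates."""
    scores = {e: unnormalized_score(e, f, features, weights) for e in candidates}
    z = sum(scores.values())  # Z(f)
    return {e: s / z for e, s in scores.items()}
```

With any feature set, the normalized scores form a probability distribution over the candidate translations.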
Conclusion
As our decoder accounts for multiple derivations, we extend the MERT algorithm to tune feature weights with respect to BLEU score for max-translation decoding.
Experiments
We found that accounting for all possible derivations in max-translation decoding resulted in a small negative effect on BLEU score (from 30.11 to 29.82), even though the feature weights were tuned with respect to BLEU score.
Experiments
We concatenate and normalize their feature weights for the joint decoder.
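The concatenate-and-normalize step for the joint decoder might look like the following sketch (the L1 normalization scheme is an assumption; the excerpt does not specify which norm is used):

```python
def join_weights(weight_vectors):
    """Concatenate per-decoder feature weight vectors, then rescale the
    joint vector to unit L1 norm (assumed normalization)."""
    joint = [w for vec in weight_vectors for w in vec]
    total = sum(abs(w) for w in joint)
    return [w / total for w in joint]
```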
Extended Minimum Error Rate Training
Minimum error rate training (Och, 2003) is widely used to optimize feature weights for a linear model (Och and Ney, 2002).
Extended Minimum Error Rate Training
The key idea of MERT is to tune one feature weight at a time to minimize the error rate while keeping the others fixed.
Extended Minimum Error Rate Training
where a is the feature value of the current dimension, λ is the feature weight being tuned, and b is the dot product of the other dimensions.
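Because each candidate's model score is linear in the weight being tuned (a·λ + b), the 1-best candidate changes only at a finite number of crossing points. A toy single-sentence sketch of this line search (Och's exact method enumerates the intersection points; a grid over λ keeps the sketch short, and the candidate tuples here are hypothetical):

```python
def best_candidate(cands, lam):
    """Each candidate is (a, b, error): its model score at weight lam is a*lam + b."""
    return max(cands, key=lambda c: c[0] * lam + c[1])

def line_search(cands, lams):
    """Pick the lambda whose 1-best candidate has the lowest error,
    with all other weights held fixed (folded into b)."""
    return min(lams, key=lambda lam: best_candidate(cands, lam)[2])
```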
Introduction
As multiple derivations are used for finding optimal translations, we extend the minimum error rate training (MERT) algorithm (Och, 2003) to tune feature weights with respect to BLEU score for max-translation decoding (Section 4).
“feature weights” is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Li, Mu and Duan, Nan and Zhang, Dongdong and Li, Chi-Ho and Zhou, Ming
Collaborative Decoding
Let λ_m be the feature weight vector for member decoder d_m; the training procedure proceeds as follows:
Collaborative Decoding
For each decoder d_m, find a new feature weight vector λ'_m which optimizes the specified evaluation criterion L on D using the MERT algorithm, based on the n-best list H_m generated by d_m:
Collaborative Decoding
where T denotes the translations selected by re-ranking the translations in H_m using the new feature weight vector λ'_m.
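The training procedure these excerpts describe — repeatedly re-tuning each member decoder's weights via MERT on its own n-best list — can be sketched as a simple loop (the `mert(nbest, dev_set)` signature and the decoder interface are assumptions for illustration, not the paper's API):

```python
def co_training_mert(decoders, weights, dev_set, mert, iters=3):
    """Iteratively re-tune each member decoder d_m's feature weights.

    decoders: list of callables d_m(dev_set, weights) -> n-best list H_m
    mert:     stand-in optimizer of criterion L on D (hypothetical signature)
    """
    for _ in range(iters):
        for m, d_m in enumerate(decoders):
            nbest = d_m(dev_set, weights)      # n-best list H_m from d_m
            weights[m] = mert(nbest, dev_set)  # new weight vector for d_m
    return weights
```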
“feature weights” is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Padó, Sebastian and Galley, Michel and Jurafsky, Dan and Manning, Christopher D.
Expt. 2: Predicting Pairwise Preferences
Feature Weights.
Expt. 2: Predicting Pairwise Preferences
Finally, we make two observations about feature weights in the RTER model.
Expt. 2: Predicting Pairwise Preferences
Second, good MT evaluation feature weights are not good weights for RTE.
“feature weights” is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: