Index of papers in Proc. ACL 2013 that mention
  • feature weights
Wang, Chenguang and Duan, Nan and Zhou, Ming and Zhang, Ming
Paraphrasing for Web Search
We utilize minimum error rate training (MERT) (Och, 2003) to optimize feature weights of the paraphrasing model according to NDCG.
Paraphrasing for Web Search
MERT is used to optimize feature weights of our linear-formed paraphrasing model.
Paraphrasing for Web Search
$\hat{\lambda}_1^M = \arg\min_{\lambda_1^M} \Big\{ \sum_{i=1}^{S} \mathrm{Err}\big(\mathrm{Label}_i, \hat{c}_i;\ \lambda_1^M\big) \Big\}$ The objective of MERT is to find the optimal feature weight vector $\hat{\lambda}_1^M$ that minimizes the error criterion Err according to the NDCG scores of top-l paraphrase candidates.
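As a companion to the formula above, here is a minimal sketch of MERT-style tuning, not the authors' implementation: a greedy coordinate-wise grid search (standing in for Och's exact line search) over the feature weight vector, minimizing Err = 1 − NDCG on a toy development set. The `ndcg` helper, the search grid, and the data are all illustrative assumptions.

```python
import numpy as np

def ndcg(ranked_labels, k=10):
    """NDCG@k for relevance labels listed in ranked order (illustrative)."""
    gains = 2.0 ** np.asarray(ranked_labels[:k], dtype=float) - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float(np.sum(gains * discounts))
    ideal = np.asarray(sorted(ranked_labels, reverse=True)[:k], dtype=float)
    idcg = float(np.sum((2.0 ** ideal - 1.0) * discounts))
    return dcg / idcg if idcg > 0 else 0.0

def error(lmbda, dev_set):
    """Err = 1 - mean NDCG when candidates are ranked by the linear score."""
    errs = []
    for feats, labels in dev_set:      # feats: (n_cand, n_feat); labels: (n_cand,)
        order = np.argsort(-(feats @ lmbda))
        errs.append(1.0 - ndcg(list(labels[order])))
    return float(np.mean(errs))

def mert_like_search(dev_set, n_feat, grid=np.linspace(-1.0, 1.0, 21), rounds=5):
    """Greedy coordinate-wise grid search over one weight at a time."""
    lmbda = np.zeros(n_feat)
    for _ in range(rounds):
        for j in range(n_feat):
            candidates = [np.concatenate([lmbda[:j], [v], lmbda[j + 1:]]) for v in grid]
            lmbda = min(candidates, key=lambda lam: error(lam, dev_set))
    return lmbda

# Toy dev set: 3 queries, each with 5 paraphrase candidates and 4 features.
rng = np.random.default_rng(0)
dev = [(rng.normal(size=(5, 4)), rng.integers(0, 3, size=5)) for _ in range(3)]
print("tuned feature weights:", mert_like_search(dev, n_feat=4))
```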
feature weights is mentioned in 4 sentences in this paper.
Perez-Rosas, Veronica and Mihalcea, Rada and Morency, Louis-Philippe
Discussion
Feature Weights
Discussion
Figure 2: Visual and acoustic feature weights.
Discussion
To determine the role played by each of the visual and acoustic features, we compare the feature weights assigned by the learning algorithm, as shown in Figure 2.
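The comparison described above can be reproduced in spirit with any linear learner; the paper's exact classifier and feature set are not shown here, so the sketch below assumes a linear SVM (scikit-learn) over a few hypothetical visual/acoustic features and simply prints each feature's learned weight, sorted by magnitude.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical visual/acoustic features; not the paper's actual feature set.
feature_names = ["smile_intensity", "gaze_aversion", "pitch_mean", "voice_energy"]
rng = np.random.default_rng(42)
X = rng.normal(size=(200, len(feature_names)))   # placeholder feature matrix
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LinearSVC(C=1.0).fit(X, y)                 # linear model: one weight per feature
for name, w in sorted(zip(feature_names, clf.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>16s}: {w:+.3f}")
```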
feature weights is mentioned in 3 sentences in this paper.
Xiang, Bing and Luo, Xiaoqiang and Zhou, Bowen
Experimental Results
Next we extract phrase pairs, Hiero rules and tree-to-string rules from the original word alignment and the improved word alignment, and tune all the feature weights on the tuning set.
Integrating Empty Categories in Machine Translation
The feature weights can be tuned on a tuning set in a log-linear model along with other usual features/costs, including language model scores, bidirectional translation probabilities, etc.
Related Work
First, in addition to preprocessing the training data and inserting recovered empty categories, we implement sparse features to further boost performance, and tune the feature weights directly towards maximizing the machine translation metric.
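As a minimal illustration of the log-linear model described in these excerpts (the feature names, values, and weights below are hypothetical, not the authors' system), each translation hypothesis is scored as a weighted sum of log-domain feature values; it is exactly these weights that are tuned on the tuning set.

```python
import math

def loglinear_score(features, weights):
    """score(e|f) = sum_k w_k * h_k(e, f), with each h_k already in log space."""
    return sum(weights[k] * features[k] for k in features)

# Hypothetical feature values for one hypothesis.
hypothesis = {
    "lm": math.log(1e-4),          # language model log-probability
    "p_e_given_f": math.log(0.3),  # forward translation log-probability
    "p_f_given_e": math.log(0.2),  # backward translation log-probability
    "word_penalty": -5.0,          # length penalty feature
}
weights = {"lm": 0.5, "p_e_given_f": 0.2, "p_f_given_e": 0.2, "word_penalty": 0.1}
print("hypothesis score:", loglinear_score(hypothesis, weights))
```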
feature weights is mentioned in 3 sentences in this paper.