Abstract | The minimum Bayes risk (MBR) decoding objective improves BLEU scores for machine translation output relative to the standard Viterbi objective of maximizing model score.
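The MBR objective mentioned above can be illustrated over a small candidate list. This is a minimal sketch, not the paper's forest-based method: it assumes an explicit n-best list with posterior probabilities and uses a simple n-gram overlap count as a stand-in for the BLEU-style gain function.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def gain(hyp, ref, max_n=2):
    # Simple n-gram match count standing in for a sentence-level BLEU gain.
    return float(sum(sum((ngrams(hyp, n) & ngrams(ref, n)).values())
                     for n in range(1, max_n + 1)))

def mbr_decode(candidates, probs):
    # MBR picks the candidate maximizing expected gain under the model:
    #   e* = argmax_e  sum_{e'} p(e') * gain(e, e')
    # whereas Viterbi decoding would just pick argmax_e p(e).
    best, best_exp = None, float("-inf")
    for hyp in candidates:
        expected = sum(p * gain(hyp, ref) for ref, p in zip(candidates, probs))
        if expected > best_exp:
            best, best_exp = hyp, expected
    return best

cands = [["the", "cat", "sat"], ["a", "cat", "sat"], ["the", "dog", "sat"]]
probs = [0.5, 0.3, 0.2]
print(mbr_decode(cands, probs))  # → ['the', 'cat', 'sat']
```

Here the highest-probability candidate also wins MBR because it shares the most n-grams with the rest of the distribution; in general the two objectives can disagree.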
Computing Feature Expectations | Translation forests compactly encode an exponential number of output translations for an input sentence, along with their model scores.
Computing Feature Expectations | Decoder states can include additional information as well, such as local configurations for dependency language model scoring.
Computing Feature Expectations | The n-gram language model score of e similarly decomposes over the hyperedges h in e that produce n-grams.
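The decomposition above means the total LM score is a sum of local n-gram scores, so it can be accumulated piecewise as each hyperedge produces new n-grams. A toy sketch with a made-up bigram table (the probabilities and the unknown-bigram floor are assumptions, not from the source):

```python
import math

# Toy bigram log-probabilities; a real system would query a trained LM.
LOGP = {
    ("<s>", "the"): math.log(0.4),
    ("the", "cat"): math.log(0.3),
    ("cat", "sat"): math.log(0.5),
    ("sat", "</s>"): math.log(0.6),
}

def lm_score(tokens):
    # The sentence score decomposes into one local term per bigram,
    # which is what lets a forest accumulate it hyperedge by hyperedge.
    padded = ["<s>"] + tokens + ["</s>"]
    return sum(LOGP.get((a, b), math.log(1e-6))
               for a, b in zip(padded, padded[1:]))

print(round(lm_score(["the", "cat", "sat"]), 3))  # → -3.324
```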
Experimental Results | Figure 4: Three translations of an example Arabic sentence: its human-generated reference, the translation with the highest model score under Hiero (Viterbi), and the translation chosen by forest-based consensus decoding.
Abstract | This paper applies MST parsing to MT, and describes how it can be integrated into a phrase-based decoder to compute dependency language model scores.
Dependency parsing for machine translation | While loopy graphs may seem undesirable when the goal is to obtain a syntactic analysis, that is not necessarily the case when one only needs a language modeling score.
Machine translation experiments | We use the standard features implemented almost exactly as in Moses: four translation features (phrase-based translation probabilities and lexically-weighted probabilities), word penalty, phrase penalty, linear distortion, and language model score.
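In a Moses-style linear model, the features listed above are combined as a weighted sum. A minimal sketch, with made-up feature values and uniform weights (real weights would be tuned, e.g. by MERT):

```python
# Hypothetical log-domain feature values for one candidate translation;
# the names mirror the feature list above, the numbers are invented.
features = {
    "tm_fwd": -2.1, "tm_bwd": -1.8,      # phrase translation probabilities
    "lex_fwd": -3.0, "lex_bwd": -2.5,    # lexically-weighted probabilities
    "word_penalty": -3.0, "phrase_penalty": -2.0,
    "distortion": -1.0, "lm": -4.2,      # language model score
}
weights = {name: 1.0 for name in features}  # uniform here; tuned in practice

# The decoder ranks candidates by this single scalar model score.
model_score = sum(weights[name] * value for name, value in features.items())
print(model_score)
```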
Machine translation experiments | model score computed with the dependency parsing algorithm described in Section 2. |