Abstract | We present a method to transliterate names in the framework of end-to-end statistical machine translation.
End-to-End results | Finally, here are end-to-end machine translation results for three sentences, with and without the transliteration module, along with a human reference translation. |
Evaluation | In the results section of this paper, we will use the NEWA metric to measure and compare the accuracy of named-entity (NE) translations in our end-to-end SMT translations and four human reference translations.
Introduction | The task of transliterating names (independent of end-to-end MT) has received significant research attention, e.g., (Knight and Graehl, 1997; Chen et al., 1998; Al-Onaizan, 2002).
Introduction | Most of this work has been disconnected from end-to-end MT, a problem which we address head-on in this paper. |
Abstract | We present a generic phrase training algorithm which is parameterized with feature functions and can be optimized jointly with the translation engine to directly maximize the end-to-end system performance. |
Conclusions | It can be optimized jointly with the translation engine to directly maximize the end-to-end translation performance. |
Experimental Results | Since the translation engine implements a log-linear model, discriminative training of the decoder's feature weights should be embedded in the full end-to-end system, jointly with the discriminative phrase-table training process.
Abstract | This preference for sparse solutions together with effective pruning methods forms a phrase alignment regimen that produces better end-to-end translations than standard word alignment approaches. |
Experiments | 7.2 End-to-end Evaluation |
Experiments | Given an unlimited amount of time, we would tune the prior to maximize end-to-end performance, using an objective function such as BLEU. |