Abstract | Experiments on a Chinese-to-English translation task show that our proposed RZNN can outperform the state-of-the-art baseline by about 1.5 points in BLEU. |
Conclusion and Future Work | We conduct experiments on a Chinese-to-English translation task, and our method outperforms a state-of-the-art baseline by about 1.5 BLEU points. |
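Several of the extracted sentences report gains measured in BLEU. As a rough illustration only (this is a simplified, add-one-smoothed sentence-level variant, not the evaluation scripts used in these papers), the metric can be sketched as clipped n-gram precisions combined with a brevity penalty:

```python
import math
from collections import Counter

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU with add-one smoothing (illustrative)."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # clipped n-gram matches: min of hypothesis and reference counts
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty: penalize hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * geo_mean

hyp = "the cat is on the mat".split()
ref = "the cat sat on the mat".split()
print(round(100 * bleu(hyp, ref), 2))  # → 48.89
```

BLEU is conventionally reported on a 0–100 scale, which is why the papers above describe improvements as "1.5 points".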
Experiments and Results | In this section, we conduct experiments to test our method on a Chinese-to-English translation task. |
Experiments and Results | Moreover, the translation task differs from other NLP tasks in that it is more important to model the translation confidence directly (the confidence of one
Introduction | We conduct experiments on a Chinese-to-English translation task to test our proposed methods, and we obtain an improvement of about 1.5 BLEU points over a state-of-the-art baseline system. |
Abstract | The RNN-based model outperforms the feed-forward neural network-based model (Yang et al., 2013) as well as the IBM Model 4 on Japanese-English and French-English word alignment tasks, and achieves translation performance comparable to those baselines on Japanese-English and Chinese-English translation tasks. |
Introduction | This paper presents evaluations on Japanese-English and French-English word alignment tasks and Japanese-to-English and Chinese-to-English translation tasks. |
Introduction | For the translation tasks, our model achieves up to a 0.74% gain in BLEU as compared to the FFNN-based model, matching the translation quality of the IBM Model 4. |
Training | In addition, we evaluated the end-to-end translation performance on three tasks: a Chinese-to-English translation task with the FBIS corpus (FBIS), the IWSLT 2007 Japanese-to-English translation task (IWSLT) (Fordyce, 2007), and the NTCIR-9 Japanese-to-English patent translation task (NTCIR) (Goto et al., 2011). |
Training | In the translation tasks, we used the Moses phrase-based SMT system (Koehn et al., 2007). |
Abstract | Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. |
Experiments | We evaluate the performance of our neural network based topic similarity model on a Chinese-to-English machine translation task. |
Introduction | We integrate topic similarity features in the log-linear model and evaluate the performance on the NIST Chinese-to-English translation task. |
Abstract | We evaluate our model on a Chinese-to-English translation task and obtain up to 1.2 BLEU improvement over strong baselines. |
Conclusion | This paper contributes to the deeper integration of topic models into critical applications by presenting a new multilingual topic model, ptLDA, comparing it with other multilingual topic models on a machine translation task, and showing that these topic models improve machine translation. |
Inference | We explore multiple inference schemes because, while all of these methods optimize likelihood, they might give different results on the translation task. |
Abstract | We test the effectiveness of the proposed sense-based translation model on a large-scale Chinese-to-English translation task. |
Introduction | They show that such a reformulated WSD can improve the accuracy of a simplified word translation task. |
Introduction | Section 5 elaborates our experiments on the large-scale Chinese-to-English translation task. |
Crowdsourcing Translation | 52 different Turkers took part in the translation task, each translating 138 sentences on average. |
Evaluation | This suggests that both sources of information, the candidate itself and its authors, are important for the crowdsourcing translation task. |
Problem Formulation | The problem definition of the crowdsourcing translation task is straightforward: given a set of candidate translations for a source sentence, we want to choose the best output translation. |
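The selection problem formulated above amounts to scoring each candidate and returning the argmax. A hypothetical sketch, assuming a simple linear combination of a candidate-quality feature and an author-reliability feature (the feature names and weights are illustrative assumptions, not the paper's actual model):

```python
# Illustrative candidate selection for crowdsourced translations.
# Each candidate carries a quality score (from the translation itself)
# and an author score (reliability of the Turker who produced it);
# both sources of information feed the final decision.
def select_best(candidates, weight_quality=0.7, weight_author=0.3):
    """candidates: list of (translation, quality_score, author_score) tuples."""
    return max(candidates,
               key=lambda c: weight_quality * c[1] + weight_author * c[2])[0]

cands = [("the cat sat on the mat", 0.8, 0.6),
         ("cat sit mat", 0.3, 0.9),
         ("a cat sat on a mat", 0.7, 0.7)]
print(select_best(cands))  # → the cat sat on the mat
```

In practice the scores would come from trained feature functions rather than fixed constants, but the decision rule itself stays this simple: one argmax over the candidate set per source sentence.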