Abstract | Experimental results on the NIST MT-2003 Chinese-English translation task show that our method statistically significantly outperforms the four baseline systems.
Experiment | 1) FTS2S significantly outperforms (p<0.05) FT2S. |
Experiment | 3) Our model statistically significantly outperforms all the baseline systems.
Experiment | 4) All four syntax-based systems show better performance than Moses, and three of them significantly outperform (p<0.05) Moses.
Introduction | Experimental results show that our method significantly outperforms the two individual methods and other baseline methods. |
Experiments | As shown in the table, the multi-parameter model improves by approximately 18% and 12% on the TREC-6 and TREC-7 partial query sets, and it also significantly outperforms both the word model and the one-parameter model on the TREC-8 query set.
Experiments | For both opposite cases, the multi-parameter model significantly outperforms the one-parameter model.
Experiments | Note that the multi-parameter model significantly outperforms the one-parameter model and all manually-set As for the queries 'declining birth rate' and 'Amazon rain forest', which also has one effective phrase, 'rain forest', and one noneffective phrase, 'Amazon forest'.
Conclusion | Experiments show that our model achieves substantial improvements over the baseline and significantly outperforms Marton and Resnik's (2008) XP+.
Experiments | The binary SDB (BiSDB) model statistically significantly outperforms Marton and Resnik's XP+ by an absolute improvement of 0.59 (2% relative).
Introduction | Our experimental results show that our SDB model achieves a substantial improvement over the baseline and significantly outperforms XP+ according to the BLEU metric (Papineni et al., 2002).