Index of papers in Proc. ACL 2013 that mention
  • translation tasks
Liu, Lemao and Watanabe, Taro and Sumita, Eiichiro and Zhao, Tiejun
Abstract
Our model outperforms the log-linear translation models with/without embedding features on Chinese-to-English and Japanese-to-English translation tasks.
Introduction
On both Chinese-to-English and Japanese-to-English translation tasks, experimental results show that our model can overcome the shortcomings of the log-linear model, and thus achieves significant improvements over log-linear based translation.
Introduction
We conduct our experiments on the Chinese-to-English and Japanese-to-English translation tasks.
Introduction
Although there are serious overlaps between h and h’ for AdNN-Hiero-D, which may limit its generalization abilities, as shown in Table 3 it is still comparable to L-Hiero on the Japanese-to-English task, and significantly outperforms L-Hiero on the Chinese-to-English translation task.
“translation tasks” is mentioned in 9 sentences in this paper.
Cohn, Trevor and Haffari, Gholamreza
Analysis
Our experiments on Urdu-English, Arabic-English, and Farsi-English translation tasks all demonstrate improvements over competitive baseline systems.
Experiments
The corpora statistics of these translation tasks are summarised in Table 2.
Experiments
The time complexity of our inference algorithm is O(n^6), which can be prohibitive for large-scale machine translation tasks.
Experiments
Table 3 shows the BLEU scores for the three translation tasks UR/AR/FA → EN based on our method against the baselines.
Introduction
Moreover, our approach results in consistent translation improvements across a number of translation tasks compared to Neubig et al.’s method and a competitive phrase-based baseline.
“translation tasks” is mentioned in 5 sentences in this paper.
Braune, Fabienne and Seemann, Nina and Quernheim, Daniel and Maletti, Andreas
Abstract
We perform a large-scale empirical evaluation of our obtained system, which demonstrates that we significantly beat a realistic tree-to-tree baseline on the WMT 2009 English → German translation task.
Conclusion and Future Work
We demonstrated that our ℓMBOT-based machine translation system beats a standard tree-to-tree system (Moses tree-to-tree) on the WMT 2009 translation task English → German.
Experiments
The compared systems are evaluated on the English-to-German news translation task of WMT 2009 (Callison-Burch et al., 2009).
Introduction
We evaluate our new system on the WMT 2009 shared translation task English → German.
“translation tasks” is mentioned in 4 sentences in this paper.
Setiawan, Hendra and Zhou, Bowen and Xiang, Bing and Shen, Libin
Abstract
We integrate our proposed model into a state-of-the-art string-to-dependency translation system and demonstrate the efficacy of our proposal in a large-scale Chinese-to-English translation task.
Conclusion
In a large-scale Chinese-to-English translation task, we achieve a significant improvement over a strong baseline.
Introduction
We show the efficacy of our proposal in a large-scale Chinese-to-English translation task where the introduction of our TNO model provides a significant gain over a state-of-the-art string-to-dependency SMT system (Shen et al., 2008) that we enhance with additional state-of-the-art features.
Maximal Orientation Span
Here, we would like to point out that even in this simple example, where all local decisions are made accurately, this ambiguity occurs, and it would occur even more in a real translation task where local decisions may be highly inaccurate.
“translation tasks” is mentioned in 4 sentences in this paper.
Feng, Minwei and Peter, Jan-Thorsten and Ney, Hermann
Comparative Study
Wang et al. (2007) present a pre-reordering method for the Chinese-English translation task.
Conclusion
The CRFs achieve a lower error rate on the tagging task, but the RNN-trained model is better for the translation task.
Conclusion
However, the tree-based jump model relies on manually designed reordering rules, which do not exist for many language pairs, while our model can be easily adapted to other translation tasks.
“translation tasks” is mentioned in 3 sentences in this paper.
Kauchak, David
Abstract
Unlike some text-to-text translation tasks, text simplification is a monolingual translation task allowing for text in both the input and output domain to be used for training the language model.
Introduction
Text compression, text simplification, and summarization can be viewed as monolingual translation tasks, translating between text variations within a single language.
Introduction
This is not the case for all monolingual translation tasks (Knight and Marcu, 2002; Cohn and Lapata, 2009).
“translation tasks” is mentioned in 3 sentences in this paper.