Index of papers in Proc. ACL 2009 that mention
  • translation task
Haffari, Gholamreza and Sarkar, Anoop
AL-SMT: Multilingual Setting
The nonnegative weights α_d reflect the importance of the different translation tasks, and Σ_d α_d = 1. The AL-SMT formulation for a single language pair is a special case of this formulation where only one of the α_d's in the objective function (1) is one and the rest are zero.
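For orientation, here is a minimal sketch, in LaTeX notation, of what such a weighted multi-task formulation looks like; the per-task objective O_d and the parameters θ are hypothetical placeholders, not reproduced from the paper's actual objective (1):

    \max_{\theta} \; \sum_{d=1}^{D} \alpha_d \, O_d(\theta)
    \quad \text{subject to} \quad \alpha_d \ge 0, \;\; \sum_{d=1}^{D} \alpha_d = 1

Setting a single α_d to one and the rest to zero recovers the single-language-pair case described in the excerpt above.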
Experiments
For the multilingual experiments (which involve four source languages) we set α_d = 0.25 to make the importance of the individual translation tasks equal.
Experiments
Having noticed that Model 1 with ELPR performs well in the single language pair setting, we use it to rank entries for individual translation tasks.
Experiments
Figure 5: The log-log Zipf plots representing the true and estimated probabilities of a (source) phrase vs the rank of that phrase in the German to English translation task.
Introduction
In our case, the multiple tasks are individual machine translation tasks for several language pairs.
Sentence Selection: Multiple Language Pairs
Using this method, we rank the entries in the unlabeled data U for each translation task defined by a language pair (F_d, E). This results in several ranking lists, each of which represents the importance of entries with respect to a particular translation task.
Sentence Selection: Multiple Language Pairs
is the ranking of a sentence in the list for the dth translation task (Reichart et al., 2008).
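A minimal Python sketch of the selection scheme these excerpts describe, combining per-task ranking lists with the task weights α_d; the weighted-sum combination rule and all names below are illustrative assumptions, not the paper's exact formulation:

    # Hedged sketch: combine per-task rankings of unlabeled sentences.
    # rankings[d][s] is the rank of sentence s for the d-th translation
    # task (1 = most important); alphas[d] is that task's weight.
    def combined_rank(sentence, rankings, alphas):
        """Weighted sum of a sentence's per-task ranks (lower is better)."""
        return sum(a * ranks[sentence] for a, ranks in zip(alphas, rankings))

    def select_batch(unlabeled, rankings, alphas, batch_size):
        """Pick the sentences whose combined rank is smallest."""
        return sorted(unlabeled,
                      key=lambda s: combined_rank(s, rankings, alphas))[:batch_size]

    # Four source languages weighted equally (alpha_d = 0.25), as in the
    # multilingual experiments quoted above.
    if __name__ == "__main__":
        unlabeled = ["sent_a", "sent_b", "sent_c"]
        rankings = [{"sent_a": 1, "sent_b": 3, "sent_c": 2},   # task 1
                    {"sent_a": 2, "sent_b": 1, "sent_c": 3},   # task 2
                    {"sent_a": 3, "sent_b": 2, "sent_c": 1},   # task 3
                    {"sent_a": 1, "sent_b": 2, "sent_c": 3}]   # task 4
        alphas = [0.25, 0.25, 0.25, 0.25]
        print(select_batch(unlabeled, rankings, alphas, batch_size=2))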
translation task is mentioned in 7 sentences in this paper.
Li, Mu and Duan, Nan and Zhang, Dongdong and Li, Chi-Ho and Zhou, Ming
Abstract
Experimental results on data sets for the NIST Chinese-to-English machine translation task show that the co-decoding method can bring significant improvements to all baseline decoders, and the outputs from co-decoding can be used to further improve the result of system combination.
Experiments
We conduct our experiments on the test data from the NIST 2005 and NIST 2008 Chinese-to-English machine translation tasks.
Experiments
We use the parallel data available for the NIST 2008 constrained track of the Chinese-to-English machine translation task as bilingual training data, which contains 5.1M sentence pairs, 128M Chinese words and 147M English words after preprocessing.
Experiments
We run two iterations of decoding for each member decoder, and hold the parameter a in Equation 5 constant at 0.05, which is tuned on the test data of the NIST 2004 Chinese-to-English machine translation task.
Introduction
We will present experimental results on the data sets of the NIST Chinese-to-English machine translation task, and demonstrate that co-decoding can bring significant improvements to baseline systems.
translation task is mentioned in 5 sentences in this paper.
Zhang, Hui and Zhang, Min and Li, Haizhou and Aw, Aiti and Tan, Chew Lim
Abstract
Experimental results on the NIST MT-2003 Chinese-English translation task show that our method statistically significantly outperforms the four baseline systems.
Conclusion
Finally, we examine our methods on the FBIS corpus and the NIST MT-2003 Chinese-English translation task.
Experiment
We evaluate our method on the Chinese-English translation task.
Introduction
We evaluate our method on the NIST MT-2003 Chinese-English translation tasks.
translation task is mentioned in 4 sentences in this paper.
Sun, Jun and Zhang, Min and Tan, Chew Lim
Abstract
Experimental results on the NIST MT-05 Chinese-English translation task show that the proposed model statistically significantly outperforms the baseline systems.
Conclusions and Future Work
We also find that in the Chinese-English translation task, gaps are more effective on the Chinese side than on the English side.
Experiments
Exp 6&7 as well as Exp 3&4 show that non-contiguity on the target side in the Chinese-English translation task is not as useful as that on the source side when constructing the non-contiguous phrasal rules.
translation task is mentioned in 3 sentences in this paper.
Zaslavskiy, Mikhail and Dymetman, Marc and Cancedda, Nicola
Experiments
But in this particular “translation task” from bad to good English, we consider that all “biphrases” are of the form e – e, where e is an English word, and we do not take into account any distortion: we only consider the quality of the permutation as it is measured by the LM component.
Experiments
In this section we consider two real translation tasks, namely translation from English to French, trained on Europarl (Koehn et al., 2003), and translation from German to Spanish, trained on the NewsCommentary corpus.
Experiments
Since in the real translation task, the size of the TSP graph is much larger than in the artificial reordering task (in our experiments the median size of the TSP graph was around 400 nodes, sometimes growing up to 2000 nodes), directly applying the exact TSP solver would take too long; instead we use the approximate LK algorithm and compare it to Beam-Search.
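To make the exact-versus-approximate trade-off concrete at the graph sizes quoted above, here is a minimal, generic 2-opt local-search sketch in Python. It is not the Lin-Kernighan (LK) implementation the paper uses (LK performs deeper, variable-depth edge exchanges), and the random Euclidean instance is purely illustrative:

    # Hedged sketch: a tiny 2-opt tour-improvement loop, shown only to
    # illustrate approximate TSP solving; NOT the paper's LK solver.
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def two_opt(tour, dist):
        """Reverse segments as long as doing so strictly shortens the tour."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(candidate, dist) < tour_length(tour, dist):
                        tour, improved = candidate, True
        return tour

    if __name__ == "__main__":
        random.seed(0)
        pts = [(random.random(), random.random()) for _ in range(30)]
        dist = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts]
                for xa, ya in pts]
        tour = list(range(len(pts)))
        print("initial tour length:", round(tour_length(tour, dist), 3))
        print("after 2-opt:        ", round(tour_length(two_opt(tour, dist), dist), 3))

Beam-Search, the comparison point in the excerpt, instead builds the permutation incrementally from left to right rather than refining a complete tour.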
translation task is mentioned in 3 sentences in this paper.