Index of papers in Proc. ACL that mention
  • translation task
Liu, Lemao and Watanabe, Taro and Sumita, Eiichiro and Zhao, Tiejun
Abstract
Our model outperforms the log-linear translation models with/without embedding features on Chinese-to-English and Japanese-to-English translation tasks.
Introduction
On both Chinese-to-English and Japanese-to-English translation tasks, experiment results show that our model can alleviate the shortcomings suffered by the log-linear model, and thus achieves significant improvements over the log-linear based translation.
Introduction
We conduct our experiments on the Chinese-to-English and Japanese-to-English translation tasks.
Introduction
Although there are serious overlaps between h and h’ for AdNN-Hiero-D which may limit its generalization abilities, as shown in Table 3, it is still comparable to L-Hiero on the Japanese-to-English task, and significantly outperforms L-Hiero on the Chinese-to-English translation task.
translation task is mentioned in 9 sentences in this paper.
Haffari, Gholamreza and Sarkar, Anoop
AL-SMT: Multilingual Setting
The nonnegative weights αd reflect the importance of the different translation tasks, and Σd αd = 1. The AL-SMT formulation for a single language pair is a special case of this formulation where only one of the αd’s in the objective function (1) is one and the rest are zero.
Experiments
For the multilingual experiments (which involve four source languages) we set αd = 0.25 to make the importance of individual translation tasks equal.
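As an aside, the weighted multi-task objective described in the two excerpts above (nonnegative weights αd over the translation tasks, summing to one, with αd = 0.25 for four equally weighted source languages) can be sketched as follows; the per-task loss values are invented placeholders for illustration only:

```python
def combined_objective(task_losses, weights):
    """Weighted sum over tasks; weights must be nonnegative and sum to 1."""
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * l for w, l in zip(weights, task_losses))

# Four source languages weighted equally, as in the multilingual experiments.
weights = [0.25, 0.25, 0.25, 0.25]
losses = [0.8, 1.2, 0.9, 1.1]  # hypothetical per-task losses
print(combined_objective(losses, weights))
```

Setting a single weight to one recovers the single-language-pair special case mentioned in the quote.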
Experiments
Having noticed that Model 1 with ELPR performs well in the single language pair setting, we use it to rank entries for individual translation tasks.
Experiments
Figure 5: The log-log Zipf plots representing the true and estimated probabilities of a (source) phrase vs the rank of that phrase in the German to English translation task.
Introduction
In our case, the multiple tasks are individual machine translation tasks for several language pairs.
Sentence Selection: Multiple Language Pairs
Using this method, we rank the entries in unlabeled data U for each translation task defined by a language pair (Fd, E). This results in several ranking lists, each of which represents the importance of entries with respect to a particular translation task.
Sentence Selection: Multiple Language Pairs
is the ranking of a sentence in the list for the dth translation task (Reichart et al., 2008).
translation task is mentioned in 7 sentences in this paper.
Li, Mu and Duan, Nan and Zhang, Dongdong and Li, Chi-Ho and Zhou, Ming
Abstract
Experimental results on data sets for the NIST Chinese-to-English machine translation task show that the co-decoding method can bring significant improvements to all baseline decoders, and the outputs from co-decoding can be used to further improve the result of system combination.
Experiments
We conduct our experiments on the test data from the NIST 2005 and NIST 2008 Chinese-to-English machine translation tasks.
Experiments
We use the parallel data available for the NIST 2008 constrained track of Chinese-to-English machine translation task as bilingual training data, which contains 5.1M sentence pairs, 128M Chinese words and 147M English words after preprocessing.
Experiments
We run two iterations of decoding for each member decoder, and hold the value of a in Equation 5 as a constant 0.05, which is tuned on the test data of the NIST 2004 Chinese-to-English machine translation task.
Introduction
We will present experimental results on the data sets of the NIST Chinese-to-English machine translation task, and demonstrate that co-decoding can bring significant improvements to baseline systems.
translation task is mentioned in 5 sentences in this paper.
Cohn, Trevor and Haffari, Gholamreza
Analysis
Our experiments on Urdu-English, Arabic-English, and Farsi-English translation tasks all demonstrate improvements over competitive baseline systems.
Experiments
The corpora statistics of these translation tasks are summarised in Table 2.
Experiments
The time complexity of our inference algorithm is O(n^6), which can be prohibitive for large-scale machine translation tasks.
Experiments
Table 3 shows the BLEU scores for the three translation tasks UR/AR/FA→EN based on our method against the baselines.
Introduction
Moreover our approach results in consistent translation improvements across a number of translation tasks compared to Neubig et al.’s method, and a competitive phrase-based baseline.
translation task is mentioned in 5 sentences in this paper.
Tamura, Akihiro and Watanabe, Taro and Sumita, Eiichiro
Abstract
The RNN-based model outperforms the feed-forward neural network-based model (Yang et al., 2013) as well as the IBM Model 4 under Japanese-English and French-English word alignment tasks, and achieves comparable translation performance to those baselines for Japanese-English and Chinese-English translation tasks.
Introduction
This paper presents evaluations of Japanese-English and French-English word alignment tasks and Japanese-to-English and Chinese-to-English translation tasks.
Introduction
For the translation tasks, our model achieves up to 0.74% gain in BLEU as compared to the FFNN-based model, which matches the translation qualities of the IBM Model 4.
Training
In addition, we evaluated the end-to-end translation performance of three tasks: a Chinese-to-English translation task with the FBIS corpus (FBIS), the IWSLT 2007 Japanese-to-English translation task (IWSLT) (Fordyce, 2007), and the NTCIR-9 Japanese-to-English patent translation task (NTCIR) (Goto et al., 2011).
Training
In the translation tasks, we used the Moses phrase-based SMT systems (Koehn et al., 2007).
translation task is mentioned in 5 sentences in this paper.
Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming
Abstract
Experiments on a Chinese-to-English translation task show that our proposed R2NN can outperform the state-of-the-art baseline by about 1.5 points in BLEU.
Conclusion and Future Work
We conduct experiments on a Chinese-to-English translation task, and our method outperforms a state-of-the-art baseline by about 1.5 BLEU points.
Experiments and Results
In this section, we conduct experiments to test our method on a Chinese-to-English translation task.
Experiments and Results
Also, the translation task is different from other NLP tasks in that it is more important to model the translation confidence directly (the confidence of one
Introduction
We conduct experiments on a Chinese-to-English translation task to test our proposed methods, and we get an improvement of about 1.5 BLEU points compared with a state-of-the-art baseline system.
translation task is mentioned in 5 sentences in this paper.
He, Xiaodong and Deng, Li
Abstract
Our experimental results on this open-domain spoken language translation task show that the proposed method leads to significant translation performance improvement over a state-of-the-art baseline, and the system using the proposed method achieved the best single system translation result in the Chinese-to-English MT track.
Abstract
In the Chinese-to-English translation task, we are provided with human-translated Chinese text with punctuation inserted.
Abstract
This is an open-domain spoken language translation task .
translation task is mentioned in 5 sentences in this paper.
Zhang, Hui and Zhang, Min and Li, Haizhou and Aw, Aiti and Tan, Chew Lim
Abstract
Experimental results on the NIST MT-2003 Chinese-English translation task show that our method statistically significantly outperforms the four baseline systems.
Conclusion
Finally, we examine our methods on the FBIS corpus and the NIST MT-2003 Chinese-English translation task.
Experiment
We evaluate our method on the Chinese-English translation task.
Introduction
We evaluate our method on the NIST MT-2003 Chinese-English translation task.
translation task is mentioned in 4 sentences in this paper.
Xiong, Deyi and Zhang, Min and Li, Haizhou
SMT System
The translation task is on the official NIST Chinese-to-English evaluation data.
SMT System
Table 2 shows the corpora that we use for the translation task.
SMT System
Table 2: Training corpora for the translation task.
translation task is mentioned in 4 sentences in this paper.
Braune, Fabienne and Seemann, Nina and Quernheim, Daniel and Maletti, Andreas
Abstract
We perform a large-scale empirical evaluation of our obtained system, which demonstrates that we significantly beat a realistic tree-to-tree baseline on the WMT 2009 English → German translation task.
Conclusion and Future Work
We demonstrated that our ℓMBOT-based machine translation system beats a standard tree-to-tree system (Moses tree-to-tree) on the WMT 2009 English → German translation task.
Experiments
The compared systems are evaluated on the English-to-German news translation task of WMT 2009 (Callison-Burch et al., 2009).
Introduction
We evaluate our new system on the WMT 2009 shared translation task English → German.
translation task is mentioned in 4 sentences in this paper.
Setiawan, Hendra and Zhou, Bowen and Xiang, Bing and Shen, Libin
Abstract
We integrate our proposed model into a state-of-the-art string-to-dependency translation system and demonstrate the efficacy of our proposal in a large-scale Chinese-to-English translation task.
Conclusion
In a large-scale Chinese-to-English translation task, we achieve a significant improvement over a strong baseline.
Introduction
We show the efficacy of our proposal in a large-scale Chinese-to-English translation task where the introduction of our TNO model provides a significant gain over a state-of-the-art string-to-dependency SMT system (Shen et al., 2008) that we enhance with additional state-of-the-art features.
Maximal Orientation Span
Here, we would like to point out that even in this simple example, where all local decisions are made accurately, this ambiguity occurs, and it would occur even more in the real translation task, where local decisions may be highly inaccurate.
translation task is mentioned in 4 sentences in this paper.
Zollmann, Andreas and Vogel, Stephan
Abstract
Our models improve translation quality over the single generic label approach of Chiang (2005) and perform on par with the syntactically motivated approach from Zollmann and Venugopal (2006) on the NIST large Chinese-to-English translation task.
Conclusion and discussion
Evaluated on a Chinese-to-English translation task, our approach improves translation quality over a popular PSCFG baseline, the hierarchical model of Chiang (2005), and performs on par
Experiments
We evaluate our approach by comparing translation quality, as evaluated by the IBM-BLEU (Papineni et al., 2002) metric on the NIST Chinese-to-English translation task using MT04 as development set to train the model parameters A, and MT05, MT06 and MT08 as test sets.
Introduction
Since the number of classes is a parameter of the clustering method and the resulting nonterminal size of our grammar is a function of the number of word classes, the PSCFG grammar complexity can be adjusted to the specific translation task at hand.
translation task is mentioned in 4 sentences in this paper.
Uszkoreit, Jakob and Brants, Thorsten
Conclusion
The experiments presented show that predictive class-based models trained using the obtained word classifications can improve the quality of a state-of-the-art machine translation system as indicated by the BLEU score in both translation tasks.
Experiments
Instead we report BLEU scores (Papineni et al., 2002) of the machine translation system using different combinations of word- and class-based models for translation tasks from English to Arabic and Arabic to English.
Experiments
Table 1 shows the BLEU scores reached by the translation system when combining the different class-based models with the word-based model in comparison to the BLEU scores by a system using only the word-based model on the Arabic-English translation task.
Experiments
For our experiment with the English-Arabic translation task we trained two 5-gram predictive class-based models with 512 clusters on the Arabic ar_gigaword and ar_webnews data sets.
translation task is mentioned in 4 sentences in this paper.
Hu, Yuening and Zhai, Ke and Eidelman, Vladimir and Boyd-Graber, Jordan
Abstract
We evaluate our model on a Chinese-to-English translation task and obtain up to a 1.2 BLEU improvement over strong baselines.
Conclusion
This paper contributes to the deeper integration of topic models into critical applications by presenting a new multilingual topic model, ptLDA, comparing it with other multilingual topic models on a machine translation task, and showing that these topic models improve machine translation.
Inference
We explore multiple inference schemes because, while all of these methods optimize likelihood, they might give different results on the translation task.
translation task is mentioned in 3 sentences in this paper.
Cui, Lei and Zhang, Dongdong and Liu, Shujie and Chen, Qiming and Li, Mu and Zhou, Ming and Yang, Muyun
Abstract
Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Experiments
We evaluate the performance of our neural network based topic similarity model on a Chinese-to-English machine translation task .
Introduction
We integrate topic similarity features in the log-linear model and evaluate the performance on the NIST Chinese-to-English translation task .
translation task is mentioned in 3 sentences in this paper.
Xiong, Deyi and Zhang, Min and Li, Haizhou
Abstract
The two models are integrated into a state-of-the-art phrase-based machine translation system and evaluated on Chinese-to-English translation tasks with large-scale training data.
Conclusions and Future Work
The two models have been integrated into a phrase-based SMT system and evaluated on Chinese-to-English translation tasks using large-scale training data.
Experiments
In this section, we present our experiments on Chinese-to-English translation tasks, which are trained with large-scale data.
translation task is mentioned in 3 sentences in this paper.
Xiong, Deyi and Zhang, Min
Abstract
We test the effectiveness of the proposed sense-based translation model on a large-scale Chinese-to-English translation task.
Introduction
They show that such a reformulated WSD can improve the accuracy of a simplified word translation task .
Introduction
Section 5 elaborates our experiments on the large-scale Chinese-to-English translation task.
translation task is mentioned in 3 sentences in this paper.
Yan, Rui and Gao, Mingkun and Pavlick, Ellie and Callison-Burch, Chris
Crowdsourcing Translation
52 different Turkers took part in the translation task, each translating 138 sentences on average.
Evaluation
This suggests that both sources of information, the candidate itself and its authors, are important for the crowdsourcing translation task.
Problem Formulation
The problem definition of the crowdsourcing translation task is straightforward: given a set of candidate translations for a source sentence, we want to choose the best output translation.
translation task is mentioned in 3 sentences in this paper.
Kauchak, David
Abstract
Unlike some text-to-text translation tasks, text simplification is a monolingual translation task allowing for text in both the input and output domain to be used for training the language model.
Introduction
text compression, text simplification and summarization) can be viewed as monolingual translation tasks, translating between text variations within a single language.
Introduction
This is not the case for all monolingual translation tasks (Knight and Marcu, 2002; Cohn and Lapata, 2009).
translation task is mentioned in 3 sentences in this paper.
Feng, Minwei and Peter, Jan-Thorsten and Ney, Hermann
Comparative Study
(Wang et al., 2007) present a pre-reordering method for the Chinese-English translation task.
Conclusion
The CRF achieves a lower error rate on the tagging task, but the RNN-trained model is better for the translation task.
Conclusion
However, the tree-based jump model relies on manually designed reordering rules, which do not exist for many language pairs, while our model can be easily adapted to other translation tasks.
translation task is mentioned in 3 sentences in this paper.
Yang, Nan and Li, Mu and Zhang, Dongdong and Yu, Nenghai
Abstract
We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrase-based SMT system.
Experiments
To test our ranking reorder model, we carry out experiments on large-scale English-to-Japanese and Japanese-to-English translation tasks.
Introduction
We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
translation task is mentioned in 3 sentences in this paper.
Neubig, Graham and Watanabe, Taro and Sumita, Eiichiro and Mori, Shinsuke and Kawahara, Tatsuya
Abstract
This allows for a completely probabilistic model that is able to create a phrase table that achieves competitive accuracy on phrase-based machine translation tasks directly from unaligned sentence pairs.
Experimental Evaluation
We evaluate the proposed method on translation tasks from four languages, French, German, Spanish, and Japanese, into English.
Experimental Evaluation
For Japanese, we use data from the NTCIR patent translation task (Fujii et al., 2008).
translation task is mentioned in 3 sentences in this paper.
Clifton, Ann and Sarkar, Anoop
Abstract
We show, using both automatic evaluation scores and linguistically motivated analyses of the output, that our methods outperform previously proposed ones and provide the best known results on the English-Finnish Europarl translation task.
Conclusion and Future Work
Using our proposed approach we obtain better scores than the state of the art on the English-Finnish translation task (Luong et al., 2010): from 14.82% BLEU to 15.09%, while using a
Translation and Morphology
Both of these approaches beat the state of the art on the English-Finnish translation task .
translation task is mentioned in 3 sentences in this paper.
Yamangil, Elif and Shieber, Stuart M.
Abstract
We describe our experiments with training algorithms for tree-to-tree synchronous tree-substitution grammar (STSG) for monolingual translation tasks such as sentence compression and paraphrasing.
Abstract
These translation tasks are characterized by the relative ability to commit to parallel parse trees and availability of word alignments, yet the unavailability of large-scale data, calling for a Bayesian tree-to-tree formalism.
Conclusion
Overall, we take these results as being encouraging for STSG induction via Bayesian nonparametrics for monolingual translation tasks.
translation task is mentioned in 3 sentences in this paper.
Zaslavskiy, Mikhail and Dymetman, Marc and Cancedda, Nicola
Experiments
But in this particular “translation task” from bad to good English, we consider that all “biphrases” are of the form e - e, where e is an English word, and we do not take into account any distortion: we only consider the quality of the permutation as it is measured by the LM component.
Experiments
In this section we consider two real translation tasks, namely translation from English to French, trained on Europarl (Koehn et al., 2003), and translation from German to Spanish, trained on the NewsCommentary corpus.
Experiments
Since in the real translation task the size of the TSP graph is much larger than in the artificial reordering task (in our experiments the median size of the TSP graph was around 400 nodes, sometimes growing up to 2000 nodes), directly applying the exact TSP solver would take too long; instead we use the approximate LK algorithm and compare it to Beam-Search.
translation task is mentioned in 3 sentences in this paper.
Sun, Jun and Zhang, Min and Tan, Chew Lim
Abstract
Experimental results on the NIST MT-05 Chinese-English translation task show that the proposed model statistically significantly outperforms the baseline systems.
Conclusions and Future Work
We also find that in the Chinese-English translation task, gaps are more effective on the Chinese side than on the English side.
Experiments
Exp 6&7 as well as Exp 3&4 show that non-contiguity on the target side in the Chinese-English translation task is not as useful as that on the source side when constructing the noncontiguous phrasal rules.
translation task is mentioned in 3 sentences in this paper.
Zhang, Min and Jiang, Hongfei and Aw, Aiti and Li, Haizhou and Tan, Chew Lim and Li, Sheng
Abstract
Experimental results on the NIST MT-2005 Chinese-English translation task show that our method statistically significantly outperforms the baseline systems.
Conclusions and Future Work
The experimental results on the NIST MT-2005 Chinese-English translation task demonstrate the effectiveness of the proposed model.
Introduction
Experiment results on the NIST MT-2005 Chinese-English translation task show that our method significantly outperforms Moses (Koehn et al., 2007), a state-of-the-art phrase-based SMT system, and other linguistically syntax-based methods, such as SCFG-based and STSG-based methods (Zhang et al., 2007).
translation task is mentioned in 3 sentences in this paper.