Abstract | Our models improve translation quality over the single generic label approach of Chiang (2005) and perform on par with the syntactically motivated approach of Zollmann and Venugopal (2006) on the NIST large Chinese-to-English translation task.
Conclusion and discussion | Evaluated on a Chinese-to-English translation task, our approach improves translation quality over a popular PSCFG baseline, the hierarchical model of Chiang (2005), and performs on par with the syntactically motivated approach of Zollmann and Venugopal (2006).
Experiments | We evaluate our approach by comparing translation quality, as measured by the IBM-BLEU (Papineni et al., 2002) metric, on the NIST Chinese-to-English translation task, using MT04 as the development set to train the model parameters λ, and MT05, MT06, and MT08 as test sets.
Experiments | In line with previous findings for syntax-augmented grammars (Zollmann and Vogel, 2010), the source-side-based grammar does not reach the translation quality of its target-based counterpart; however, the model still outperforms the hierarchical baseline.
Experiments | (2008), the impact of these rules on translation quality is negligible. |
Introduction | Label-based approaches have resulted in improvements in translation quality over the single X label approach (Zollmann et al., 2008; Mi and Huang, 2008); however, all the works cited here rely on stochastic parsers that have been trained on manually created syntactic treebanks. |
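To make the contrast between the single X label and label-based grammars concrete, the sketch below writes PSCFG rules as (lhs, source_rhs, target_rhs) triples; the Chinese-English rules and category labels are invented for exposition and are not taken from any of the cited papers.

```python
# Illustrative only: PSCFG rules as (lhs, source_rhs, target_rhs) triples.
# Chiang (2005): a single generic nonterminal "X" labels every gap.
hiero_rule = ("X", ["X1", "de", "X2"], ["X2", "of", "X1"])

# Label-based variant (in the spirit of Zollmann and Venugopal, 2006):
# the same rule with gaps labeled by target-side parse constituents.
labeled_rule = ("NP", ["NP1", "de", "NN2"], ["NN2", "of", "NP1"])
```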
Conclusions | by interpolating them with less sparse ones, could in the future lead to an additional increase in translation quality.
Experiments | These extra features assess translation quality beyond the synchronous grammar derivation, learning general reordering and word-emission preferences for the language pair.
Introduction | By advancing from structures that mimic linguistic syntax to linguistically aware latent recursive structures learned for translation, we achieve significant improvements in translation quality on four different language pairs over a strong hierarchical translation baseline.
Related Work | Early work in statistical phrase-based translation considered whether restricting translation models to syntactically well-formed constituents might improve translation quality (Koehn et al., 2003), but found that such restrictions failed to improve it.
Related Work | Translation quality improved significantly on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.
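The interpolation in question is, in its simplest form, a linear mixture of the two models' word probabilities. The sketch below assumes hypothetical scoring functions p_ngram and p_syntax (not taken from any cited system); the weight lam would typically be tuned to minimize held-out perplexity.

```python
# A minimal sketch of linear LM interpolation:
#   P(w | h) = lam * P_ngram(w | h) + (1 - lam) * P_syntax(w | h)
# p_ngram and p_syntax are hypothetical callables returning the
# probability of a word given its history.
def interpolated_prob(word, history, p_ngram, p_syntax, lam=0.7):
    return lam * p_ngram(word, history) + (1 - lam) * p_syntax(word, history)
```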
Related Work | By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline with comparable runtime can achieve comparable translation quality.
Abstract | We propose a novel technique for learning how to transform source parse trees to improve the translation quality of syntax-based translation models using synchronous context-free grammars.
Experiments | We use BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) to evaluate translation quality.
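For reference, both metrics can be computed at corpus level with the sacrebleu library; the sketch below uses invented sentences, and sacrebleu's BLEU is the standard sacrebleu implementation, which may differ in detail from the IBM-BLEU variant reported above.

```python
# A minimal sketch of corpus-level scoring with sacrebleu
# (pip install sacrebleu); hypotheses and references are invented.
from sacrebleu.metrics import BLEU, TER

hypotheses = ["the cat sat on the mat", "he went to the market yesterday"]
references = [["the cat is on the mat", "he went to the market yesterday"]]

print(BLEU().corpus_score(hypotheses, references))  # higher is better
print(TER().corpus_score(hypotheses, references))   # lower is better
```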
Introduction | We integrate the neighboring context of the subgraph into our transformation preference predictions, and this improves translation quality further.
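As one concrete instance of a source-tree transformation, the sketch below applies binarization with NLTK; it is illustrative only, since the approach described above learns which transformations to prefer, which this sketch does not attempt.

```python
# A minimal sketch of one kind of source-tree transformation, using NLTK.
from nltk import Tree

tree = Tree.fromstring(
    "(VP (VB saw) (NP (DT the) (NN cat))"
    " (PP (IN with) (NP (DT a) (NN telescope))))"
)
tree.chomsky_normal_form()  # binarize in place: each node keeps <= 2 children
tree.pretty_print()
```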