Index of papers in Proc. ACL 2008 that mention
  • translation model
Zhang, Min and Jiang, Hongfei and Aw, Aiti and Li, Haizhou and Tan, Chew Lim and Li, Sheng
Abstract
This paper presents a translation model that is based on tree sequence alignment, where a tree sequence refers to a single sequence of sub-trees that covers a phrase.
Introduction
a tree sequence alignment-based tree-to-tree translation model
Introduction
In this paper, we propose a tree-to-tree translation model that is based on tree sequence alignment.
Related Work
Ding and Palmer (2005) propose a syntax-based translation model based on a probabilistic synchronous dependency insertion grammar.
Related Work
Quirk et al. (2005) propose a dependency treelet-based translation model.
Related Work
(2007b) present an STSG-based tree-to-tree translation model.
Tree Sequence Alignment Model
3.2 Tree Sequence Translation Model
Tree Sequence Alignment Model
The tree sequence-to-tree sequence translation model is formulated as:
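The excerpt cuts off before the equation itself. As a hedged sketch of the usual shape of such a formulation (our notation, not necessarily the paper's exact equation), a tree-to-tree model is commonly decomposed over a synchronous derivation θ, a sequence of translation rules r:

    P(T_t \mid T_s) \;=\; \sum_{\theta} \prod_{r \in \theta} p(r)

where T_s and T_t are the source and target parse trees.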
translation model is mentioned in 18 sentences in this paper.
Surdeanu, Mihai and Ciaramita, Massimiliano and Zaragoza, Hugo
Approach
supervised IR models, the answer ranking is implemented using discriminative learning, and finally, some of the ranking features are produced by question-to-answer translation models, which use class-conditional learning.
Approach
the similarity between questions and answers (FG1), features that encode question-to-answer transformations using a translation model (FG2), features that measure keyword density and frequency (FG3), and features that measure the correlation between question-answer pairs and other collections (FG4).
Approach
One way to address this problem is to learn question-to-answer transformations using a translation model (Berger et al., 2000; Echihabi and Marcu, 2003; Soricut and Brill, 2006; Riezler et al., 2007).
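In the Berger et al. (2000) line of work, a candidate answer is scored by the probability that it "translates" into the question, typically with an IBM Model 1 style lexical model. A minimal runnable sketch of that scoring idea, assuming a toy translation table (the table entries and function names below are illustrative, not from any of the cited papers):

    import math

    # Toy lexical table p(question_word | answer_word); in practice it is
    # estimated with EM from a corpus of question-answer pairs.
    P_TRANS = {
        ("fix", "repair"): 0.4,
        ("fix", "fix"): 0.5,
        ("printer", "printer"): 0.8,
        ("printer", "driver"): 0.1,
    }
    NULL = "<null>"

    def model1_log_score(question, answer, smooth=1e-6):
        """log P(question | answer) under an IBM Model 1 style model:
        each question word is generated by some answer word (or NULL)."""
        candidates = answer + [NULL]
        score = 0.0
        for q in question:
            p = sum(P_TRANS.get((q, a), smooth) for a in candidates)
            score += math.log(p / len(candidates))
        return score

    # Rank two candidate answers for the question "fix printer".
    print(model1_log_score(["fix", "printer"], ["repair", "the", "driver"]))
    print(model1_log_score(["fix", "printer"], ["buy", "new", "paper"]))

The answer with the higher log-score is ranked first; in the full system this score is just one feature among the four feature groups listed above.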
Experiments
Our ranking model was tuned strictly on the development set (i.e., feature selection and parameters of the translation models).
Experiments
to improve lexical matching and translation models.
Experiments
This indicates that, even though translation models are the most useful, it is worth exploring approaches that combine several strategies for answer ranking.
Related Work
In the QA literature, answer ranking for non-factoid questions has typically been performed by learning question-to-answer transformations, either using translation models (Berger et al., 2000; Soricut and Brill, 2006) or by exploiting the redundancy of the Web (Agichtein et al., 2001).
Related Work
On the other hand, our approach allows the learning of full transformations from question structures to answer structures using translation models applied to different text representations.
translation model is mentioned in 8 sentences in this paper.
Avramidis, Eleftherios and Koehn, Philipp
Conclusion
As opposed to other factored translation model approaches, which require target-language factors that are not easily obtainable for many languages, our approach requires only English syntax trees, which are acquired with widely available automatic parsers.
Factored Model
The factored statistical machine translation model uses a log-linear approach to combine its several components, including the language model, the reordering model, the translation models, and the generation models.
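The log-linear combination referred to here is the standard one in factored, Moses-style systems; as a sketch in our notation (not quoted from the paper), the decoder picks the translation maximising

    p(\mathbf{e} \mid \mathbf{f}) \;\propto\; \exp\Big( \sum_i \lambda_i \, h_i(\mathbf{e}, \mathbf{f}) \Big)

where each feature function h_i is one component (language model, reordering model, translation models, generation models) and λ_i is its tuned weight.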
Introduction
Our method is based on factored phrase-based statistical machine translation models.
Introduction
Traditional statistical machine translation models deal with this problem in two ways:
Introduction
Then, contrary to the methods that added only output features or altered the generation procedure, we used this information to augment only the source side of a factored translation model, assuming that we do not have resources allowing factors or specialized generation in the target language (a common problem when translating from English into under-resourced languages).
Methods for enriching input
Given such annotation, a factored translation model is trained to map the word-case pair to the correct inflection of the target noun.
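As a toy illustration of that mapping (our own sketch, not the paper's code or data; the paper works with languages such as Greek, while the example below uses German case forms for familiarity):

    # The source-side factor pairs each noun with the case the target
    # language requires; the factored model then maps the (lemma, case)
    # pair to the correctly inflected target form.
    INFLECTION_TABLE = {
        ("child", "nominative-pl"): "Kinder",   # die Kinder
        ("child", "dative-pl"):     "Kindern",  # den Kindern
    }

    def inflect(lemma, case):
        return INFLECTION_TABLE[(lemma, case)]

    print(inflect("child", "dative-pl"))  # -> Kindern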
translation model is mentioned in 6 sentences in this paper.
Blunsom, Phil and Cohn, Trevor and Osborne, Miles
Abstract
We present a translation model which models derivations as a latent variable, in both training and decoding, and is fully discriminative and globally optimised.
Discriminative Synchronous Transduction
Our log-linear translation model defines a conditional probability distribution over the target translations of a given source sentence.
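In our notation (a sketch of the usual latent-derivation formulation, not copied from the paper), such a model marginalises over the derivations d whose yield is the translation e of the source f:

    p_\Lambda(\mathbf{e} \mid \mathbf{f}) \;=\; \sum_{\mathbf{d}\,:\,\mathrm{yield}(\mathbf{d}) = \mathbf{e}} \frac{\exp\,\langle \Lambda, \Psi(\mathbf{d}, \mathbf{f}) \rangle}{Z_\Lambda(\mathbf{f})}

where Ψ(d, f) is the feature vector of a derivation, Λ the learned weights, and Z_Λ(f) the partition function summing over all derivations of f.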
Evaluation
We also experimented with using max-translation decoding for standard MER-trained translation models, finding that it had a small negative impact on BLEU score.
Evaluation
Firstly, we show the relative scores of our model against Hiero without using reverse translation or lexical features. This allows us to directly study the differences between the two translation models without the added complication of the other features.
Evaluation
As expected, the language model makes a significant difference to BLEU; however, we believe that this effect is orthogonal to the choice of base translation model, so we would expect a similar gain when integrating a language model into the discriminative system.
translation model is mentioned in 5 sentences in this paper.
Toutanova, Kristina and Suzuki, Hisami and Ruopp, Achim
Integration of inflection models with MT systems
In this method, stemming can impact word alignment in addition to the translation models .
MT performance results
From the results of Method 2, we can see that reducing sparsity in translation modeling is advantageous.
MT performance results
From these results, we also see that about half of the gain from using stemming in the base MT system came from improving word alignment, and half came from using translation models operating at the less sparse stem level.
Machine translation systems and data
The treelet translation model is estimated using a parallel corpus.
Related work
Though our motivation is similar to that of Koehn and Hoang (2007), we chose to build an independent component for inflection prediction in isolation rather than folding morphological information into the main translation model.
translation model is mentioned in 5 sentences in this paper.
Zhang, Hao and Gildea, Daniel
Abstract
We take a multi-pass approach to machine translation decoding when using synchronous context-free grammars as the translation model and n-gram language models: the first pass uses a bigram language model, and the resulting parse forest is used in the second pass to guide search with a trigram language model.
Experiments
The word-to-word translation probabilities are from the translation model of IBM Model 4 trained on a 160-million-word English-Chinese parallel corpus using GIZA++.
Introduction
This complexity arises from the interaction of the tree-based translation model with an n-gram language model.
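As background on why this interaction is expensive (the standard complexity argument, stated in our words): with an m-gram language model, each chart item must remember the m−1 boundary target words at each of its two ends, so dynamic-programming decoding of a length-n sentence with a binary synchronous grammar takes roughly

    O\big( n^{3 + 4(m-1)} \big)

time, i.e. O(n^7) with a bigram model but O(n^{11}) with a trigram model, which is what makes the cheap bigram first pass attractive.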
translation model is mentioned in 3 sentences in this paper.