Index of papers in Proc. ACL 2009 that mention
  • translation model
Bernhard, Delphine and Gurevych, Iryna
Abstract
They can be obtained by training statistical translation models on parallel monolingual corpora, such as question-answer pairs, where answers act as the “source” language and questions as the “target” language.
Abstract
We compare monolingual translation models built from lexical semantic resources with two other kinds of datasets: manually-tagged question reformulations and question-answer pairs.
Introduction
Berger and Lafferty (1999) formulated a further solution to the lexical gap problem, which consists of integrating monolingual statistical translation models into the retrieval process.
Introduction
Monolingual translation models encode statistical word associations and are trained on parallel monolingual corpora.
Introduction
While collection-specific translation models effectively encode statistical word associations for the target document collection, they also introduce a bias into the evaluation and make it difficult to assess the quality of the translation model per se, independently of a specific task and document collection.
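The noisy-channel retrieval idea summarized above (answers as the "source" language, questions as the "target") can be sketched as a minimal ranking function. This is an illustrative simplification, not the paper's implementation: the function name, the toy translation table, and the constant-background smoothing (in place of mixing with a collection language model) are all assumptions.

```python
import math

def rank_answers(question, answers, trans_prob, lam=0.5, background=1e-4):
    """Rank candidate answers by P(question | answer) under a word-based
    monolingual translation model: each question word is generated by
    'translating' some answer word."""
    scores = {}
    for ans in answers:
        words = ans.split()
        logp = 0.0
        for q in question.split():
            # translation component: average p(q | w) over answer words
            p_tm = sum(trans_prob.get((w, q), 0.0) for w in words) / len(words)
            # mix with a constant background so unseen question words
            # keep the score finite (a stand-in for proper smoothing)
            logp += math.log(lam * p_tm + (1 - lam) * background)
        scores[ans] = logp
    return sorted(answers, key=lambda a: scores[a], reverse=True)
```

With a toy table such as `{("feline", "cat"): 0.8}`, the question "cat" ranks the answer "feline pet" above "canine pet", which is exactly the lexical-gap bridging the excerpt describes.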
translation model is mentioned in 29 sentences in this paper.
Wu, Hua and Wang, Haifeng
Experiments
For the synthetic method, we used the ES translation model to translate the English part of the CE corpus to Spanish to construct a synthetic corpus.
Experiments
We also used the BTEC CE1 corpus to build an EC translation model to translate the English part of the ES corpus into Chinese.
Experiments
Then we combined these two synthetic corpora to build a Chinese-Spanish translation model.
Introduction
It multiplies corresponding translation probabilities and lexical weights in the source-pivot and pivot-target translation models to induce a new source-target phrase table.
Introduction
For example, we can obtain a source-target corpus by translating the pivot sentences in the source-pivot corpus into the target language with pivot-target translation models.
Pivot Methods for Phrase-based SMT
Following the method described in Wu and Wang (2007), we train the source-pivot and pivot-target translation models using the source-pivot and pivot-target corpora, respectively.
Pivot Methods for Phrase-based SMT
Based on these two models, we induce a source-target translation model, in which two important elements need to be induced: phrase translation probability and lexical weight.
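The phrase-probability half of the induction step described above (often called triangulation) can be sketched by marginalizing over shared pivot phrases. The toy tables below are assumptions for illustration; the lexical weight, the second element the excerpt mentions, would be induced analogously and is omitted here.

```python
from collections import defaultdict

def induce_phrase_table(sp_table, pt_table):
    """Triangulate a source-target phrase table from source-pivot and
    pivot-target tables by marginalizing over shared pivot phrases:
        p(t | s) = sum_p p(t | p) * p(p | s)
    sp_table maps (source, pivot) -> p(pivot | source);
    pt_table maps (pivot, target) -> p(target | pivot)."""
    # index the pivot-target table by pivot phrase for efficient lookup
    by_pivot = defaultdict(list)
    for (piv, tgt), p in pt_table.items():
        by_pivot[piv].append((tgt, p))
    st_table = defaultdict(float)
    for (src, piv), p_ps in sp_table.items():
        for tgt, p_tp in by_pivot[piv]:
            st_table[(src, tgt)] += p_ps * p_tp
    return dict(st_table)
```

For example, if a Chinese phrase translates to two English pivot phrases with probabilities 0.7 and 0.3, and those pivots translate to a Spanish phrase with probabilities 1.0 and 0.5, the induced probability is 0.7 x 1.0 + 0.3 x 0.5 = 0.85.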
Pivot Methods for Phrase-based SMT
Then we build a source-target translation model using this corpus.
Using RBMT Systems for Pivot Translation
The source-target pairs extracted from this synthetic multilingual corpus can be used to build a source-target translation model.
translation model is mentioned in 13 sentences in this paper.
Zhang, Hui and Zhang, Min and Li, Haizhou and Aw, Aiti and Tan, Chew Lim
Abstract
This paper proposes a forest-based tree sequence to string translation model for syntax-based statistical machine translation, which automatically learns tree sequence to string translation rules from word-aligned source-side-parsed bilingual texts.
Abstract
The proposed model leverages the strengths of both tree sequence-based and forest-based translation models.
Decoding
4) Decode the translation forest using our translation model and a dynamic search algorithm.
Forest-based tree sequence to string model
3.3 Forest-based tree-sequence to string translation model
Forest-based tree sequence to string model
Given a source forest F and target translation T_S as well as a word alignment A, our translation model is formulated as:
Introduction
Forest-based Tree Sequence to String Translation Model
Introduction
Section 2 describes related work while section 3 defines our translation model.
Related work
Motivated by the fact that non-syntactic phrases make a nontrivial contribution to phrase-based SMT, the tree sequence-based translation model was proposed (Liu et al., 2007; Zhang et al., 2008a); it uses a tree sequence as the basic translation unit, rather than a single subtree as in the STSG.
Related work
Liu et al. (2007) propose the tree sequence concept and design a tree sequence to string translation model.
Related work
Zhang et al. (2008a) propose a tree sequence-based tree-to-tree translation model and Zhang et al.
translation model is mentioned in 12 sentences in this paper.
Liu, Yang and Mi, Haitao and Feng, Yang and Liu, Qun
Abstract
Current SMT systems usually decode with single translation models and cannot benefit from the strengths of other models in decoding phase.
Abstract
We instead propose joint decoding, a method that combines multiple translation models in one decoder.
Conclusion
We have presented a framework for including multiple translation models in one decoder.
Introduction
In this paper, we propose a framework for combining multiple translation models directly in decoding.
Joint Decoding
Second, translation models differ in decoding algorithms.
Joint Decoding
Despite the diversity of translation models, they all have to produce partial translations for substrings of input sentences.
Joint Decoding
Therefore, we represent the search space of a translation model as a structure called translation hypergraph.
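The shared structure described above, where partial translations of substrings become nodes and rule applications become hyperedges, can be sketched with a minimal data structure. The class and function names here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """A hyperedge: one rule application combining antecedent partial
    translations (tails) into a larger partial translation."""
    tails: list
    rule: str

@dataclass
class Node:
    """A hypergraph node: the partial translations covering one
    substring (i, j) of the input sentence."""
    span: tuple
    incoming: list = field(default_factory=list)  # edges deriving this node

def count_derivations(node):
    """Number of distinct derivations of a node: a leaf has one;
    otherwise sum over incoming edges of the product over tails."""
    if not node.incoming:
        return 1
    total = 0
    for edge in node.incoming:
        prod = 1
        for tail in edge.tails:
            prod *= count_derivations(tail)
        total += prod
    return total
```

Because every model's search space can be cast in this node/edge form, a joint decoder can merge hypergraphs from different models as long as their nodes cover the same input spans.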
translation model is mentioned in 10 sentences in this paper.
Sun, Jun and Zhang, Min and Tan, Chew Lim
Abstract
The tree sequence based translation model allows the violation of syntactic boundaries in a rule to capture non-syntactic phrases, where a tree sequence is a contiguous sequence of sub-trees.
Abstract
This paper goes further to present a translation model based on noncontiguous tree sequence alignment, where a noncontiguous tree sequence is a sequence of sub-trees and gaps.
Experiments
In the experiments, we train the translation model on the FBIS corpus (7.2M Chinese + 9.2M English words) and train a 4-gram language model on the Xinhua portion of the English Gigaword corpus (181M words) using the SRILM Toolkit (Stolcke, 2002).
Introduction
Bod (2007) also finds that discontinuous phrasal rules make a significant improvement in the linguistically motivated STSG-based translation model.
Introduction
We illustrate rule extraction with an example from the tree-to-tree translation model based on tree sequence alignment (Zhang et al., 2008a), without loss of generality to most syntactic tree-based models.
Introduction
To address this issue, we propose a syntactic translation model based on noncontiguous tree sequence alignment.
NonContiguous Tree sequence Align-ment-based Model
In this section, we give a formal definition of SncTSSG and accordingly propose the alignment-based translation model.
NonContiguous Tree sequence Align-ment-based Model
2.2 SncTSSG based Translation Model
translation model is mentioned in 8 sentences in this paper.
Yang, Fan and Zhao, Jun and Liu, Kang
Experiments
In order to evaluate the influence of segmentation results upon the statistical ON translation system, we compare the results of two translation models.
Experiments
For constructing a statistical ON translation model, we use GIZA++ to align the Chinese NEs and the English NEs in the training set.
Experiments
Q2: the Chinese ON and the results of the statistical translation model.
Related Work
The first type of methods translates ONs by building a statistical translation model.
Related Work
The statistical translation model can give an output for any input.
The Chunking-based Segmentation for Chinese ONs
The performance of the statistical ON translation model is dependent on the precision of the Chinese ON segmentation to some extent.
translation model is mentioned in 8 sentences in this paper.
DeNero, John and Chiang, David and Knight, Kevin
Computing Feature Expectations
The weight of h is the incremental score contributed to all translations containing the rule application, including translation model features on r and language model features that depend on both r and the English contexts of the child nodes.
Experimental Results
Hiero is a hierarchical system that expresses its translation model as a synchronous context-free grammar (Chiang, 2007).
Experimental Results
SBMT is a string-to-tree translation system with rich target-side syntactic information encoded in the translation model.
Experimental Results
Figure 3: n-grams with high expected count are more likely to appear in the reference translation than n-grams in the translation model’s Viterbi translation, e*.
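The figure caption above contrasts expected n-gram counts with the n-grams of a single Viterbi translation. A sketch of the expectation over a normalized k-best list is below; note this is a crude approximation to the forest-based feature expectations the paper actually computes, and the function name and input format are assumptions.

```python
from collections import Counter, defaultdict

def expected_ngram_counts(kbest, n=2):
    """Expected n-gram counts under a k-best posterior.
    kbest is a list of (translation_words, score) pairs; scores are
    normalized into a probability distribution before averaging."""
    z = sum(p for _, p in kbest)
    expectations = defaultdict(float)
    for words, p in kbest:
        grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
        for gram, count in grams.items():
            expectations[gram] += (p / z) * count
    return dict(expectations)
```

An n-gram shared by all high-probability hypotheses gets an expected count near its surface count, while n-grams unique to one hypothesis are discounted by that hypothesis's posterior mass.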
translation model is mentioned in 4 sentences in this paper.
Huang, Fei
Introduction
The quality of the parallel data and the word alignment have significant impacts on the learned translation models and ultimately the quality of translation output.
Sentence Alignment Confidence Measure
The source-to-target lexical translation model p(t|s) and target-to-source model p(s|t) can be obtained through IBM Model-1 or HMM training.
Sentence Alignment Confidence Measure
For efficient computation of the denominator, we use the lexical translation model.
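A minimal sketch of how a Model-1-style lexical translation score p(t|s) for a sentence pair might be computed is below; the paper's actual confidence measure and its denominator are not reproduced here, and the floor value for unseen word pairs is an assumption.

```python
import math

def model1_logprob(src_words, tgt_words, lex, floor=1e-9):
    """IBM Model-1 style log P(target | source): each target word is
    generated independently by averaging lexical translation
    probabilities over all source words plus a NULL source."""
    sources = ["NULL"] + list(src_words)
    logp = 0.0
    for t in tgt_words:
        # floor keeps the log finite when (s, t) is unseen in lex
        p = sum(lex.get((s, t), floor) for s in sources) / len(sources)
        logp += math.log(p)
    return logp
```

A sentence pair whose words have high lexical translation probabilities scores much higher than a misaligned pair, which is the signal a sentence alignment confidence measure builds on.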
translation model is mentioned in 3 sentences in this paper.
Li, Zhifei and Eisner, Jason and Khudanpur, Sanjeev
Experimental Results
Our translation model was trained on about 1M parallel sentence pairs (about 28M words in each language), which are sub-sampled from corpora distributed by LDC for the NIST MT evaluation using a sampling method based on the n-gram matches between training and test sets in the foreign side.
Experimental Results
We use GIZA++ (Och and Ney, 2000), a suffix-array (Lopez, 2007), SRILM (Stolcke, 2002), and risk-based deterministic annealing (Smith and Eisner, 2006) to obtain word alignments, translation models, language models, and the optimal weights for combining these models, respectively.
Variational vs. Min-Risk Decoding
However, suppose the hypergraph were very large (thanks to a large or smoothed translation model and weak pruning).
translation model is mentioned in 3 sentences in this paper.
Xiong, Deyi and Zhang, Min and Aw, Aiti and Li, Haizhou
Analysis
We want to further study what happens after we integrate the constraint feature (our SDB model and Marton and Resnik’s XP+) into the log-linear translation model.
Experiments
All translation models were trained on the FBIS corpus.
The Syntax-Driven Bracketing Model 3.1 The Model
new feature into the log-linear translation model: P_SDB(b|T, ·). This feature is computed by the SDB model described in equation (3) or equation (4), which estimates the probability that a source span is to be translated as a unit within particular syntactic contexts.
translation model is mentioned in 3 sentences in this paper.