Index of papers in Proc. ACL 2014 that mention
  • translation probability
Lo, Chi-kiu and Beloucif, Meriem and Saers, Markus and Wu, Dekai
Abstract
We show that cross-lingual XMEANT outperforms monolingual MEANT by (1) replacing the monolingual context vector model in MEANT with simple translation probabilities, and (2) incorporating bracketing ITG constraints.
Conclusion
This is (1) accomplished by replacing monolingual MEANT’s context vector model with simple translation probabilities when computing similarities of semantic role fillers, and (2) further improved by incorporating BITG constraints for aligning the tokens in semantic role fillers.
Introduction
XMEANT is obtained by (1) using simple lexical translation probabilities, instead of the monolingual context vector model used in MEANT, for computing the semantic role filler similarities, and (2) incorporating bracketing ITG constraints for word alignment within the semantic role fillers.
Introduction
We therefore propose XMEANT, a cross-lingual MT evaluation metric that modifies MEANT using (1) simple translation probabilities (in our experiments, …
Related Work
Apply the maximum weighted bipartite matching algorithm to align the semantic frames between the foreign input and MT output according to the lexical translation probabilities of the predicates.
Related Work
For each pair of the aligned frames, apply the maximum weighted bipartite matching algorithm to align the arguments between the foreign input and MT output according to the aggregated phrasal translation probabilities of the role fillers.
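The two alignment steps above are instances of the classic assignment problem. Below is a minimal sketch of such a maximum weighted bipartite matching using SciPy; the trans_prob callable, the data layout, and the function name are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch: align semantic frames between the foreign input
# and the MT output by maximum weighted bipartite matching, with the
# lexical translation probabilities of the predicates as edge weights.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_frames(src_predicates, mt_predicates, trans_prob):
    """Return (src_index, mt_index) pairs that maximize the total
    predicate translation probability. trans_prob(f, e) -> float."""
    weights = np.array([[trans_prob(f, e) for e in mt_predicates]
                        for f in src_predicates])
    # linear_sum_assignment solves the assignment problem; with
    # maximize=True it yields a maximum weighted bipartite matching.
    rows, cols = linear_sum_assignment(weights, maximize=True)
    return list(zip(rows.tolist(), cols.tolist()))
```

The same routine would then be applied within each aligned frame pair to match the arguments, with aggregated phrasal translation probabilities of the role fillers as the weights.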
XMEANT: a cross-lingual MEANT
But whereas MEANT measures lexical similarity using a monolingual context vector model, XMEANT instead substitutes simple cross-lingual lexical translation probabilities.
XMEANT: a cross-lingual MEANT
To aggregate individual lexical translation probabilities into phrasal similarities between cross-lingual semantic role fillers, we compared two natural approaches to generalizing MEANT’s method of comparing semantic parses, as described below.
XMEANT: a cross-lingual MEANT
…ing lexical translation probabilities within semantic role filler phrases.
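As a rough illustration of how individual lexical translation probabilities might be aggregated into a phrasal similarity, here is one plausible alignment-free scheme (average of best per-token matches, taken in both directions). The paper compares two specific generalizations of MEANT’s method; this sketch is not claimed to be either of them, and all names are hypothetical.

```python
# Hedged sketch: aggregate lexical translation probabilities into a
# phrasal similarity between a foreign role filler and an MT-output
# role filler. trans_prob(f, e) is assumed to return a lexical
# translation probability for the word pair (f, e).
def phrasal_similarity(src_phrase, mt_phrase, trans_prob):
    if not src_phrase or not mt_phrase:
        return 0.0
    # For each output token, take its best match among the input
    # tokens, then average over the output phrase ...
    fwd = sum(max(trans_prob(f, e) for f in src_phrase)
              for e in mt_phrase) / len(mt_phrase)
    # ... and symmetrically in the reverse direction.
    rev = sum(max(trans_prob(f, e) for e in mt_phrase)
              for f in src_phrase) / len(src_phrase)
    return (fwd + rev) / 2
```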
translation probability is mentioned in 11 sentences in this paper.
Liu, Le and Hong, Yu and Liu, Hao and Wang, Xing and Yao, Jianmin
Experiments
Additionally, we adopt GIZA++ to obtain the word alignment of the in-domain parallel data and form the word translation probability table.
Experiments
This table will be used to compute the translation probability of general-domain sentence pairs.
Introduction
Meanwhile, the translation model measures the translation probability of a sentence pair and is used to verify the parallelism of the selected domain-relevant bitext.
Related Work
The reason is that a sentence pair contains a source-language sentence and a target-language sentence, while the existing methods are incapable of evaluating the mutual translation probability of a sentence pair in the target domain.
Training Data Selection Methods
However, in this paper, we adopt the translation model to evaluate the translation probability of a sentence pair and develop a simple but effective variant of the translation model to rank the sentence pairs in the general-domain corpus.
Training Data Selection Methods
where P(e|f) is the translation model, which is IBM Model 1 in this paper; it represents the translation probability of the target-language sentence e conditioned on the source-language sentence f. l_e and l_f are the numbers of words in sentences e and f, respectively.
Training Data Selection Methods
t(e_j|f_i) is the translation probability of word e_j conditioned on word f_i, and is estimated from the small in-domain parallel data.
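The description above is the standard IBM Model 1 score, P(e|f) = ε/(l_f + 1)^{l_e} · ∏_{j=1}^{l_e} Σ_{i=0}^{l_f} t(e_j|f_i), where f_0 is the NULL token. A minimal sketch in log space, assuming the t-table is a dict keyed by (target word, source word):

```python
import math

NULL = "<null>"  # f_0, the empty source token of IBM Model 1

def model1_log_prob(e_sentence, f_sentence, t_table, eps=1.0):
    """log P(e|f) under IBM Model 1. t_table maps (e_word, f_word)
    to t(e|f); unseen pairs get a small floor to avoid log(0)."""
    f_tokens = [NULL] + f_sentence
    log_p = math.log(eps) - len(e_sentence) * math.log(len(f_tokens))
    for e_word in e_sentence:
        log_p += math.log(sum(t_table.get((e_word, f_word), 1e-12)
                              for f_word in f_tokens))
    return log_p
```

Scoring general-domain sentence pairs with this quantity in both directions would match the "mutual translation probability" idea in the excerpts above; the floor value and table layout are assumptions.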
translation probability is mentioned in 9 sentences in this paper.
Lu, Shixiang and Chen, Zhenbiao and Xu, Bo
Input Features for DNN Feature Learning
Following (Maskey and Zhou, 2012), we use the following 4 phrase features of each phrase pair (Koehn et al., 2003) in the phrase table as the first type of input features: bidirectional phrase translation probabilities (P(e|f) and P(f|e)) and bidirectional lexical weightings (Lex(e|f) and Lex(f|e)).
Input Features for DNN Feature Learning
where p(e_j|f_i) and p(f_i|e_j) represent the bidirectional word translation probabilities.
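For context, the lexical weighting of Koehn et al. (2003) is built from exactly these word translation probabilities: Lex(e|f, a) = ∏_j (1/|{i : (i, j) ∈ a}|) Σ_{(i,j)∈a} w(e_j|f_i), with unaligned target words linked to NULL. A minimal sketch, where the alignment layout and variable names are assumptions:

```python
# Sketch of the lexical weighting Lex(e|f) from Koehn et al. (2003).
# w maps (e_word, f_word) to the word translation probability p(e|f);
# alignment is a set of (i, j) pairs linking f_phrase[i] to e_phrase[j].
def lexical_weight(f_phrase, e_phrase, alignment, w):
    weight = 1.0
    for j, e_word in enumerate(e_phrase):
        links = [i for (i, jj) in alignment if jj == j]
        if links:
            # Average the word translation probabilities of all
            # source words aligned to this target word.
            weight *= sum(w.get((e_word, f_phrase[i]), 0.0)
                          for i in links) / len(links)
        else:
            # Unaligned target words pair with NULL in Koehn et al.'s
            # formulation; a fixed floor stands in for w(e|NULL) here.
            weight *= w.get((e_word, None), 1e-12)
    return weight
```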
Introduction
First, the original input features for the DBN feature learning are too simple: only the limited 4 phrase features of each phrase pair, such as bidirectional phrase translation probability and bidirectional lexical weighting (Koehn et al., 2003), which are a bottleneck for learning an effective feature representation.
Related Work
(2012) improved the translation quality of an n-gram translation model by using a bilingual neural LM, where translation probabilities are estimated using a continuous representation of translation units in lieu of standard discrete representations.
translation probability is mentioned in 4 sentences in this paper.
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Experiments
Besides using the semantic similarities to prune the phrase table, we also employ them as two informative features, like the phrase translation probability, to guide translation hypothesis selection during decoding.
Experiments
Typically, four translation probabilities are adopted in phrase-based SMT: the phrase translation probabilities and lexical weights in both directions.
Experiments
The phrase translation probability is based on co-occurrence statistics, and the lexical weights treat the phrase as a bag of words.
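The co-occurrence estimate is simply the relative frequency over extracted phrase pairs, φ(e|f) = count(f, e) / count(f). A minimal sketch of that estimate (the data layout is an assumption):

```python
from collections import Counter

def phrase_translation_probs(extracted_pairs):
    """extracted_pairs: iterable of (f_phrase, e_phrase) tuples, one
    per phrase-pair occurrence extracted from the word-aligned corpus.
    Returns phi(e|f) = count(f, e) / count(f) for each observed pair."""
    pairs = list(extracted_pairs)
    pair_counts = Counter(pairs)
    f_counts = Counter(f for f, _ in pairs)
    return {(f, e): c / f_counts[f] for (f, e), c in pair_counts.items()}
```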
translation probability is mentioned in 4 sentences in this paper.
Cui, Lei and Zhang, Dongdong and Liu, Shujie and Chen, Qiming and Li, Mu and Zhou, Ming and Yang, Muyun
Experiments
Although the translation probability of “send X” is much higher, it is inappropriate in this context since it is usually used in IT texts.
Topic Similarity Model with Neural Network
where the translation probability is given by:
Topic Similarity Model with Neural Network
Standard features: Translation model, including translation probabilities and lexical weights for both directions (4 features), 5-gram language model (1 feature), word count (1 feature), phrase count (1 feature), NULL penalty (1 feature), number of hierarchical rules used (1 feature).
translation probability is mentioned in 3 sentences in this paper.
Huang, Fei and Xu, Jian-Ming and Ittycheriah, Abraham and Roukos, Salim
Static MT Quality Estimation
• 17 decoding features, including phrase translation probabilities (source-to-target and target-to-source), word translation probabilities (also in both directions), maxent probabilities¹, word count, phrase count, distortion …
Static MT Quality Estimation
¹The maxent probability is the translation probability …
Static MT Quality Estimation
• The average translation probability of the phrase translation pairs in the final translation, which reflects the overall translation quality at the phrase level.
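This feature is just the arithmetic mean of the phrase translation probabilities of the phrase pairs used in the final translation; a one-function sketch (names assumed):

```python
def avg_translation_prob(pair_probs):
    """pair_probs: translation probabilities of the phrase pairs used
    to produce the final translation."""
    return sum(pair_probs) / len(pair_probs) if pair_probs else 0.0
```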
translation probability is mentioned in 3 sentences in this paper.