Index of papers in Proc. ACL 2012 that mention
  • word order
Chen, Boxing and Kuhn, Roland and Larkin, Samuel
BLEU and PORT
Word ordering measures for MT compare two permutations of the original source-language word sequence: the permutation represented by the sequence of corresponding words in the MT output, and the permutation in the reference.
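As a generic illustration of comparing two such permutations (not the paper's specific measure v), one can compute a normalized pairwise-agreement score in the style of Kendall's tau; the function below is a hypothetical sketch, not taken from the paper.

```python
from itertools import combinations

def kendall_tau_similarity(perm_hyp, perm_ref):
    """Fraction of source-word pairs that appear in the same relative order
    in both permutations (1.0 = identical ordering).
    perm_hyp, perm_ref: for each source position, its position in the
    MT output and in the reference, respectively."""
    pairs = list(combinations(range(len(perm_ref)), 2))
    if not pairs:
        return 1.0
    agree = sum(
        1 for i, j in pairs
        if (perm_hyp[i] < perm_hyp[j]) == (perm_ref[i] < perm_ref[j])
    )
    return agree / len(pairs)

# Example: the reference keeps source order, the hypothesis swaps the last two words.
print(kendall_tau_similarity([0, 2, 1], [0, 1, 2]))  # 2 of 3 pairs agree -> 0.667
```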
BLEU and PORT
Qmean(N) (10) and the word ordering measure v are combined in a harmonic mean: PORT = 2 / (1/Qmean(N) + 1/v^α) (17). Here α is a free parameter that is tuned on held-out data.
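A minimal sketch of this combination, assuming Qmean(N) and v have already been computed and lie in (0, 1]; the value of alpha below is only illustrative, not the tuned one from the paper.

```python
def port_score(qmean, v, alpha=0.25):
    """Combine Qmean and the ordering measure v by a harmonic mean:
    PORT = 2 / (1/Qmean + 1/v**alpha).
    alpha is the free parameter tuned on held-out data (0.25 is illustrative)."""
    if qmean == 0.0 or v == 0.0:
        return 0.0
    return 2.0 / (1.0 / qmean + 1.0 / (v ** alpha))
```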
Conclusions
PORT incorporates precision, recall, strict brevity penalty and strict redundancy penalty, plus a new word ordering measure v. As an evaluation metric, PORT performed better than BLEU at the system level and the segment level, and it was competitive with or slightly superior to METEOR at the segment level.
Experiments
Most WMT submissions involve language pairs with similar word order, so the ordering factor v in PORT won’t play a big role.
Experiments
PORT differs from BLEU partly in modeling long-distance reordering more accurately; English and French have similar word order, but the other two language pairs don’t.
Experiments
The results in section 3.3 (below) for Qmean, a version of PORT without word ordering factor v, suggest v may be defined suboptimally for French-English.
Introduction
This expression is then further combined with a new measure of word ordering, v, designed to reflect long-distance as well as short-distance word reordering (BLEU only reflects short-distance reordering).
word order is mentioned in 11 sentences in this paper.
Yang, Nan and Li, Mu and Zhang, Dongdong and Yu, Nenghai
Abstract
Previous work has shown that using source syntactic trees is an effective way to tackle this problem between two languages with substantial word order difference.
Abstract
In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence.
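A minimal sketch of that idea under simplifying assumptions: every node's children are re-sorted by a score from a ranking model that predicts their precedence in the target language (rank_score here is a hypothetical stand-in, not the paper's trained model).

```python
class Node:
    def __init__(self, label, children=None, word=None):
        self.label = label
        self.children = children or []
        self.word = word  # set only for leaf nodes

def reorder(node, rank_score):
    """Recursively reposition the children of every node according to the
    ranking model's predicted target-language precedence.
    rank_score(parent, child) is a hypothetical scoring function; lower
    scores come first."""
    node.children.sort(key=lambda child: rank_score(node, child))
    for child in node.children:
        reorder(child, rank_score)
    return node

def yield_words(node):
    """Read off the reordered source sentence e' from the leaves."""
    if node.word is not None:
        return [node.word]
    return [w for c in node.children for w in yield_words(c)]
```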
Integration into SMT system
The integrated method takes the original source sentence e as input, and the ranking model generates a reordered e’ as a word order reference
Integration into SMT system
Essentially they are soft constraints to encourage the decoder to choose translations with word order similar to the prediction of the ranking reorder model.
Introduction
Long-distance word reordering between language pairs with substantial word order difference, such as Japanese with Subject-Object-Verb (SOV) structure and English with Subject-Verb-Object (SVO) structure, is generally viewed as beyond the scope of the phrase-based systems discussed above, because of either distortion limits or lack of discriminative features for modeling.
Introduction
The other is called syntax pre-reordering — an approach that re-positions source words to approximate target language word order as much as possible based on the features from source syntactic parse trees.
Introduction
the word order in target language.
Ranking Model Training
For a sentence pair (e, f, a) with syntax tree Te on the source side, we need to determine which reordered tree Te’ best represents the word order in the target sentence f. For a tree node t in Te, if its children align to disjoint target spans, we can simply arrange them in the order of their corresponding target spans.
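For the easy case described here, a small sketch under the stated assumption of disjoint spans (target_span is a hypothetical helper returning the aligned target span of a child; overlapping spans need the similarity metric discussed below).

```python
def oracle_child_order(children, target_span):
    """Arrange a node's children by the start of their aligned target spans.
    children: list of child nodes; target_span(child) -> (start, end) indices
    of the target words the child aligns to. Assumes the spans are disjoint."""
    return sorted(children, key=lambda c: target_span(c)[0])
```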
Ranking Model Training
Due to this overlapping, it becomes unclear which permutation of “Problem” and “with latter procedure” is a better match of the target phrase; we need a better metric to measure word order similarity between reordered source and target sentences.
Word Reordering as Syntax Tree Node Ranking
Given a source-side parse tree Te, the task of word reordering is to transform Te to Te’, so that e’ can match the word order in the target language as much as possible.
Word Reordering as Syntax Tree Node Ranking
parse tree, we can obtain the same word order as the Japanese translation.
word order is mentioned in 12 sentences in this paper.
He, Wei and Wu, Hua and Wang, Haifeng and Liu, Ting
Discussion
Take the first line of Table 9 as an example: the paraphrased Chinese sentence (glossed word by word as “How many / cigarettes / can / duty-free / take / NULL”) is not a fluent Chinese sentence; however, the rearranged word order is closer to English, which finally results in a much better translation.
Forward-Translation vs. Back-Translation
Two possible reasons may explain this phenomenon: (1) in the first round of translation T0 → S1, some target word orders are preserved due to reordering failures, and these preserved orders lead to a better result in the second round of translation; (2) the text generated by an MT system is more likely to be matched by the reversed but homologous MT system.
Related Work
(2008) use grammars to paraphrase the source side of training data, covering aspects like word order and minor lexical variations (tenses, etc.).
word order is mentioned in 3 sentences in this paper.