A Ranking-based Approach to Word Reordering for Statistical Machine Translation
Nan Yang, Mu Li, Dongdong Zhang, and Nenghai Yu

Article Structure

Abstract

Long distance word reordering is a major challenge in statistical machine translation research.

Introduction

Modeling word reordering between source and target sentences has been a research focus since the emergence of statistical machine translation.

Word Reordering as Syntax Tree Node Ranking

Given a source-side parse tree Te, the task of word reordering is to transform Te into a reordered tree Te’, so that e’ can match the word order of the target language as much as possible.
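As a concrete illustration of this node-ranking view, here is a minimal sketch in Python of reordering a source tree once a ranking function is available. The Node class and the score callback are illustrative assumptions, not the authors' code; score(parent, child) stands in for the trained ranking model.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Node:
        word: str = ""                  # surface word; empty for internal nodes
        children: List["Node"] = field(default_factory=list)

    def reorder(node: Node, score: Callable[[Node, Node], float]) -> None:
        """Recursively sort each node's children by the learned ranking score.

        Sorting children in ascending order of score(parent, child) is
        assumed to approximate the target-language word order."""
        node.children.sort(key=lambda child: score(node, child))
        for child in node.children:
            reorder(child, score)

    def linearize(node: Node) -> List[str]:
        """Read the reordered sentence off the tree, left to right."""
        if not node.children:
            return [node.word]
        return [w for child in node.children for w in linearize(child)]

Reading the leaves left to right after reorder gives the pre-reordered source sentence that a downstream SMT system would translate.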

Ranking Model Training

To learn the ranking function in Equation (1), we need to determine the feature set φ and learn the weight vector w from reorder examples.
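Equation (1) is not reproduced in this summary. Assuming it is a linear scoring function over the feature set φ, a plausible form, together with the standard RankingSVM objective (the experiments report RankingSVM accuracy), is:

    f(c) = \mathbf{w}^{\top} \boldsymbol{\phi}(c)

    \min_{\mathbf{w}} \; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
        + C \sum_{c_i \succ c_j} \max\!\left(0,\; 1 - \mathbf{w}^{\top}\bigl(\boldsymbol{\phi}(c_i) - \boldsymbol{\phi}(c_j)\bigr)\right)

where c_i ≻ c_j means child c_i should precede child c_j in the target order, and the children of each node are sorted by f(·) at reordering time.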

Integration into SMT system

There are two ways to integrate the ranking reordering model into a phrase-based SMT system: the pre-reorder method and the decoding-time constraint method.
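The exact penalty features are defined in the paper itself; as an illustration of the decoding-time constraint idea, the sketch below computes a pairwise-inversion penalty between the ranking model's predicted order and a decoder hypothesis. The function name and the penalty definition are assumptions for illustration.

    from typing import List

    def inversion_penalty(predicted_order: List[int],
                          hypothesis_order: List[int]) -> int:
        """Count source-position pairs whose relative order in the decoder
        hypothesis disagrees with the ranking model's predicted order."""
        rank = {pos: i for i, pos in enumerate(predicted_order)}
        penalty = 0
        n = len(hypothesis_order)
        for i in range(n):
            for j in range(i + 1, n):
                if rank[hypothesis_order[i]] > rank[hypothesis_order[j]]:
                    penalty += 1
        return penalty

    # The model predicts an SOV-like order [0, 2, 1]; a hypothesis keeping
    # the original SVO order [0, 1, 2] pays one inversion.
    assert inversion_penalty([0, 2, 1], [0, 1, 2]) == 1

Scaled by a tuned weight, such a count could serve as one of the additional log-linear features, acting as a soft constraint rather than a hard reordering decision.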

Experiments

To test our ranking reorder model, we carry out experiments on large-scale English-to-Japanese and Japanese-to-English translation tasks.

Discussion on Related Work

There have been several studies focusing on compiling handcrafted syntactic reorder rules.

Conclusion and Future Work

In this paper we present a ranking-based reordering method that reorders the source language to match the word order of the target language, given the source-side parse tree.

Topics

parse tree

Appears in 12 sentences as: parse tree (8) parse trees (4)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence.
    Page 1, “Abstract”
  2. The most notable solution to this problem is adopting syntax-based SMT models, especially methods making use of source-side syntactic parse trees.
    Page 1, “Introduction”
  3. One is tree-to-string model (Quirk et al., 2005; Liu et al., 2006) which directly uses source parse trees to derive a large set of translation rules and associated model parameters.
    Page 1, “Introduction”
  4. The other is called syntax pre-reordering, an approach that re-positions source words to approximate the target language word order as much as possible based on the features from source syntactic parse trees.
    Page 1, “Introduction”
  5. In this paper, we continue this line of work and address the problem of word reordering based on source syntactic parse trees for SMT.
    Page 1, “Introduction”
  6. Given a source-side parse tree Te, the task of word reordering is to transform Te into a reordered tree Te’, so that e’ can match the word order of the target language as much as possible.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  7. By permuting tree nodes in the parse tree, the source sentence is reordered into the target language order.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  8. parse tree, we can obtain the same word order as the Japanese translation.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  9. Following this principle, the word reordering task can be broken into subtasks, in which we only need to determine the order of child nodes for all non-leaf nodes in the source parse tree.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  10. None means the original sentences without reordering; Oracle means the best permutation allowed by the source parse tree; ManR refers to the manual reorder rules; Rank means the ranking reordering model.
    Page 7, “Experiments”
  11. On the other hand, the performance of the ranking reorder model still falls far short of the oracle, which is the lowest crossing-link number among all possible permutations allowed by the parse tree.
    Page 7, “Experiments”


word order

Appears in 12 sentences as: word order (12)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. Previous work has shown that using source syntactic trees is an effective way to tackle this problem between two languages with substantial word order difference.
    Page 1, “Abstract”
  2. In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence.
    Page 1, “Abstract”
  3. Long-distance word reordering between language pairs with substantial word order difference, such as Japanese with Subject-Object-Verb (SOV) structure and English with Subject-Verb-Object (SVO) structure, is generally viewed as beyond the scope of the phrase-based systems discussed above, because of either distortion limits or the lack of discriminative features for modeling.
    Page 1, “Introduction”
  4. The other is called syntax pre-reordering, an approach that re-positions source words to approximate the target language word order as much as possible based on the features from source syntactic parse trees.
    Page 1, “Introduction”
  5. the word order in target language.
    Page 2, “Introduction”
  6. Given a source-side parse tree Te, the task of word reordering is to transform Te into a reordered tree Te’, so that e’ can match the word order of the target language as much as possible.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  7. parse tree, we can obtain the same word order as the Japanese translation.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  8. For a sentence pair (e, f, a) with syntax tree Te on the source side, we need to determine which reordered tree Te’ best represents the word order in the target sentence f. For a tree node t in Te, if its children align to disjoint target spans, we can simply arrange them in the order of their corresponding target spans.
    Page 3, “Ranking Model Training”
  9. Due to this overlap, it becomes unclear which permutation of “Problem” and “with latter procedure” is a better match of the target phrase; we need a better metric to measure word order similarity between the reordered source and target sentences.
    Page 3, “Ranking Model Training”
  10. The integrated method takes the original source sentence e as input, and the ranking model generates a reordered e’ as a word order reference.
    Page 5, “Integration into SMT system”
  11. Essentially they are soft constraints to encourage the decoder to choose translations with word order similar to the prediction of the ranking reorder model.
    Page 5, “Integration into SMT system”
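Items 8 and 9 above distinguish the easy case, where children align to disjoint target spans, from the hard case of overlapping spans. A minimal sketch of the easy case, with illustrative names not taken from the paper:

    from typing import Dict, List, Set

    def order_by_target_span(children: List[str],
                             alignment: Dict[str, Set[int]]) -> List[str]:
        """Arrange child nodes by the start of their aligned target spans.

        Assumes every child aligns to a non-empty, disjoint target span;
        overlapping spans need a similarity metric such as the
        crossing-link number instead."""
        return sorted(children, key=lambda child: min(alignment[child]))

    # Toy example: three English children realigned to SOV-like Japanese positions.
    children = ["trying", "to", "solve"]
    alignment = {"trying": {2}, "to": {1}, "solve": {0}}
    assert order_by_target_span(children, alignment) == ["solve", "to", "trying"]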


phrase-based

Appears in 12 sentences as: phrase-based (12)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrase-based SMT system.
    Page 1, “Abstract”
  2. In phrase-based models (Och, 2002; Koehn et al., 2003), the phrase is introduced to serve as the fundamental translation element and to deal with local reordering, while a distance-based distortion model is used to coarsely depict the exponentially decayed word movement probabilities in language translation.
    Page 1, “Introduction”
  3. Long-distance word reordering between language pairs with substantial word order difference, such as Japanese with Subject-Object-Verb (SOV) structure and English with Subject-Verb-Object (SVO) structure, is generally viewed as beyond the scope of the phrase-based systems discussed above, because of either distortion limits or the lack of discriminative features for modeling.
    Page 1, “Introduction”
  4. This is usually done in a preprocessing step, and then followed by a standard phrase-based SMT system that takes the reordered source sentence as input to finish the translation.
    Page 1, “Introduction”
  5. The ranking model can not only be used in a pre-reordering based SMT system, but also be integrated into a phrase-based decoder serving as additional distortion features.
    Page 2, “Introduction”
  6. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
    Page 2, “Introduction”
  7. In the rest of the paper, we will first formally present our ranking-based word reordering model, followed by detailed steps of model training and integration into a phrase-based SMT system.
    Page 2, “Introduction”
  8. There are two ways to integrate the ranking reordering model into a phrase-based SMT system: the pre-reorder method and the decoding-time constraint method.
    Page 5, “Integration into SMT system”
  9. Reordered sentences can go through the normal pipeline of a phrase-based decoder.
    Page 5, “Integration into SMT system”
  10. The above three penalties are added as additional features into the log-linear model of the phrase-based system.
    Page 5, “Integration into SMT system”
  11. We use a BTG phrase-based system with a Max-Ent based lexicalized reordering model (Wu, 1997; Xiong et al., 2006) as our baseline system for
    Page 6, “Experiments”
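Item 2 above refers to the distance-based distortion model of phrase-based SMT. In the standard formulation (Koehn et al., 2003), the distortion cost decays exponentially with the jump between consecutively translated source phrases:

    d(a_i, b_{i-1}) = \alpha^{\lvert a_i - b_{i-1} - 1 \rvert}, \qquad 0 < \alpha < 1

where a_i is the start position of the source phrase translated i-th and b_{i-1} is the end position of the previous one. Long jumps, such as those required between SOV and SVO structures in item 3, are penalized heavily, which is what motivates an explicit reordering model.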


sentence pair

Appears in 10 sentences as: sentence pair (5) sentence pairs (5)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. Figure 1: An English-to-Japanese sentence pair.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  2. For a sentence pair (e, f, a) with syntax tree Te on the source side, we need to determine which reordered tree Te’ best represents the word order in the target sentence f. For a tree node t in Te, if its children align to disjoint target spans, we can simply arrange them in the order of their corresponding target spans.
    Page 3, “Ranking Model Training”
  3. Figure 2: Fragment of a sentence pair.
    Page 3, “Ranking Model Training”
  4. Figure 2 shows a fragment of one sentence pair in our training data.
    Page 3, “Ranking Model Training”
  5. (b) in Figure 2 shows the automatically generated alignment for the sentence pair fragment.
    Page 3, “Ranking Model Training”
  6. They are manually translated into the other language to produce 7,000 sentence pairs, which are split into two parts: 2,000 pairs as development set (dev) and the other 5,000 pairs as test set (web test).
    Page 6, “Experiments”
  7. After removing duplicates, we have about 18 million sentence pairs, which contain about 270 million English tokens and 320 million Japanese tokens.
    Page 6, “Experiments”
  8. As we do not have access to a gold-standard reordered sentence set, we decide to use the alignment crossing-link number between aligned sentence pairs as the measure of reorder performance.
    Page 7, “Experiments”
  9. We sample a small corpus (575 sentence pairs) and do manual alignment (man-small).
    Page 7, “Experiments”
  10. our ranking reordering model indeed significantly reduces the crossing-link numbers over the original sentence pairs.
    Page 7, “Experiments”
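Items 8-10 above use the crossing-link number as the reordering metric. A minimal sketch of that count: two alignment links cross when their source and target positions are ordered oppositely.

    from typing import List, Tuple

    def crossing_links(alignment: List[Tuple[int, int]]) -> int:
        """Count crossing pairs among (source_pos, target_pos) links."""
        count = 0
        for i in range(len(alignment)):
            for j in range(i + 1, len(alignment)):
                (s1, t1), (s2, t2) = alignment[i], alignment[j]
                if (s1 - s2) * (t1 - t2) < 0:
                    count += 1
        return count

    # A monotone alignment has no crossings; a fully inverted one is maximal.
    assert crossing_links([(0, 0), (1, 1), (2, 2)]) == 0
    assert crossing_links([(0, 2), (1, 1), (2, 0)]) == 3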


dependency trees

Appears in 9 sentences as: Dependency Tree (1) dependency tree (3) dependency trees (5)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We use children to denote direct descendants of tree nodes for constituent trees, while for dependency trees, children of a node include not only all direct dependents, but also the head word itself.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  2. The constituent tree is shown above the source sentence; arrows below the source sentence show head-dependent arcs of the dependency tree; word alignment links are lines without arrows between the source and target sentences.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  3. For example, consider the node rooted at trying in the dependency tree in Figure 1.
    Page 3, “Word Reordering as Syntax Tree Node Ranking”
  4. In our experiments, there are nodes with more than 10 children for English dependency trees.
    Page 4, “Ranking Model Training”
  5. For the English-to-Japanese task, we extract features from the Stanford English dependency tree (Marneffe et al., 2006), including lexicons, part-of-speech tags, dependency labels, punctuation, and the tree distance between head and dependent.
    Page 4, “Ranking Model Training”
  6. For the Japanese-to-English task, we use a chunk-based Japanese dependency tree (Kudo and Matsumoto, 2002).
    Page 4, “Ranking Model Training”
  7. For English, we train a dependency parser as in (Nivre and Scholz, 2004) on the WSJ portion of the Penn Treebank, which is converted to dependency trees using the Stanford Parser (Marneffe et al., 2006).
    Page 6, “Experiments”
  8. (2010) also mentioned their method yielded no improvement when applied to dependency trees in their initial experiments.
    Page 8, “Discussion on Related Work”
  9. Genzel (2010) dealt with the data sparseness problem by using a window heuristic, and learned reordering pattern sequences from dependency trees.
    Page 8, “Discussion on Related Work”


parallel corpus

Appears in 8 sentences as: Parallel Corpus (1) parallel corpus (7)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. In this section, we first describe how to extract reordering examples from the parallel corpus; then we show our features for the ranking function; finally, we discuss how to train the model from the extracted examples.
    Page 3, “Ranking Model Training”
  2. 5.1.2 Parallel Corpus
    Page 6, “Experiments”
  3. Our parallel corpus is crawled from the web, containing news articles, technical documents, blog entries, etc.
    Page 6, “Experiments”
  4. We use Giza++ (Och and Ney, 2003) to generate the word alignment for the parallel corpus.
    Page 6, “Experiments”
  5. The distortion model is trained on the same parallel corpus as the phrase table using a home-implemented maximum entropy trainer.
    Page 6, “Experiments”
  6. The ranking reordering model is learned from the same parallel corpus as the phrase table.
    Page 6, “Experiments”
  7. We train the ranking model on 25% of our parallel corpus, and use the remaining 75% as test data (auto).
    Page 7, “Experiments”
  8. Acc. is the RankingSVM accuracy in percentage on the training data; CLN is the crossing-link number per sentence on the parallel corpus with automatically generated word alignment; BLEU is the BLEU score in percentage on the web test set in the Rank-IT setting (system with integrated rank reordering model); lexn means the n most frequent lexicons in the training corpus.
    Page 7, “Experiments”


SMT system

Appears in 8 sentences as: SMT system (7) SMT systems (1)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrase-based SMT system.
    Page 1, “Abstract”
  2. This is usually done in a preprocessing step, and then followed by a standard phrase-based SMT system that takes the reordered source sentence as input to finish the translation.
    Page 1, “Introduction”
  3. The ranking model can not only be used in a pre-reordering based SMT system, but also be integrated into a phrase-based decoder serving as additional distortion features.
    Page 2, “Introduction”
  4. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
    Page 2, “Introduction”
  5. In the rest of the paper, we will first formally present our ranking-based word reordering model, followed by detailed steps of model training and integration into a phrase-based SMT system.
    Page 2, “Introduction”
  6. There are two ways to integrate the ranking reordering model into a phrase-based SMT system: the pre-reorder method and the decoding-time constraint method.
    Page 5, “Integration into SMT system”
  7. Lexicon features generally continue to improve the RankingSVM accuracy and reduce CLN on training data, but they do not bring further improvement for SMT systems beyond the top 100 most frequent words.
    Page 7, “Experiments”
  8. We also expect to explore better ways to integrate the ranking reorder model into the SMT system instead of a simple penalty scheme.
    Page 8, “Conclusion and Future Work”


word alignment

Appears in 8 sentences as: word aligned (2) word aligner (1) word alignment (5)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. The ranking model is automatically derived from word-aligned parallel data with a syntactic parser for the source language, based on both lexical and syntactic features.
    Page 1, “Abstract”
  2. The ranking model is automatically derived from the word-aligned parallel data, viewing the source tree nodes to be reordered as list items to be ranked.
    Page 2, “Introduction”
  3. The constituent tree is shown above the source sentence; arrows below the source sentence show head-dependent arcs of the dependency tree; word alignment links are lines without arrows between the source and target sentences.
    Page 2, “Word Reordering as Syntax Tree Node Ranking”
  4. As pointed out by (Li et al., 2007), in practice, nodes often have overlapping target spans due to erroneous word alignment or different syntactic structures between source and target sentences.
    Page 3, “Ranking Model Training”
  5. We use Giza++ (Och and Ney, 2003) to generate the word alignment for the parallel corpus.
    Page 6, “Experiments”
  6. By manual analysis, we find that the gap is due to both errors of the ranking reorder model and errors from word alignment and parser.
    Page 7, “Experiments”
  7. The reason is that our annotators tend to align function words which might be left unaligned by the automatic word aligner.
    Page 7, “Experiments”
  8. Acc. is the RankingSVM accuracy in percentage on the training data; CLN is the crossing-link number per sentence on the parallel corpus with automatically generated word alignment; BLEU is the BLEU score in percentage on the web test set in the Rank-IT setting (system with integrated rank reordering model); lexn means the n most frequent lexicons in the training corpus.
    Page 7, “Experiments”


BLEU

Appears in 7 sentences as: BLEU (8)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. Table 2: BLEU (%) scores on dev and test data for both E-J and J-E experiments.
    Page 6, “Experiments”
  2. We compare their influence on RankingSVM accuracy, alignment crossing-link number, end-to-end BLEU score, and the model size.
    Page 7, “Experiments”
  3.      Features    Acc.  CLN   BLEU   Feat.#
     E-J  tag+label   88.6  16.4  22.24     26k
          +dst        91.5  13.5  22.66     55k
          +pct        92.2  13.1  22.73     79k
          +lex100     92.9  12.1  22.85    347k
          +lex1000    94.0  11.5  22.79  2,410k
          +lex2000    95.2  10.7  22.81  3,794k
     J-E  tag+fw      85.0  18.6  25.43     31k
          +dst        90.3  16.9  25.62     65k
          +lex100     91.6  15.7  25.87    293k
          +lex1000    92.4  14.8  25.91  2,156k
          +lex2000    93.0  14.3  25.84  3,297k
    Page 7, “Experiments”
  4. Acc. is the RankingSVM accuracy in percentage on the training data; CLN is the crossing-link number per sentence on the parallel corpus with automatically generated word alignment; BLEU is the BLEU score in percentage on the web test set in the Rank-IT setting (system with integrated rank reordering model); lexn means the n most frequent lexicons in the training corpus.
    Page 7, “Experiments”
  5. These features also correspond to BLEU score improvements in end-to-end evaluations.
    Page 7, “Experiments”
  6. From Table 2 we can see that the pre-reorder method has a higher BLEU score on the news test set, while the integrated model performs better on the web test set, which contains informal text.
    Page 7, “Experiments”
  7. Large-scale experiments show improvements on both the reordering metric and SMT performance, with up to a 1.73-point BLEU gain in our evaluation test.
    Page 8, “Conclusion and Future Work”


syntactic parse

Appears in 5 sentences as: syntactic parse (4) syntactic parser (1)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence.
    Page 1, “Abstract”
  2. The ranking model is automatically derived from word-aligned parallel data with a syntactic parser for the source language, based on both lexical and syntactic features.
    Page 1, “Abstract”
  3. The most notable solution to this problem is adopting syntax-based SMT models, especially methods making use of source-side syntactic parse trees.
    Page 1, “Introduction”
  4. The other is called syntax pre-reordering, an approach that re-positions source words to approximate the target language word order as much as possible based on the features from source syntactic parse trees.
    Page 1, “Introduction”
  5. In this paper, we continue this line of work and address the problem of word reordering based on source syntactic parse trees for SMT.
    Page 1, “Introduction”


machine translation

Appears in 4 sentences as: machine translation (4)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. Long distance word reordering is a major challenge in statistical machine translation research.
    Page 1, “Abstract”
  2. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrase-based SMT system.
    Page 1, “Abstract”
  3. Modeling word reordering between source and target sentences has been a research focus since the emergence of statistical machine translation.
    Page 1, “Introduction”
  4. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
    Page 2, “Introduction”


BLEU score

Appears in 4 sentences as: BLEU score (4)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We compare their influence on RankingSVM accuracy, alignment crossing-link number, end-to-end BLEU score, and the model size.
    Page 7, “Experiments”
  2. Acc. is the RankingSVM accuracy in percentage on the training data; CLN is the crossing-link number per sentence on the parallel corpus with automatically generated word alignment; BLEU is the BLEU score in percentage on the web test set in the Rank-IT setting (system with integrated rank reordering model); lexn means the n most frequent lexicons in the training corpus.
    Page 7, “Experiments”
  3. These features also correspond to BLEU score improvements in end-to-end evaluations.
    Page 7, “Experiments”
  4. From Table 2 we can see that the pre-reorder method has a higher BLEU score on the news test set, while the integrated model performs better on the web test set, which contains informal text.
    Page 7, “Experiments”


End-to-End

Appears in 3 sentences as: End-to-End (2) end-to-end (1)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. 5.4 End-to-End Result
    Page 6, “Experiments”
  2. We compare their influence on RankingSVM accuracy, alignment crossing-link number, end-to-end BLEU score, and the model size.
    Page 7, “Experiments”
  3. These features also correspond to BLEU score improvements in end-to-end evaluations.
    Page 7, “Experiments”


feature templates

Appears in 3 sentences as: Feature templates (1) feature templates (2)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. The detailed feature templates are shown in Table 1.
    Page 4, “Ranking Model Training”
  2. Table 1: Feature templates for ranking function.
    Page 5, “Ranking Model Training”
  3. We need to redesign our ranking feature templates to encode the reordering information in the source part of the translation rules.
    Page 8, “Discussion on Related Work”


baseline system

Appears in 3 sentences as: Baseline System (1) baseline system (2)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. 5.3.1 Baseline System
    Page 6, “Experiments”
  2. We use a BTG phrase-based system with a Max-Ent based lexicalized reordering model (Wu, 1997; Xiong et al., 2006) as our baseline system for
    Page 6, “Experiments”
  3. From Table 2, we can see our ranking reordering model significantly improves the performance for both English-to-Japanese and Japanese-to-English experiments over the BTG baseline system.
    Page 6, “Experiments”


significant improvements

Appears in 3 sentences as: significant improvements (1) significantly improve (1) significantly improves (1)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
    Page 2, “Introduction”
  2. All settings significantly improve over the baseline at 95% confidence level.
    Page 6, “Experiments”
  3. From Table 2, we can see our ranking reordering model significantly improves the performance for both English-to-Japanese and Japanese-to-English experiments over the BTG baseline system.
    Page 6, “Experiments”


translation tasks

Appears in 3 sentences as: translation tasks (3)
In A Ranking-based Approach to Word Reordering for Statistical Machine Translation
  1. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and show that it can significantly outperform the baseline phrase-based SMT system.
    Page 1, “Abstract”
  2. We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks, and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both pre-ordering and integrated decoding settings.
    Page 2, “Introduction”
  3. To test our ranking reorder model, we carry out experiments on large-scale English-to-Japanese and Japanese-to-English translation tasks.
    Page 5, “Experiments”
