Revisiting Pivot Language Approach for Machine Translation
Wu, Hua and Wang, Haifeng

Article Structure

Abstract

This paper revisits the pivot language approach for machine translation.

Introduction

Current statistical machine translation (SMT) systems rely on large parallel and monolingual training corpora to produce translations of relatively high quality.

Pivot Methods for Phrase-based SMT

2.1 Triangulation Method

Using RBMT Systems for Pivot Translation

Since the source-pivot and pivot-target parallel corpora are independent, the pivot sentences in the two corpora are distinct from each other.

Translation Selection

We propose a method to select the optimal translation from those produced by various translation systems.

Experiments

5.1 Data

Discussion

6.1 Effects of Different RBMT Systems

Conclusion

In this paper, we have compared three different pivot translation methods for spoken language translation.

Topics

translation quality

Appears in 28 sentences as: Translation quality (1) translation quality (31)
In Revisiting Pivot Language Approach for Machine Translation
  1. Experimental results on spoken language translation show that this hybrid method significantly improves the translation quality, which outperforms the method using a source-target corpus of the same size.
    Page 1, “Abstract”
  2. As a result, we can build a synthetic multilingual corpus, which can be used to improve the translation quality.
    Page 2, “Introduction”
  3. The idea of using RBMT systems to improve the translation quality of SMT systems has been explored in Hu et al.
    Page 2, “Introduction”
  4. Although previous studies proposed several pivot translation methods, there are no studies to combine different pivot methods for translation quality improvement.
    Page 2, “Introduction”
  5. In this paper, we first compare the individual pivot methods and then investigate to improve pivot translation quality by combining the outputs produced by different systems.
    Page 2, “Introduction”
  6. (2) The hybrid method combining SMT and RBMT systems for pivot translation greatly improves the translation quality.
    Page 2, “Introduction”
  7. And this translation quality is higher than that of the translations produced by the system trained with a real Chinese-Spanish corpus; (3) Our sentence-level translation selection method consistently and significantly improves the translation quality over individual translation outputs in all of our experiments.
    Page 2, “Introduction”
  8. The translated test set can be added to the training data to further improve translation quality.
    Page 3, “Using RBMT Systems for Pivot Translation”
  9. Translation quality was evaluated using both the BLEU score proposed by Papineni et al.
    Page 5, “Experiments”
  10. In our experiments, only 1-best Chinese or Spanish translation was used since using n-best results did not greatly improve the translation quality.
    Page 5, “Experiments”
  11. From the translation results, it can be seen that the three methods achieved comparable translation quality on both ASR and CRR inputs, with the results on CRR inputs being much better than those on ASR inputs because of the errors in the ASR inputs.
    Page 6, “Experiments”

BLEU

Appears in 19 sentences as: BLEU (20)
In Revisiting Pivot Language Approach for Machine Translation
  1. In this paper, we modify the method in Albrecht and Hwa (2007) to only prepare human reference translations for the training examples, and then evaluate the translations produced by the subject systems against the references using BLEU score (Papineni et al., 2002).
    Page 4, “Translation Selection”
  2. We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions.
    Page 4, “Translation Selection”
  3. In the context of translation selection, y is assigned as the smoothed BLEU score.
    Page 4, “Translation Selection”
  4. Translation quality was evaluated using both the BLEU score proposed by Papineni et al.
    Page 5, “Experiments”
  5. (2002) and also the modified BLEU (BLEU-Fix) score used in the IWSLT 2008 evaluation campaign, where the brevity calculation is modified to use closest reference length instead of shortest reference length.
    Page 5, “Experiments”
  6. Method         BLEU          BLEU-Fix
     Triangulation  33.70/27.46   31.59/25.02
     Transfer       33.52/28.34   31.36/26.20
     Synthetic      34.35/27.21   32.00/26.07
     Combination    38.14/29.32   34.76/27.39
    Page 6, “Experiments”
  7. The results also show that our translation selection method is very effective, which achieved absolute improvements of about 4 and 1 BLEU scores on CRR and ASR inputs, respectively.
    Page 6, “Experiments”
  8. As compared with those in Table 3, the translation quality was greatly improved, with absolute improvements of at least 5.1 and 3.9 BLEU scores on CRR and ASR inputs for system combination results.
    Page 6, “Experiments”
  9. The results show that translating the test set using RBMT systems greatly improved the translation result, with further improvements of about 2 and 1.5 BLEU scores on CRR and ASR inputs, respectively.
    Page 6, “Experiments”
  10. The RBMT systems achieved a BLEU score of 24.36 on the test set.
    Page 6, “Experiments”
  11. Method         BLEU          BLEU-Fix
      Triangulation  45.64/33.15   42.11/31.11
      Transfer       47.18/34.56   43.61/32.17
      Combination    48.42/36.42   45.42/33.52
    Page 7, “Experiments”
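
The smoothed sentence-level BLEU described in the excerpts above can be sketched as follows; this is a minimal Python illustration under the assumption of whitespace tokenization and an additive smoothing constant of 1, neither of which is specified in the excerpts. The brevity penalty follows the closest-reference-length convention mentioned for BLEU-Fix.

import math
from collections import Counter


def ngram_counts(tokens, n):
    """Counts of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def smoothed_sentence_bleu(hypothesis, references, max_n=4, smooth=1.0):
    """Sentence-level BLEU with additive smoothing on the n-gram counts,
    so that a missing higher-order n-gram does not zero out the score."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    log_precision_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = ngram_counts(hyp, n)
        # Clip each hypothesis n-gram by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in refs:
            for ng, c in ngram_counts(ref, n).items():
                max_ref_counts[ng] = max(max_ref_counts[ng], c)
        clipped = sum(min(c, max_ref_counts[ng]) for ng, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_precision_sum += math.log((clipped + smooth) / (total + smooth))
    # Brevity penalty computed against the closest reference length.
    closest_ref_len = min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
    if len(hyp) >= closest_ref_len:
        brevity_penalty = 1.0
    else:
        brevity_penalty = math.exp(1.0 - closest_ref_len / max(len(hyp), 1))
    return brevity_penalty * math.exp(log_precision_sum / max_n)

For instance, smoothed_sentence_bleu("a small test", ["a small test case"]) returns a value strictly between 0 and 1 even though the hypothesis contains no matching 4-gram.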

SMT systems

Appears in 15 sentences as: SMT system (2) SMT Systems (2) SMT systems (11)
In Revisiting Pivot Language Approach for Machine Translation
  1. Then we employ a hybrid method combining RBMT and SMT systems to fill up the data gap for pivot translation, where the source-pivot and pivot-target corpora are independent.
    Page 1, “Abstract”
  2. Unfortunately, large quantities of parallel data are not readily available for some language pairs, therefore limiting the potential use of current SMT systems.
    Page 1, “Introduction”
  3. Experimental results show that (1) the performances of the three pivot methods are comparable when only SMT systems are used.
    Page 2, “Introduction”
  4. Where L is the number of features used in SMT systems.
    Page 3, “Pivot Methods for Phrase-based SMT”
  5. This can be achieved by translating the pivot sentences in the source-pivot corpus to target sentences with the pivot-target SMT system.
    Page 3, “Pivot Methods for Phrase-based SMT”
  6. The other is to obtain source translations for the target sentences in the pivot-target corpus using the pivot-source SMT system.
    Page 3, “Pivot Methods for Phrase-based SMT”
  7. Since it is easier to obtain monolingual corpora than bilingual corpora, we use RBMT systems to translate the available monolingual corpora to obtain synthetic bilingual corpora, which are added to the training data to improve the performance of SMT systems.
    Page 3, “Using RBMT Systems for Pivot Translation”
  8. 5.3 Results by Using SMT Systems
    Page 5, “Experiments”
  9. Table 3: CRR/ASR translation results by using SMT systems
    Page 6, “Experiments”
  10. 5.4 Results by Using both RBMT and SMT Systems
    Page 6, “Experiments”
  11. First, the synthetic Chinese-Spanish corpus can be combined with those produced by the EC and ES SMT systems, which were used in the synthetic method.
    Page 6, “Experiments”
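
The excerpts above describe the two directions of the transfer/synthetic construction: translating the pivot side of the source-pivot corpus into the target language, and translating the pivot side of the pivot-target corpus into the source language. A rough Python sketch of that construction, where the two translate functions are placeholders for whatever pivot-target and pivot-source systems (SMT or RBMT) are available; neither name comes from the paper.

def build_synthetic_corpus(source_pivot_pairs, pivot_target_pairs,
                           translate_pivot_to_target, translate_pivot_to_source):
    """Build synthetic source-target sentence pairs from two independent
    bilingual corpora that share only the pivot language.

    source_pivot_pairs: iterable of (source, pivot) sentence pairs
    pivot_target_pairs: iterable of (pivot, target) sentence pairs
    """
    synthetic = []
    # Direction 1: keep the source side, machine-translate the pivot side
    # of the source-pivot corpus into the target language.
    for source, pivot in source_pivot_pairs:
        synthetic.append((source, translate_pivot_to_target(pivot)))
    # Direction 2: keep the target side, machine-translate the pivot side
    # of the pivot-target corpus into the source language.
    for pivot, target in pivot_target_pairs:
        synthetic.append((translate_pivot_to_source(pivot), target))
    return synthetic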

BLEU scores

Appears in 14 sentences as: BLEU score (6) BLEU scores (9)
In Revisiting Pivot Language Approach for Machine Translation
  1. In this paper, we modify the method in Albrecht and Hwa (2007) to only prepare human reference translations for the training examples, and then evaluate the translations produced by the subject systems against the references using BLEU score (Papineni et al., 2002).
    Page 4, “Translation Selection”
  2. We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions.
    Page 4, “Translation Selection”
  3. In the context of translation selection, y is assigned as the smoothed BLEU score.
    Page 4, “Translation Selection”
  4. Translation quality was evaluated using both the BLEU score proposed by Papineni et al.
    Page 5, “Experiments”
  5. The results also show that our translation selection method is very effective, which achieved absolute improvements of about 4 and 1 BLEU scores on CRR and ASR inputs, respectively.
    Page 6, “Experiments”
  6. As compared with those in Table 3, the translation quality was greatly improved, with absolute improvements of at least 5.1 and 3.9 BLEU scores on CRR and ASR inputs for system combination results.
    Page 6, “Experiments”
  7. The results show that translating the test set using RBMT systems greatly improved the translation result, with further improvements of about 2 and 1.5 BLEU scores on CRR and ASR inputs, respectively.
    Page 6, “Experiments”
  8. The RBMT systems achieved a BLEU score of 24.36 on the test set.
    Page 6, “Experiments”
  9. Table 6: CRR translation results (BLEU scores) by using different RBMT systems
    Page 7, “Discussion”
  10. The BLEU scores are 43.90 and 29.77 for System A and System B, respectively.
    Page 7, “Discussion”
  11. If we compare the results with those only using SMT systems as described in Table 3, the translation quality was greatly improved by at least 3 BLEU scores, even if the translation ac-
    Page 7, “Discussion”

translation model

Appears in 13 sentences as: translation model (10) translation models (3)
In Revisiting Pivot Language Approach for Machine Translation
  1. It multiplies corresponding translation probabilities and lexical weights in source-pivot and pivot-target translation models to induce a new source-target phrase table.
    Page 1, “Introduction”
  2. For example, we can obtain a source-target corpus by translating the pivot sentences in the source-pivot corpus into the target language with pivot-target translation models.
    Page 1, “Introduction”
  3. Following the method described in Wu and Wang (2007), we train the source-pivot and pivot-target translation models using the source-pivot and pivot-target corpora, respectively.
    Page 2, “Pivot Methods for Phrase-based SMT”
  4. Based on these two models, we induce a source-target translation model, in which two important elements need to be induced: phrase translation probability and lexical weight.
    Page 2, “Pivot Methods for Phrase-based SMT”
  5. Then we build a source-target translation model using this corpus.
    Page 3, “Pivot Methods for Phrase-based SMT”
  6. The source-target pairs extracted from this synthetic multilingual corpus can be used to build a source-target translation model.
    Page 3, “Using RBMT Systems for Pivot Translation”
  7. For the synthetic method, we used the ES translation model to translate the English part of the CE corpus to Spanish to construct a synthetic corpus.
    Page 5, “Experiments”
  8. And we also used the BTEC CE1 corpus to build an EC translation model to translate the English part of the ES corpus into Chinese.
    Page 5, “Experiments”
  9. Then we combined these two synthetic corpora to build a Chinese-Spanish translation model.
    Page 5, “Experiments”
  10. Second, the synthetic Chinese-English corpus can be added into the BTEC CE1 corpus to build the CE translation model.
    Page 6, “Experiments”
  11. In this way, the CE corpus and the ES corpus share more English phrases, which enables the Chinese-Spanish translation model induced using the triangulation method to cover more phrase pairs.
    Page 6, “Experiments”
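
The excerpts above describe the triangulation method: the source-pivot and pivot-target phrase tables are multiplied and marginalized over the shared pivot phrases to induce a source-target table. A minimal Python sketch over nested dictionaries (the data layout is an assumption; real phrase tables also carry lexical weights and several feature scores):

from collections import defaultdict


def triangulate_phrase_tables(src_piv_table, piv_tgt_table):
    """Induce a source-target phrase table from a source-pivot and a
    pivot-target table that share the pivot phrase inventory.

    src_piv_table: {source phrase: {pivot phrase: p(pivot | source)}}
    piv_tgt_table: {pivot phrase: {target phrase: p(target | pivot)}}
    """
    src_tgt_table = defaultdict(lambda: defaultdict(float))
    for src, pivot_probs in src_piv_table.items():
        for piv, p_piv_given_src in pivot_probs.items():
            for tgt, p_tgt_given_piv in piv_tgt_table.get(piv, {}).items():
                # Independence of source and target given the pivot phrase:
                # p(target | source) = sum over pivots of
                # p(target | pivot) * p(pivot | source).
                src_tgt_table[src][tgt] += p_tgt_given_piv * p_piv_given_src
    return src_tgt_table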

translation systems

Appears in 8 sentences as: translation system (3) translation systems (5)
In Revisiting Pivot Language Approach for Machine Translation
  1. For translations from one of the systems, this method uses the outputs from other translation systems as pseudo references.
    Page 2, “Introduction”
  2. The advantage of our method is that we do not need to manually label the translations produced by each translation system, thus making our method suitable for translation selection among any set of systems without additional manual work.
    Page 2, “Introduction”
  3. Given a source sentence s, we can translate it into n pivot sentences p1, p2, ..., pn using a source-pivot translation system.
    Page 3, “Pivot Methods for Phrase-based SMT”
  4. We propose a method to select the optimal translation from those produced by various translation systems.
    Page 4, “Translation Selection”
  5. For each translation, this method uses the outputs from other translation systems as pseudo references.
    Page 4, “Translation Selection”
  6. can easily retrain the learner under different conditions, therefore enabling our method to be applied to sentence-level translation selection from any sets of translation systems without any additional human work.
    Page 4, “Translation Selection”
  7. To select translations among the outputs produced by different pivot translation systems, we used SVM-light (Joachims, 1999) to perform support vector regression with the linear kernel.
    Page 5, “Experiments”
  8. We also extracted the Chinese-Spanish (CS) corpus to build a standard CS translation system, which is denoted as Standard.
    Page 8, “Discussion”
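
The excerpts above summarize the selection scheme: each system's output is scored against the other systems' outputs, which serve as pseudo references, and the highest-scoring candidate is kept. A Python sketch of the selection step, assuming a trained regression model with a scikit-learn-style predict interface and a feature extractor along the lines of the n-gram features listed further below; all three names are illustrative placeholders, not the paper's code.

def select_translation(candidates, score_model, extract_features):
    """Pick the best translation among the outputs of several systems
    for the same source sentence.

    candidates: list of candidate translations, one per system
    score_model: regression model mapping a feature vector to a predicted
                 quality score (trained against smoothed sentence-level BLEU)
    extract_features: function(candidate, pseudo_references) -> list of floats
    """
    best_candidate, best_score = None, float("-inf")
    for i, candidate in enumerate(candidates):
        # The outputs of all other systems act as pseudo references.
        pseudo_references = candidates[:i] + candidates[i + 1:]
        features = extract_features(candidate, pseudo_references)
        score = score_model.predict([features])[0]
        if score > best_score:
            best_candidate, best_score = candidate, score
    return best_candidate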

translation probability

Appears in 5 sentences as: translation probabilities (1) Translation Probability (1) translation probability (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. It multiplies corresponding translation probabilities and lexical weights in source-pivot and pivot-target translation models to induce a new source-target phrase table.
    Page 1, “Introduction”
  2. Based on these two models, we induce a source-target translation model, in which two important elements need to be induced: phrase translation probability and lexical weight.
    Page 2, “Pivot Methods for Phrase-based SMT”
  3. Phrase Translation Probability: We induce the phrase translation probability by assuming independence between the source and target phrases given the pivot phrase.
    Page 2, “Pivot Methods for Phrase-based SMT”
  4. (2003), there are two important elements in the lexical weight: word alignment information a in a phrase pair (s, t) and lexical translation probability w(s
    Page 2, “Pivot Methods for Phrase-based SMT”
  5. Then we estimate the lexical translation probability as shown in Eq.
    Page 3, “Pivot Methods for Phrase-based SMT”
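
Written out, the independence assumption in the excerpts above gives the induced phrase translation probability as a marginalization over the pivot phrases shared by the two phrase tables; the LaTeX below uses standard phrase-based notation, which may differ cosmetically from the paper's symbols.

\phi(\bar{t} \mid \bar{s})
  = \sum_{\bar{p}} \phi(\bar{t} \mid \bar{p}, \bar{s}) \, \phi(\bar{p} \mid \bar{s})
  \approx \sum_{\bar{p}} \phi(\bar{t} \mid \bar{p}) \, \phi(\bar{p} \mid \bar{s})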

phrase table

Appears in 4 sentences as: phrase table (2) phrase tables (2)
In Revisiting Pivot Language Approach for Machine Translation
  1. The first is based on phrase table multiplication (Cohn and Lapata, 2007; Wu and Wang, 2007).
    Page 1, “Introduction”
  2. It multiplies corresponding translation probabilities and lexical weights in source-pivot and pivot-target translation models to induce a new source-target phrase table.
    Page 1, “Introduction”
  3. For each system, we used three different alignment heuristics (grow, grow-diag, grow-diag-final) to obtain the final alignment results, and then constructed three different phrase tables.
    Page 5, “Experiments”
  4. In order to further analyze the translation results, we evaluated the above systems by examining the coverage of the phrase tables over the test phrases.
    Page 6, “Experiments”

significantly improves

Appears in 4 sentences as: significant improvement (1) significantly improve (1) significantly improves (2)
In Revisiting Pivot Language Approach for Machine Translation
  1. Experimental results on spoken language translation show that this hybrid method significantly improves the translation quality, which outperforms the method using a source-target corpus of the same size.
    Page 1, “Abstract”
  2. Experimental results indicate that our method achieves consistent and significant improvement over individual translation outputs.
    Page 1, “Abstract”
  3. And this translation quality is higher than that of the translations produced by the system trained with a real Chinese-Spanish corpus; (3) Our sentence-level translation selection method consistently and significantly improves the translation quality over individual translation outputs in all of our experiments.
    Page 2, “Introduction”
  4. Experimental results indicate that this method can consistently and significantly improve translation quality over individual translation outputs.
    Page 8, “Conclusion”

sentence-level

Appears in 4 sentences as: sentence-level (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. And this translation quality is higher than that of the translations produced by the system trained with a real Chinese-Spanish corpus; (3) Our sentence-level translation selection method consistently and significantly improves the translation quality over individual translation outputs in all of our experiments.
    Page 2, “Introduction”
  2. We regard sentence-level translation selection as a machine translation (MT) evaluation problem and formalize this problem with a regression learning model.
    Page 4, “Translation Selection”
  3. We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions.
    Page 4, “Translation Selection”
  4. can easily retrain the learner under different conditions, therefore enabling our method to be applied to sentence-level translation selection from any sets of translation systems without any additional human work.
    Page 4, “Translation Selection”
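
In this formalization, each candidate translation is mapped to a feature vector x and the learner infers a function whose value approximates a smoothed sentence-level BLEU score; with the linear kernel used in the experiments, the learned scorer has the form sketched below in LaTeX (the exact loss and parameterization are not spelled out in the excerpts).

\hat{y} = f(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x} + b,
\qquad \text{with training targets } y = \mathrm{BLEU}_{\mathrm{smoothed}}(\text{candidate}, \text{references})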

sentence pairs

Appears in 4 sentences as: sentence pairs (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. Another way to use the synthetic multilingual corpus is to add the source-pivot or pivot-target sentence pairs in this corpus to the training data to rebuild the source-pivot or pivot-target SMT model.
    Page 3, “Using RBMT Systems for Pivot Translation”
  2. For English-Spanish translation, we selected 400k sentence pairs from the Europarl corpus that are close to the English parts of both the BTEC CE corpus and the BTEC ES corpus.
    Page 5, “Experiments”
  3. We used about 70k sentence pairs for CE model training, while Wang et al.
    Page 8, “Conclusion”
  4. (2008) used about 100k sentence pairs, a CE translation dictionary and more monolingual corpora for model training.
    Page 8, “Conclusion”

phrase pairs

Appears in 4 sentences as: phrase pair (1) phrase pairs (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. (2003), there are two important elements in the lexical weight: word alignment information a in a phrase pair (s, t) and lexical translation probability w(s
    Page 2, “Pivot Methods for Phrase-based SMT”
  2. Let a1 and a2 represent the word alignment information inside the phrase pairs (s, p) and (p, t)
    Page 2, “Pivot Methods for Phrase-based SMT”
  3. It can also change the distribution of some phrase pairs and reinforce some phrase pairs relative to the test set.
    Page 4, “Using RBMT Systems for Pivot Translation”
  4. In this way, the CE corpus and the ES corpus share more English phrases, which enables the Chinese-Spanish translation model induced using the triangulation method to cover more phrase pairs.
    Page 6, “Experiments”

model training

Appears in 4 sentences as: model trained (1) model training (3)
In Revisiting Pivot Language Approach for Machine Translation
  1. Table 2 describes the data used for model training in this paper, including the BTEC (Basic Travel Expression Corpus) Chinese-English (CE) corpus and the BTEC English-Spanish (ES) corpus provided by IWSLT 2008 organizers, the HIT Olympic CE corpus (2004-863-008) and the Europarl ES corpus.
    Page 4, “Experiments”
  2. Here, we used the synthetic CE Olympic corpus to train a model, which was interpolated with the CE model trained with both the BTEC CE1 corpus and the synthetic BTEC corpus to obtain an interpolated CE translation model.
    Page 7, “Experiments”
  3. We used about 70k sentence pairs for CE model training, while Wang et al.
    Page 8, “Conclusion”
  4. (2008) used about 100k sentence pairs, a CE translation dictionary and more monolingual corpora for model training.
    Page 8, “Conclusion”

machine translation

Appears in 4 sentences as: machine translation (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. This paper revisits the pivot language approach for machine translation.
    Page 1, “Abstract”
  2. Current statistical machine translation (SMT) systems rely on large parallel and monolingual training corpora to produce translations of relatively high quality.
    Page 1, “Introduction”
  3. In order to fill up this data gap, we make use of rule-based machine translation (RBMT) systems to translate the pivot sentences in the source-pivot or pivot-target
    Page 1, “Introduction”
  4. We regard sentence-level translation selection as a machine translation (MT) evaluation problem and formalize this problem with a regression learning model.
    Page 4, “Translation Selection”

development set

Appears in 4 sentences as: development set (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. For Chinese-English-Spanish translation, we used the development set (devset3) released for the pivot task as the test set, which contains 506 source sentences, with 7 reference translations in English and Spanish.
    Page 5, “Experiments”
  2. To be capable of tuning parameters on our systems, we created a development set of 1,000 sentences taken from the training sets, with 3 reference translations in both English and Spanish.
    Page 5, “Experiments”
  3. This development set is also used to train the regression learning model.
    Page 5, “Experiments”
  4. We ran the decoder with its default settings and then used Moses’ implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set.
    Page 5, “Experiments”

Chinese-English

Appears in 4 sentences as: Chinese-English (4)
In Revisiting Pivot Language Approach for Machine Translation
  1. Table 2 describes the data used for model training in this paper, including the BTEC (Basic Travel Expression Corpus) Chinese-English (CE) corpus and the BTEC English-Spanish (ES) corpus provided by IWSLT 2008 organizers, the HIT Olympic CE corpus (2004-863-008) and the Europarl ES corpus.
    Page 4, “Experiments”
  2. For Chinese-English translation, we mainly used BTEC CE1 corpus.
    Page 5, “Experiments”
  3. We used two commercial RBMT systems in our experiments: System A for Chinese-English bidirectional translation and System B for English-Chinese and English-Spanish translation.
    Page 5, “Experiments”
  4. Second, the synthetic Chinese-English corpus can be added into the BTEC CE1 corpus to build the CE translation model.
    Page 6, “Experiments”

n-gram

Appears in 3 sentences as: n-gram (3)
In Revisiting Pivot Language Approach for Machine Translation
  1. We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions.
    Page 4, “Translation Selection”
  2. 1-4 n-gram precisions against pseudo references (1 ≤ n ≤ 4)
    Page 4, “Translation Selection”
  3. 15-19 n-gram precision against a target corpus (1 ≤ n ≤ 5)
    Page 4, “Translation Selection”
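
The excerpts above list two groups of features: n-gram precisions against the pseudo references (1 ≤ n ≤ 4) and n-gram precisions against a target-language corpus (1 ≤ n ≤ 5). A Python sketch of how such features might be computed; the whitespace tokenization and the representation of the corpus n-grams as precomputed sets are assumptions, and the remaining features of the full model are omitted.

from collections import Counter


def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def ngram_precision_features(candidate, pseudo_references, corpus_ngrams):
    """n-gram precision features for one candidate translation.

    corpus_ngrams: dict mapping n to the set of n-grams observed in a large
    target-language corpus, precomputed once.
    """
    cand = candidate.split()
    refs = [r.split() for r in pseudo_references]
    features = []
    # Precisions against the pseudo references, n = 1..4.
    for n in range(1, 5):
        cand_counts = ngram_counts(cand, n)
        max_ref_counts = Counter()
        for ref in refs:
            for ng, c in ngram_counts(ref, n).items():
                max_ref_counts[ng] = max(max_ref_counts[ng], c)
        matched = sum(min(c, max_ref_counts[ng]) for ng, c in cand_counts.items())
        features.append(matched / max(sum(cand_counts.values()), 1))
    # Precisions against the target-language corpus, n = 1..5.
    for n in range(1, 6):
        cand_counts = ngram_counts(cand, n)
        seen = corpus_ngrams.get(n, set())
        matched = sum(c for ng, c in cand_counts.items() if ng in seen)
        features.append(matched / max(sum(cand_counts.values()), 1))
    return features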

language pairs

Appears in 3 sentences as: language pairs (2) languages pairs (1)
In Revisiting Pivot Language Approach for Machine Translation
  1. Unfortunately, large quantities of parallel data are not readily available for some language pairs, therefore limiting the potential use of current SMT systems.
    Page 1, “Introduction”
  2. It is especially difficult to obtain such a domain-specific corpus for some language pairs such as Chinese to Spanish translation.
    Page 1, “Introduction”
  3. For many source-target language pairs, commercial pivot-source and/or pivot-target RBMT systems are available on the market.
    Page 3, “Using RBMT Systems for Pivot Translation”

feature vector

Appears in 3 sentences as: feature vector (3)
In Revisiting Pivot Language Approach for Machine Translation
  1. A regression learning method is used to infer a function that maps a feature vector (which measures the similarity of a translation to the pseudo references) to a score that indicates the quality of the translation.
    Page 2, “Introduction”
  2. The regression objective is to infer a function that maps a feature vector (which measures the similarity of a translation from one system to the pseudo references) to a score that indicates the quality of the translation.
    Page 4, “Translation Selection”
  3. The input sentence is represented as a feature vector X, which is extracted from the input sentence and from comparisons against the pseudo references.
    Page 4, “Translation Selection”
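
The experiments use SVM-light for support vector regression with a linear kernel; the sketch below reproduces the same setup with scikit-learn purely for illustration (the library, the toy numbers and the variable names are assumptions, not the paper's code). The feature vectors would come from an extractor like the one sketched above under the n-gram topic, and the regression targets from the smoothed sentence-level BLEU against the development-set references.

from sklearn.svm import SVR

# Toy development data: one feature vector per candidate translation
# (e.g. n-gram precisions against pseudo references) and, as regression
# target, its smoothed sentence-level BLEU against the human references.
X_dev = [[0.80, 0.55, 0.40, 0.30], [0.45, 0.20, 0.05, 0.00], [0.90, 0.70, 0.55, 0.45]]
y_dev = [0.42, 0.15, 0.58]

# Support vector regression with a linear kernel, analogous in spirit to
# the SVM-light configuration reported in the experiments.
score_model = SVR(kernel="linear")
score_model.fit(X_dev, y_dev)

# At selection time, each candidate gets a predicted quality score and the
# highest-scoring candidate is kept.
X_test = [[0.70, 0.50, 0.35, 0.25], [0.60, 0.60, 0.40, 0.30]]
print(score_model.predict(X_test))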

word alignment

Appears in 3 sentences as: word alignment (3)
In Revisiting Pivot Language Approach for Machine Translation
  1. (2003), there are two important elements in the lexical weight: word alignment information a in a phrase pair (s, t) and lexical translation probability w(s
    Page 2, “Pivot Methods for Phrase-based SMT”
  2. Let a1 and a2 represent the word alignment information inside the phrase pairs (s, p) and (p, t)
    Page 2, “Pivot Methods for Phrase-based SMT”
  3. Based on the induced word alignment information, we estimate the co-occurrence frequencies of word pairs directly from the induced phrase
    Page 2, “Pivot Methods for Phrase-based SMT”
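
The excerpts above describe composing the word alignments inside matched source-pivot and pivot-target phrase pairs and estimating lexical translation probabilities from the resulting word-pair co-occurrence counts. A rough Python sketch under the assumption that alignments are stored as sets of (i, j) index pairs; the direction of the lexical probability, w(t | s), is chosen for illustration because the excerpt is cut off before specifying it.

from collections import defaultdict


def compose_alignments(a1, a2):
    """Induce a source-target word alignment from a source-pivot alignment
    a1 and a pivot-target alignment a2: (s, t) is aligned whenever some
    pivot position p links s in a1 and t in a2."""
    pivot_to_target = defaultdict(set)
    for p, t in a2:
        pivot_to_target[p].add(t)
    return {(s, t) for s, p in a1 for t in pivot_to_target[p]}


def lexical_translation_probabilities(aligned_phrase_pairs):
    """Estimate w(t | s) from co-occurrence counts of aligned word pairs
    collected over all induced phrase pairs.

    aligned_phrase_pairs: iterable of (source_words, target_words, alignment),
    where alignment is a set of (source_index, target_index) pairs."""
    cooccurrence = defaultdict(lambda: defaultdict(int))
    for source_words, target_words, alignment in aligned_phrase_pairs:
        for i, j in alignment:
            cooccurrence[source_words[i]][target_words[j]] += 1
    probabilities = {}
    for s, target_counts in cooccurrence.items():
        total = sum(target_counts.values())
        probabilities[s] = {t: c / total for t, c in target_counts.items()}
    return probabilities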
