A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
Li, Junhui and Marton, Yuval and Resnik, Philip and Daumé III, Hal

Article Structure

Abstract

This paper explores a simple and effective unified framework for incorporating soft linguistic reordering constraints into a hierarchical phrase-based translation system: 1) a syntactic reordering model that explores reorderings for context free grammar rules; and 2) a semantic reordering model that focuses on the reordering of predicate-argument structures.

Introduction

Reordering models in statistical machine translation (SMT) model the word order difference when translating from one language to another.

HPB Translation Model: an Overview

In HPB models (Chiang, 2007), synchronous rules take the form X → ⟨γ, α, ∼⟩, where X is the non-terminal symbol, γ and α are strings of lexical items and non-terminals on the source and target sides, respectively, and ∼ indicates the one-to-one correspondence between non-terminals in γ and α.
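To make the rule form concrete, here is a minimal Python sketch (mine, not the paper's) of such a synchronous rule: non-terminals are represented as shared integer indices, which directly encode the correspondence ∼. The example rule, which swaps its two sub-phrases around the Chinese particle 的, is the kind of pattern hierarchical grammars extract.

    from dataclasses import dataclass

    # A token is either a terminal (str) or a linked non-terminal slot (int k,
    # standing for X_k); the shared index k encodes the correspondence "~"
    # between the source and target sides.
    Token = str | int

    @dataclass
    class SCFGRule:
        """A hierarchical rule X -> <gamma, alpha, ~> in the style of Chiang (2007)."""
        source: list[Token]  # gamma: source-side terminals and non-terminal slots
        target: list[Token]  # alpha: target-side terminals and non-terminal slots

        def correspondence(self) -> dict[int, int]:
            """Map each non-terminal's source position to its target position."""
            src_pos = {tok: i for i, tok in enumerate(self.source) if isinstance(tok, int)}
            return {src_pos[tok]: j for j, tok in enumerate(self.target) if isinstance(tok, int)}

    # A rule that swaps its sub-phrases, roughly Chinese "X1 的 X2" -> "the X2 of X1":
    rule = SCFGRule(source=[1, "的", 2], target=["the", 2, "of", 1])
    print(rule.correspondence())  # {2: 1, 0: 3}: the two slots swap around the function words

Swapping rules of this kind are exactly where the soft reordering constraints below come into play: the decoder must decide whether a monotone or swapped derivation better fits a given constituent or role pair.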

Unified Linguistic Reordering Models

As mentioned earlier, the linguistic reordering unit is the syntactic constituent for syntactic reordering, and the semantic role for semantic reordering.

Experiments

We have presented our unified approach to incorporating syntactic and semantic soft reordering constraints in an HPB system.

Discussion

The trend of the results, summarized as performance gain over the baseline and MR08 systems averaged over all test sets, is presented in Table 6.

Related Work

Syntax-based reordering: Some previous work pre-ordered words in the source sentences, so that the word order of source and target sentences is similar.

Conclusion and Future Work

In this paper, we have presented a unified reordering framework to incorporate soft linguistic constraints (of syntactic or semantic nature) into the HPB translation model.

Topics

semantic roles

Appears in 23 sentences as: semantic role (10) semantic roles (15)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Rather than introducing reordering models on either the word level or the translation phrase level, we propose a unified approach to modeling reordering on the linguistic unit level, e.g., syntactic constituents and semantic roles.
    Page 1, “Introduction”
  2. The reordering unit falls into multiple granularities, from single words to more complex constituents and semantic roles, and often crosses translation phrases.
    Page 1, “Introduction”
  3. To show the effectiveness of our reordering models, we integrate both syntactic constituent reordering models and semantic role reordering models into a state-of-the-art HPB system (Chiang, 2007; Dyer et al., 2010).
    Page 1, “Introduction”
  4. To this end, we employ the same reordering framework as syntactic constituent reordering and focus on semantic roles in a PAS.
    Page 2, “Introduction”
  5. We introduce novel soft reordering constraints, using syntactic constituents or semantic roles, composed over word alignment information in translation rules used during decoding time;
    Page 2, “Introduction”
  6. Section 4 gives our experimental results and Section 5 discusses the behavior difference between syntactic constituent reordering and semantic role reordering.
    Page 2, “Introduction”
  7. As mentioned earlier, the linguistic reordering unit is the syntactic constituent for syntactic reordering, and the semantic role for semantic reordering.
    Page 2, “Unified Linguistic Reordering Models”
  8. Note that we refer to all core arguments, adjuncts, and predicates as semantic roles; thus we say the PAS in Figure 1 has 4 roles.
    Page 2, “Unified Linguistic Reordering Models”
  9. Treating the two forms of reorderings in a unified way, the semantic reordering model is obtainable by regarding a PAS as a CFG rule and considering a semantic role as a constituent.
    Page 2, “Unified Linguistic Reordering Models”
  10. Although the semantic reordering model is structured in precisely the same way, we use different feature sets to predict the reordering between two semantic roles.
    Page 4, “Unified Linguistic Reordering Models”
  11. To get the two semantic reordering model feature values, we simply use Algorithm 1 and its associated functions from F1 to F5, replacing a CFG rule cfg with a PAS pas, and a constituent XP_i with a semantic role R_i.
    Page 5, “Unified Linguistic Reordering Models” (see the sketch after this list)
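As a concrete reading of items 8, 9, and 11, the sketch below (an illustration under my own assumptions, not the paper's code) treats a PAS as a pseudo CFG rule whose "constituents" are its semantic roles in source order, with the predicate itself counted as a role; the adjacent role pairs it enumerates are the units whose reordering the semantic model scores. All labels and spans are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Role:
        label: str             # e.g. "A0", "AM-TMP", or "Pred" for the predicate itself
        span: tuple[int, int]  # source-word indices covered by this role

    @dataclass
    class PAS:
        """A predicate-argument structure viewed as a pseudo CFG rule: its
        right-hand-side 'constituents' are the roles in source order."""
        roles: list[Role]

        def adjacent_pairs(self):
            # The unified model scores reorderings of adjacent units, exactly as
            # it scores adjacent child constituents of a CFG rule.
            return list(zip(self.roles, self.roles[1:]))

    # A hypothetical 4-role PAS (arguments, an adjunct, and the predicate all count):
    pas = PAS([Role("A0", (0, 1)), Role("AM-TMP", (2, 3)),
               Role("Pred", (4, 4)), Role("A1", (5, 8))])
    for left, right in pas.adjacent_pairs():
        print(left.label, "+", right.label)  # A0 + AM-TMP, AM-TMP + Pred, Pred + A1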

translation model

Appears in 10 sentences as: translation model (8) translation models (2)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. The popular distortion or lexicalized reordering models in phrase-based SMT make good local predictions by focusing on reordering on word level, while the synchronous context free grammars in hierarchical phrase-based (HPB) translation models are capable of handling nonlocal reordering on the translation phrase level.
    Page 1, “Introduction”
  2. The general ideas, however, are applicable to other translation models , e.g., phrase-based model, as well.
    Page 1, “Introduction”
  3. Section 2 provides an overview of the HPB translation model.
    Page 2, “Introduction”
  4. Each such rule is associated with a set of translation model features {φi}, such as phrase translation probability p(α|γ) and its inverse p(γ|α), the lexical translation probability p_lex(α|γ) and its inverse p_lex(γ|α), and a rule penalty that affects preference for longer or shorter derivations.
    Page 2, “HPB Translation Model: an Overview” (see the sketch after this list)
  5. For models with syntactic reordering, we add two new features (i.e., one for the leftmost reordering model and the other for the rightmost reordering model) into the log-linear translation model in Eq.
    Page 4, “Unified Linguistic Reordering Models”
  6. For the semantic reordering models, we also add two new features into the log-linear translation model .
    Page 5, “Unified Linguistic Reordering Models”
  7. Our basic baseline system employs 19 basic features: a language model feature, 7 translation model features, word penalty, unknown word penalty, the glue rule, date, number and 6 pass-through features.
    Page 5, “Experiments”
  8. Both are close to our work; however, our model generates reordering features that are integrated into the log-linear translation model during decoding.
    Page 8, “Related Work”
  9. In the soft constraint or reordering model approach, Liu and Gildea (2010) modeled the reordering/deletion of source-side semantic roles in a tree-to-string translation model.
    Page 9, “Related Work”
  10. In this paper, we have presented a unified reordering framework to incorporate soft linguistic constraints (of syntactic or semantic nature) into the HPB translation model.
    Page 9, “Conclusion and Future Work”
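As a rough illustration of how the features in item 4 above, plus the two added reordering features, enter a log-linear model, the sketch below combines hypothetical feature values into a weighted sum of logs. Every name and number here is invented for the example; in a real system the weights come from tuning, not from a constant.

    import math

    # Hypothetical feature values for one rule application. The four conditional
    # probabilities and the rule penalty mirror the feature set quoted in item 4;
    # "l_reorder"/"r_reorder" stand in for the two added reordering features.
    features = {
        "p(a|g)": 0.31, "p(g|a)": 0.24,          # phrase translation probabilities
        "p_lex(a|g)": 0.12, "p_lex(g|a)": 0.09,  # lexical translation probabilities
        "rule_penalty": math.exp(-1),            # constant per-rule penalty
        "l_reorder": 0.62, "r_reorder": 0.55,    # leftmost/rightmost reordering probs
    }
    weights = {name: 1.0 for name in features}   # placeholder; tuned in practice

    # Log-linear model: score = sum_i w_i * log(phi_i)
    score = sum(weights[name] * math.log(value) for name, value in features.items())
    print(f"score contribution of this rule: {score:.3f}")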

BLEU

Appears in 8 sentences as: BLEU (9)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. We use the NIST MT 06 dataset (1664 sentence pairs) for tuning, and the NIST MT 03, 05, and 08 datasets (919, 1082, and 1357 sentence pairs, respectively) for evaluation. We use BLEU (Papineni et al., 2002) for both tuning and evaluation.
    Page 5, “Experiments”
  2. Our first group of experiments investigates whether the syntactic reordering models are able to improve translation quality in terms of BLEU.
    Page 6, “Experiments”
  3. Table 5: System performance in BLEU scores.
    Page 7, “Experiments”
  4. The semantic reordering models also achieve a significant gain of 0.8 BLEU on average over the baseline system, demonstrating the effectiveness of PAS-based reordering.
    Page 7, “Experiments”
  5. However, the gain diminishes to 0.3 BLEU on the MR08 system.
    Page 7, “Experiments”
  6. The two models collectively achieve a gain of up to 1.4 BLEU over the baseline and 1.0 BLEU over MR08 on average, which is shown in the rows of “+syn+sem” in Table 5.
    Page 7, “Experiments”
  7. Table 6: Performance gain in BLEU over baseline and MR08 systems averaged over all test sets.
    Page 7, “Discussion”
  8. Table 9: Performance (BLEU score) comparison between non-oracle and oracle experiments.
    Page 8, “Discussion”

maximum entropy

Appears in 7 sentences as: maximum entropy (7)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. In order to predict either the leftmost or rightmost reordering type for two adjacent constituents, we use a maximum entropy classifier to estimate the probability of the reordering type rt ∈ {M, DM, S, DS} as follows:
    Page 3, “Unified Linguistic Reordering Models” (see the sketch after this list)
  2. For each pair of constituents, it first extracts its leftmost and rightmost reordering types (line 6) and then gets their respective probabilities returned by the maximum entropy classifiers defined in Section 3.1.
    Page 4, “Unified Linguistic Reordering Models”
  3. To validate this conjecture on our translation test data, we compare the reordering performance among the MR08 system, the improved systems and the maximum entropy classifiers.
    Page 7, “Discussion”
  4. Then we evaluate the automatic reordering outputs generated from both our translation systems and maximum entropy classifiers.
    Page 7, “Discussion”
  5. Potential improvement analysis: Table 7 also shows that our current maximum entropy classifiers have room for improvement, especially for semantic reordering.
    Page 8, “Discussion”
  6. We clearly see that using gold syntactic reordering types significantly improves the performance (e.g., 34.9 vs. 33.4 on average) and there is still some room for improvement by building better maximum entropy classifiers (e.g., 34.9 vs. 34.3).
    Page 8, “Discussion”
  7. Ge (2010) presented a syntax-driven maximum entropy reordering model that predicted the source word translation order.
    Page 9, “Related Work”
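A minimal sketch of the classifier in item 1 above, assuming standard binary indicator features f(x, rt) and softmax normalization, which is what a maximum entropy model reduces to; the feature names and weights are hypothetical.

    import math

    REORDER_TYPES = ["M", "DM", "S", "DS"]  # (discontinuous) monotone / swap

    def maxent_prob(active_features: set[str],
                    weights: dict[tuple[str, str], float]) -> dict[str, float]:
        """P(rt | x) = exp(sum_i w_i * f_i(x, rt)) / Z(x) with binary features:
        f_i fires iff a context feature is active and the class matches."""
        scores = {rt: sum(weights.get((f, rt), 0.0) for f in active_features)
                  for rt in REORDER_TYPES}
        z = sum(math.exp(s) for s in scores.values())
        return {rt: math.exp(s) / z for rt, s in scores.items()}

    # Hypothetical weights, e.g. from an L1-regularized trainer:
    w = {("left=NP", "M"): 1.2, ("right=VP", "M"): 0.8, ("left=NP", "S"): -0.4}
    print(maxent_prob({"left=NP", "right=VP"}, w))  # P(M) is highest here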

parse tree

Appears in 7 sentences as: Parse tree (1) parse tree (3) parse trees (3)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. According to the annotation principles in (Chinese) PropBank (Palmer et al., 2005; Xue and Palmer, 2009), all the roles in a PAS map to a corresponding constituent in the parse tree, and these constituents (e.g., NPs and VBD in Figure 1) do not overlap with each other.
    Page 2, “Unified Linguistic Reordering Models”
  2. parse tree and its word alignment links to the target language.
    Page 3, “Unified Linguistic Reordering Models”
  3. Given a hypothesis H with its alignment a, it traverses all CFG rules in the parse tree and sees if two adjacent constituents are conditioned to trigger the reordering models (lines 2-4).
    Page 4, “Unified Linguistic Reordering Models”
  4. Input: sentence f in the source language; parse tree t of f; all CFG rules {cfg} in t; hypothesis H spanning from word w_n1 to w_n2; alignment a of H. Output: log-probabilities of the syntactic leftmost and rightmost reordering models.
     1. set l_prob = r_prob = 0.0
     2. foreach cfg in {cfg}
     3.   foreach pair XP_i and XP_i+1 in cfg
     4.     if F1(w_n1, w_n2, XP_i) = false or F1(w_n1, w_n2, XP_i+1) = false or F2(H, cfg, XP_i, XP_i+1) = false
     5.       continue
     6.     (l_type, r_type) = F3(H, a, XP_i, XP_i+1)
     7.     l_prob += log F4(l_type, cfg, XP_i, XP_i+1)
     8.     r_prob += log F5(r_type, cfg, XP_i, XP_i+1)
     9. return (l_prob, r_prob)
    Page 5, “Unified Linguistic Reordering Models” (see the Python sketch after this list)
  5. To obtain syntactic parse trees and semantic roles on the tuning and test datasets, we first parse the source sentences with the Berkeley Parser (Petrov and Klein, 2007), trained on the Chinese Treebank 7.0 (Xue et al., 2005).
    Page 5, “Experiments”
  6. In order to understand how well the MR08 system respects their reordering preference, we use the gold alignment dataset LDC2006E86, in which the source sentences are from the Chinese Treebank, and thus both the gold parse trees and gold predicate-argument structures are available.
    Page 6, “Experiments”
  7. … (2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
    Page 8, “Related Work”
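The following is a direct Python transcription of Algorithm 1 as quoted in item 4 above. The functions F1 to F5 are passed in as callables because their definitions are not reproduced on this page (F1 and F2 gate the pair, F3 extracts the two reordering types, F4 and F5 look up the maxent probabilities); the assumption that a rule object exposes its ordered child constituents as children is mine.

    import math

    def syntactic_reordering_logprobs(cfg_rules, hyp, align, n1, n2,
                                      f1, f2, f3, f4, f5):
        """Sketch of Algorithm 1: accumulate the leftmost and rightmost
        reordering log-probabilities over adjacent constituent pairs."""
        l_prob = r_prob = 0.0
        for cfg in cfg_rules:
            # walk adjacent pairs (XP_i, XP_i+1) of the rule's children
            for xp_i, xp_j in zip(cfg.children, cfg.children[1:]):
                if not f1(n1, n2, xp_i) or not f1(n1, n2, xp_j):
                    continue  # a constituent falls outside the hypothesis span
                if not f2(hyp, cfg, xp_i, xp_j):
                    continue  # this pair does not trigger the reordering model
                l_type, r_type = f3(hyp, align, xp_i, xp_j)
                l_prob += math.log(f4(l_type, cfg, xp_i, xp_j))
                r_prob += math.log(f5(r_type, cfg, xp_i, xp_j))
        return l_prob, r_prob

The two returned values are exactly the two feature values that the paper adds to the log-linear model for a hypothesis (leftmost and rightmost).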

soft constraints

Appears in 7 sentences as: soft constraint (3) soft constraints (4)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. We develop novel features based on both models and use them as soft constraints to guide the translation process.
    Page 1, “Abstract”
  2. Our stronger baseline employs, in addition, the fine-grained syntactic soft constraint features of Marton and Resnik (2008), hereafter MR08.
    Page 5, “Experiments”
  3. The syntactic soft constraint features include both MR08 exact-matching and cross-boundary constraints (denoted XP= and XP+).
    Page 5, “Experiments”
  4. This suggests that our syntactic reordering features interact well with the MR08 syntactic soft constraints: the XP+ and XP= features focus on a single constituent each, while our reordering features focus on a pair of constituents each.
    Page 6, “Experiments”
  5. Another approach in previous work added soft constraints as weighted features in the SMT decoder to reward good reorderings and penalize bad ones.
    Page 8, “Related Work”
  6. … (Mylonakis and Sima’an, 2011), the rules are sparser than SCFG with nameless non-terminals (i.e., Xs) and soft constraints.
    Page 9, “Related Work”
  7. In the soft constraint or reordering model approach, Liu and Gildea (2010) modeled the reordering/deletion of source-side semantic roles in a tree-to-string translation model.
    Page 9, “Related Work”

phrase-based

Appears in 6 sentences as: phrase-based (7)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. This paper explores a simple and effective unified framework for incorporating soft linguistic reordering constraints into a hierarchical phrase-based translation system: 1) a syntactic reordering model that explores reorderings for context free grammar rules; and 2) a semantic reordering model that focuses on the reordering of predicate-argument structures.
    Page 1, “Abstract”
  2. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 1, “Abstract”
  3. The popular distortion or lexicalized reordering models in phrase-based SMT make good local predictions by focusing on reordering on word level, while the synchronous context free grammars in hierarchical phrase-based (HPB) translation models are capable of handling nonlocal reordering on the translation phrase level.
    Page 1, “Introduction”
  4. The general ideas, however, are applicable to other translation models, e.g., phrase-based model, as well.
    Page 1, “Introduction”
  5. Last, we also note that recent work on non-syntax-based reorderings in (flat) phrase-based models (Cherry, 2013; Feng et al., 2013) can also be potentially adopted to HPB models.
    Page 9, “Related Work”
  6. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 9, “Conclusion and Future Work”

word alignment

Appears in 5 sentences as: word alignment (4) word alignment: (1)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Our syntactic constituent reordering model considers context free grammar (CFG) rules in the source language and predicts the reordering of their elements on the target side, using word alignment information.
    Page 1, “Introduction”
  2. We introduce novel soft reordering constraints, using syntactic constituents or semantic roles, composed over word alignment information in translation rules used during decoding time;
    Page 2, “Introduction”
  3. parse tree and its word alignment links to the target language.
    Page 3, “Unified Linguistic Reordering Models”
  4. Unlike the conventional phrase and lexical translation features, whose values are phrase pair-determined and thus can be calculated offline, the value of the reordering features can only be obtained during decoding time, and requires word alignment information as well.
    Page 4, “Unified Linguistic Reordering Models”
  5. Before we present the algorithm integrating the reordering models, we define the following functions by assuming XP_i and XP_i+1 are the constituent pair of interest in CFG rule cfg, H is the translation hypothesis and a is its word alignment:
    Page 4, “Unified Linguistic Reordering Models” (see the sketch after this list)
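Item 5's functions boil down to this kind of alignment bookkeeping. The sketch below shows one plausible way to read a leftmost reordering type off a word alignment by comparing the leftmost aligned target words of two adjacent source spans; the M/S versus DM/DS split used here (a gap between the two target spans) and all data are my assumptions for illustration, not the paper's exact definitions.

    def leftmost_reorder_type(src_span_a, src_span_b, align):
        """Classify the leftmost reordering of two adjacent source spans as
        M, S, DM, or DS. align maps a source index to its target indices.
        Assumes both spans have at least one aligned word."""
        def tgt(span):
            lo, hi = span
            return sorted(j for i in range(lo, hi + 1) for j in align.get(i, []))

        ta, tb = tgt(src_span_a), tgt(src_span_b)
        monotone = ta[0] <= tb[0]                # compare leftmost aligned target words
        first, second = (ta, tb) if monotone else (tb, ta)
        contiguous = second[0] - first[-1] <= 1  # assumed test for "discontinuous"
        if monotone:
            return "M" if contiguous else "DM"
        return "S" if contiguous else "DS"

    # Hypothetical alignment: the two spans translate swapped and separated.
    a = {0: [5], 1: [6], 2: [0], 3: [1]}
    print(leftmost_reorder_type((0, 1), (2, 3), a))  # DS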

model’s training

Appears in 5 sentences as: Model Training (1) model training (1) models trained (1) model’s training (2)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. 4.2 Model Training
    Page 5, “Experiments”
  2. However, our preliminary experiments showed that the reordering models trained on gold alignment yielded higher improvement.
    Page 5, “Experiments”
  3. Table 3: Reordering type distribution over the reordering model’s training data.
    Page 6, “Experiments”
  4. A deeper examination of the reordering model’s training data reveals that some constituent pairs and semantic role pairs have a preference for a specific reordering type (monotone or swap).
    Page 6, “Experiments”
  5. Reordering accuracy analysis: The reordering type distribution on the reordering model training data in Table 3 suggests that semantic reordering is more difficult than syntactic reordering.
    Page 7, “Discussion”

translation system

Appears in 4 sentences as: translation system (3) translation systems (1)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. This paper explores a simple and effective unified framework for incorporating soft linguistic reordering constraints into a hierarchical phrase-based translation system: 1) a syntactic reordering model that explores reorderings for context free grammar rules; and 2) a semantic reordering model that focuses on the reordering of predicate-argument structures.
    Page 1, “Abstract”
  2. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 1, “Abstract”
  3. Then we evaluate the automatic reordering outputs generated from both our translation systems and maximum entropy classifiers.
    Page 7, “Discussion”
  4. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 9, “Conclusion and Future Work”

MaxEnt

Appears in 4 sentences as: MaxEnt (2) Maxent (1) maxent (1)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. … syntactic parsing and semantic role labeling on the Chinese sentences, then train the models by using the MaxEnt toolkit with L1 regularizer (Tsuruoka et al., 2009). Table 3 shows the reordering type distribution over the training data.
    Page 6, “Experiments”
  2. It shows that 1) as expected, our classifiers do worse on the harder semantic reordering prediction than syntactic reordering prediction; 2) thanks to the high accuracy obtained by the maxent classifiers, integrating either the syntactic or the semantic reordering constraints results in better reordering performance from both syntactic and semantic perspectives; 3) in terms of the mutual impact, the syntactic reordering models help improve semantic reordering more than the semantic reordering models help syntactic reordering.
    Page 7, “Discussion”
  3. Reordering performance, leftmost (l-m) and rightmost (r-m), under syntactic and semantic criteria:

                            Syntactic        Semantic
                            l-m     r-m      l-m     r-m
        MR08                75.0    78.0     66.3    68.5
        +syn-reorder        78.4    80.9     69.0    70.2
        +sem-reorder        76.0    78.8     70.7    72.7
        +both               78.6    81.7     70.6    72.1
        MaxEnt Classifier   80.7    85.6     70.9    73.5
    Page 8, “Discussion”
  4. Marton and Resnik (2008) employed soft syntactic constraints with weighted binary features and no MaxEnt model.
    Page 8, “Related Work”

syntactic parse

Appears in 4 sentences as: syntactic parse (2) syntactic parses (2)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. To obtain syntactic parse trees and semantic roles on the tuning and test datasets, we first parse the source sentences with the Berkeley Parser (Petrov and Klein, 2007), trained on the Chinese Treebank 7.0 (Xue et al., 2005).
    Page 5, “Experiments”
  2. Since the syntactic parses of the tuning and test data contain 29 types of constituent labels and 35 types of POS tags, we have 29 types of XP+ features and 64 types of XP= features.
    Page 5, “Experiments”
  3. The reordering rules were either manually designed (Collins et al., 2005; Wang et al., 2007; Xu et al., 2009; Lee et al., 2010) or automatically learned (Xia and McCord, 2004; Genzel, 2010; Visweswariah et al., 2010; Khalilov and Sima’an, 2011; Lerner and Petrov, 2013), using syntactic parses.
    Page 8, “Related Work”
  4. … (2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
    Page 8, “Related Work”

role labeling

Appears in 3 sentences as: role labeler (1) role labeling (2)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. We then pass the parses to a Chinese semantic role labeler (Li et al., 2010), trained on the Chinese PropBank 3.0 (Xue and Palmer, 2009), to annotate semantic roles for all verbal predicates (part-of-speech tag VV, VE, or VC).
    Page 5, “Experiments”
  2. … syntactic parsing and semantic role labeling on the Chinese sentences, then train the models by using the MaxEnt toolkit with L1 regularizer (Tsuruoka et al., 2009). Table 3 shows the reordering type distribution over the training data.
    Page 6, “Experiments”
  3. Finally in the postprocessing approach category, Wu and Fung (2009) performed semantic role labeling on translation output and reordered arguments to maximize the cross-lingual match of the semantic frames between the source sentence and the target translation.
    Page 9, “Related Work”

word order

Appears in 3 sentences as: word order (3)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Reordering models in statistical machine translation (SMT) model the word order difference when translating from one language to another.
    Page 1, “Introduction”
  2. Syntax-based reordering: Some previous work pre-ordered words in the source sentences, so that the word order of source and target sentences is similar.
    Page 8, “Related Work”
  3. … (2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
    Page 8, “Related Work”

significantly improve

Appears in 3 sentences as: significantly improve (2) significantly improves (1)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 1, “Abstract”
  2. We clearly see that using gold syntactic reordering types significantly improves the performance (e.g., 34.9 vs. 33.4 on average) and there is still some room for improvement by building better maximum entropy classifiers (e.g., 34.9 vs. 34.3).
    Page 8, “Discussion”
  3. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 9, “Conclusion and Future Work”

semantic role labeling

Appears in 3 sentences as: semantic role labeler (1) semantic role labeling (2)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. We then pass the parses to a Chinese semantic role labeler (Li et al., 2010), trained on the Chinese PropBank 3.0 (Xue and Palmer, 2009), to annotate semantic roles for all verbal predicates (part-of-speech tag VV, VE, or VC).
    Page 5, “Experiments”
  2. … syntactic parsing and semantic role labeling on the Chinese sentences, then train the models by using the MaxEnt toolkit with L1 regularizer (Tsuruoka et al., 2009). Table 3 shows the reordering type distribution over the training data.
    Page 6, “Experiments”
  3. Finally in the postprocessing approach category, Wu and Fung (2009) performed semantic role labeling on translation output and reordered arguments to maximize the cross-lingual match of the semantic frames between the source sentence and the target translation.
    Page 9, “Related Work”

log-linear

Appears in 3 sentences as: log-linear (3)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. For models with syntactic reordering, we add two new features (i.e., one for the leftmost reordering model and the other for the rightmost reordering model) into the log-linear translation model in Eq.
    Page 4, “Unified Linguistic Reordering Models”
  2. For the semantic reordering models, we also add two new features into the log-linear translation model.
    Page 5, “Unified Linguistic Reordering Models”
  3. Both are close to our work; however, our model generates reordering features that are integrated into the log-linear translation model during decoding.
    Page 8, “Related Work”

feature weights

Appears in 3 sentences as: Feature weight (1) feature weights (3)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Table 8: Reordering feature weights.
    Page 8, “Discussion”
  2. Feature weight analysis: Table 8 shows the syntactic and semantic reordering feature weights.
    Page 8, “Discussion”
  3. It shows that the semantic feature weights decrease in the presence of the syntactic features, indicating that the decoder learns to trust semantic features less in the presence of the more accurate syntactic features.
    Page 8, “Discussion”

Chinese-English

Appears in 3 sentences as: Chinese-English (3)
In A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation
  1. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 1, “Abstract”
  2. In this section, we test its effectiveness in Chinese-English translation.
    Page 5, “Experiments”
  3. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
    Page 9, “Conclusion and Future Work”
