Index of papers in Proc. ACL 2013 that mention
  • word order
Goto, Isao and Utiyama, Masao and Sumita, Eiichiro and Tamura, Akihiro and Kurohashi, Sadao
Abstract
It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data.
Distortion Model for Phrase-Based SMT
One of the reasons for this difference is the relative word order between words.
Distortion Model for Phrase-Based SMT
Thus, considering relative word order is important.
Distortion Model for Phrase-Based SMT
In (d) and (e) in Figure 2, the word kare at the CP and the word order between katta and karita are the same.
Introduction
Estimating appropriate word order in a target language is one of the most difficult problems for statistical machine translation (SMT).
Introduction
This is particularly true when translating between languages with widely different word orders.
Introduction
It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data.
word order is mentioned in 22 sentences in this paper.
Zarriess, Sina and Kuhn, Jonas
Conclusion
We have presented a data-driven approach for investigating generation architectures that address discourse-level reference and sentence-level syntax and word order.
Experiments
The error propagation effects that we find in the first and second pipeline architecture clearly show that decisions at the levels of syntax, reference and word order interact, otherwise their predictions …
Experiments
Table 4 shows the performance of the REG module on varying input layers, providing a more detailed analysis of the interaction between RE, syntax and word order.
Experiments
These results strengthen the evidence from the previous experiment that decisions at the level of syntax, reference and word order are interleaved.
Generation Systems
REG is carried out prior to surface realization such that the RE component does not have access to surface syntax or word order whereas the SYN component has access to fully specified RE slots.
Generation Systems
In this case, REG has access to surface syntax without word order but the surface realization is trained and applied on trees with underspecified RE slots.
Introduction
Our main goal is to investigate how different architectural setups account for interactions between generation decisions at the level of referring expressions (REs), syntax and word order.
Related Work
Zarrieß et al. (2012) have recently argued that the good performance of these linguistically motivated word order models, which exploit morpho-syntactic features of noun phrases (i.e. …)
word order is mentioned in 9 sentences in this paper.
Visweswariah, Karthik and Khapra, Mitesh M. and Ramanathan, Ananthakrishnan
Abstract
Preordering of a source language sentence to match target word order has proved to be useful for improving machine translation systems.
Introduction
Dealing with word order differences between source and target languages presents a significant challenge for machine translation systems.
Introduction
Recently, approaches that address the problem of word order differences between the source and target language without requiring a high quality source or target parser have been proposed (DeNero and Uszkoreit, 2011; Visweswariah et al., 2011; Neubig et al., 2012).
Related work
Dealing with the problem of handling word order differences in machine translation has recently received much attention.
Reordering issues in Urdu-English translation
In this section we describe the main sources of word order differences between Urdu and English since this is the language pair we experiment with in this paper.
Reordering issues in Urdu-English translation
The typical word order in Urdu is Subject-Object-Verb, unlike English, in which the order is Subject-Verb-Object.
word order is mentioned in 6 sentences in this paper.
Braslavski, Pavel and Beloborodov, Alexander and Khalilov, Maxim and Sharoff, Serge
Conclusions and future plans
We will also address the problem of tailoring automatic evaluation measures to Russian — accounting for complex morphology and free word order.
Conclusions and future plans
While the campaign was based exclusively on data in one language direction, the correlation results for automatic MT quality measures should be applicable to other languages with free word order and complex morphology.
Introduction
One of the main challenges in developing MT systems for Russian and for evaluating them is the need to deal with its free word order and complex morphology.
Results
While TER and GTM are known to provide better correlation with post-editing efforts for English (O’Brien, 2011), free word order and greater data sparseness on the sentence level make TER much less reliable for Russian.
word order is mentioned in 4 sentences in this paper.
Eidelman, Vladimir and Marton, Yuval and Resnik, Philip
Discussion
The categories were: function word drop, content word drop, syntactic error (with a reasonable meaning), semantic error (regardless of syntax), word order issues, and function word mistranslation and “hallucination”.
Discussion
…noticeably had more word order and excess/wrong function word issues in the basic feature setting than any optimizer.
Discussion
However, RM seemed to benefit the most from the sparse features, as its bad word order rate dropped close to MIRA, and its excess/wrong function word rate dropped below that of MIRA with sparse features (MIRA’s rate actually doubled from its basic feature set).
word order is mentioned in 3 sentences in this paper.
Kozhevnikov, Mikhail and Titov, Ivan
Model Transfer
Word order information constitutes an implicit group that is always available.
Related Work
This makes it hard to account for phenomena that are expressed differently in the languages considered, for example the syntactic function of a certain word may be indicated by a preposition, inflection or word order, depending on the language.
Results
…may be partly attributed to the fact that the mapping is derived from the same corpus as the evaluation data — Europarl (Koehn, 2005) — and partly by the similarity between English and French in terms of word order, usage of articles and prepositions.
word order is mentioned in 3 sentences in this paper.
Sulger, Sebastian and Butt, Miriam and King, Tracy Holloway and Meurer, Paul and Laczkó, Tibor and Rákosi, György and Dione, Cheikh Bamba and Dyvik, Helge and Rosén, Victoria and De Smedt, Koenraad and Patejuk, Agnieszka and Cetinoglu, Ozlem and Arka, I Wayan and Mistica, Meladel
Discussion and Future Work
The representations offer information about dependency relations as well as word order, constituency and part-of-speech.
ParGram and its Feature Space
In contrast, c-structures encode language-particular differences in linear word order, surface morphological vs. syntactic structures, and constituency (Dalrymple, 2001).
ParGram and its Feature Space
The left/upper c- and f-structures show the parse from the English ParGram grammar, the right/lower ones from the Urdu ParGram grammar. The c-structures encode linear word order and constituency and thus look very different; e.g., the English structure is rather hierarchical while the Urdu structure is flat (Urdu is a free word-order language with no evidence for a VP; Butt (1995)).
word order is mentioned in 3 sentences in this paper.