Index of papers in Proc. ACL 2014 that mention
  • word order
Paperno, Denis and Pham, Nghia The and Baroni, Marco
Compositional distributional semantics
For instance, symmetric operations like vector addition are insensitive to syntactic structure, and therefore to meaning differences encoded in word order.
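The point about symmetric operations can be illustrated with a minimal sketch: additive composition over toy word vectors (hypothetical values, chosen only for illustration) produces the same sentence vector regardless of word order, because vector addition is commutative.

```python
import numpy as np

# Toy word vectors (hypothetical values, for illustration only).
the = np.array([0.1, 0.2, 0.3])
dog = np.array([0.9, 0.1, 0.4])
bit = np.array([0.2, 0.8, 0.5])
man = np.array([0.7, 0.3, 0.6])

# Additive composition of "the dog bit the man" vs. "the man bit the dog".
s1 = the + dog + bit + the + man
s2 = the + man + bit + the + dog

# Addition is commutative, so the two orders yield identical vectors.
print(np.allclose(s1, s2))  # True
```

The same holds for element-wise multiplication (the mult model), which is likewise commutative.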
Evaluation
(2013) at the TFDS workshop (tfds below) was specifically designed to test compositional methods for their sensitivity to word order and the semantic effect of determiners.
Evaluation
The foils have high lexical overlap with the targets but very different meanings, due to different determiners and/or word order.
Evaluation
In the tfds task, not surprisingly, the add and mult models, which lack determiner representations and are order-insensitive, fail to distinguish between true paraphrases and foils (indeed, for the mult model the foils are significantly closer to the targets than the paraphrases are, probably because the paraphrases have lower content-word overlap than the foils, which often differ in word order and determiners only).
word order is mentioned in 9 sentences in this paper.
Cai, Jingsheng and Utiyama, Masao and Sumita, Eiichiro and Zhang, Yujie
Abstract
In statistical machine translation (SMT), syntax-based pre-ordering of the source language is an effective method for dealing with language pairs where there are great differences in their respective word orders.
Dependency-based Pre-ordering Rule Set
If the reordering produced a Chinese phrase that had a closer word order to that of the English one, this structure would be a candidate pre-ordering rule.
Dependency-based Pre-ordering Rule Set
In this example, with the application of an nsubj : rcmod rule, the phrase can be translated into “a senior official close to Sharon say”, which has a word order very close to English.
Experiments
A bilingual speaker of Chinese and English looked at an original Chinese phrase and the pre-ordered one with their corresponding English phrase and judged whether the pre-ordering obtained a Chinese phrase that had a closer word order to the English one.
Introduction
The reason for this is that there are great differences in their word orders.
Introduction
Then, syntactic reordering rules are applied to these parse trees with the goal of reordering the source language sentences into the word order of the target language.
word order is mentioned in 6 sentences in this paper.
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
Properties of the Sentence Model
As regards the other neural sentence models, the class of NBoW models is by definition insensitive to word order.
Properties of the Sentence Model
A sentence model based on a recurrent neural network is sensitive to word order, but it has a bias towards the latest words that it takes as input (Mikolov et al., 2011).
Properties of the Sentence Model
Similarly, a recursive neural network is sensitive to word order but has a bias towards the topmost nodes in the tree; shallower trees mitigate this effect to some extent (Socher et al., 2013a).
word order is mentioned in 4 sentences in this paper.
Lin, Chen and Miller, Timothy and Kho, Alvin and Bethard, Steven and Dligach, Dmitriy and Pradhan, Sameer and Savova, Guergana
Methods
However, another view of the DPK is possible by thinking of it as cheaply calculating rule production similarity by taking advantage of relatively strict English word ordering .
Methods
This means, for example, that if the rule production NP → NN JJ DT were ever found in a tree, to DPK it would be indistinguishable from the common production NP → DT JJ NN, despite having inverted word order, and thus would receive a maximal similarity score.
Methods
SST and PTK would assign this pair a much lower score for having completely different ordering, but we suggest that cases such as these are very rare due to the relatively strict word ordering of English.
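The behavior described in these excerpts can be sketched as follows, under the assumption (not the authors' actual implementation) that DPK compares a production's right-hand side as an unordered bag of child labels, while order-sensitive kernels compare it as a sequence.

```python
from collections import Counter

def bag_similarity(prod_a: str, prod_b: str) -> float:
    """Order-insensitive comparison of two rule-production right-hand
    sides: they match if they contain the same child labels, in any
    order (a simplified view of the DPK behavior described above)."""
    return 1.0 if Counter(prod_a.split()) == Counter(prod_b.split()) else 0.0

# The inverted production is indistinguishable from the common one:
print(bag_similarity("DT JJ NN", "NN JJ DT"))  # 1.0

# An order-sensitive comparison (as in SST/PTK) sees them as different:
print("DT JJ NN" == "NN JJ DT")  # False
```

As the excerpt argues, such inverted productions are rare in practice because English word order is relatively strict, so the cheap bag comparison rarely misfires.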
word order is mentioned in 4 sentences in this paper.
Duan, Manjuan and White, Michael
Background
This model improved upon the state-of-the-art in terms of automatic evaluation scores on held-out test data, but nevertheless an error analysis revealed a surprising number of word order, function word, and inflection errors.
Background
To improve word ordering decisions, White & Rajkumar (2012) demonstrated that incorporating a feature into the ranker inspired by Gibson’s (2000) dependency locality theory can deliver statistically significant improvements in automatic evaluation scores, better match the distributional characteristics of sentence orderings, and significantly reduce the number of serious ordering errors (some involving vicious ambiguities) as confirmed by a targeted human evaluation.
Simple Reranking
Using dependencies allowed us to measure parse accuracy independently of word order.
word order is mentioned in 3 sentences in this paper.
Li, Junhui and Marton, Yuval and Resnik, Philip and Daumé III, Hal
Introduction
Reordering models in statistical machine translation (SMT) model the word order difference when translating from one language to another.
Related Work
Syntax-based reordering: Some previous work pre-ordered words in the source sentences, so that the word order of source and target sentences is similar.
Related Work
(2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
word order is mentioned in 3 sentences in this paper.