Index of papers in Proc. ACL 2009 that mention
  • dependency tree
He, Wei and Wang, Haifeng and Guo, Yuqing and Liu, Ting
Introduction
(2009) present a dependency spanning tree algorithm for word ordering, which first builds dependency trees to decide the linear precedence between heads and modifiers, then uses an n-gram language model to order siblings.
Introduction
two techniques: the first is dividing the entire dependency tree into depth-one sub-trees and solving linearization within each sub-tree; the second is determining the relative positions of dependents and heads according to their dependency relations.
Introduction
In Section 2, we describe the idea of dividing the realization procedure for an entire dependency tree into a series of sub-procedures for sub-trees.
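The two techniques above lend themselves to a compact illustration. Below is a minimal sketch, not the authors' implementation: the tree encoding, the relation table, and the bigram scorer are all assumptions made for the example.

```python
from itertools import permutations

# Illustrative only: relations whose dependents precede their head.
PRE_HEAD = {"subj", "det", "amod"}

def bigram_score(seq, lm):
    """Score a word sequence with a toy bigram LM: {(w1, w2): logprob}."""
    return sum(lm.get((a, b), -10.0) for a, b in zip(seq, seq[1:]))

def linearize(node, lm):
    """Linearize one depth-one sub-tree at a time, bottom-up.
    A node is (word, relation-to-head, children)."""
    word, _, children = node
    pre, post = [], []
    for child in children:
        seq = linearize(child, lm)
        # Technique 2: the dependency relation fixes head/dependent order.
        (pre if child[1] in PRE_HEAD else post).append(seq)
    def best_order(groups):
        # Technique 1: order siblings inside the sub-tree by n-gram LM score.
        if len(groups) <= 1:
            return [w for g in groups for w in g]
        return max((sum(p, []) for p in permutations(groups)),
                   key=lambda s: bigram_score(s, lm))
    return best_order(pre) + [word] + best_order(post)

# Example: linearize(("barked", "root",
#     [("dog", "subj", [("a", "det", []), ("big", "amod", [])])]), lm)
# with lm = {("a", "big"): -1.0, ("big", "dog"): -1.0}
# yields ['a', 'big', 'dog', 'barked'].
```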
Log-linear Models
A conditional log-linear model for the probability of a realization r given the dependency tree t has the general parametric form
Log-linear Models
Here Y(t) denotes the set of all possible realizations of the dependency tree t.
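The formula itself is elided in the extract; for reference, the standard conditional log-linear form, with generic feature functions $f_j$ and weights $\lambda_j$ (notation assumed, matching the extract's $r$, $t$, and $Y(t)$), is:

```latex
p_\lambda(r \mid t) =
  \frac{\exp\bigl(\sum_j \lambda_j f_j(r, t)\bigr)}
       {\sum_{r' \in Y(t)} \exp\bigl(\sum_j \lambda_j f_j(r', t)\bigr)}
```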
Sentence Realization from Dependency Structure
In our dependency tree representations, dependency relations are represented as arcs pointing from a head to a dependent.
Sentence Realization from Dependency Structure
Figure 1 gives an example of the dependency tree representation for the sentence:
Sentence Realization from Dependency Structure
Our sentence realizer takes such an unordered dependency tree as input, determines the linear order of the words
dependency tree is mentioned in 15 sentences in this paper.
Martins, Andre and Smith, Noah and Xing, Eric
Dependency Parsing
A dependency tree is a lightweight syntactic representation that attempts to capture functional relationships between words.
Dependency Parsing
We define the set of legal dependency parse trees of $x$ (denoted $\mathcal{Y}(x)$) as the set of 0-arborescences of $D$, i.e., we admit each arborescence as a potential dependency tree.
Dependency Parsing
Let $y \in \mathcal{Y}(x)$ be a legal dependency tree for $x$; if the arc $a = \langle i, j \rangle \in y$, we refer to $i$ as the parent of $j$ (denoted $i = \pi(j)$) and $j$ as a child of $i$.
Dependency Parsing as an ILP
A subgraph $y = (V, B)$ is a legal dependency tree (i.e., $y \in \mathcal{Y}(x)$) if and only if the following conditions are met:
Dependency Parsing as an ILP
Furthermore, the integer points of $Z$ are precisely the incidence vectors of dependency trees in $\mathcal{Y}(x)$; these are obtained by replacing Eq.
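The conditions themselves are not included in these extracts, but the standard requirements for a 0-arborescence are that every non-root node have exactly one incoming arc and that every node be reachable from the root. A minimal checker under an assumed arc encoding:

```python
def is_arborescence(n, arcs):
    """True iff `arcs`, a set of (head, dependent) pairs over nodes 0..n
    with 0 the artificial root, forms a 0-arborescence (a legal tree)."""
    heads = {}
    for h, d in arcs:
        if d == 0 or d in heads:        # root takes no head; one head each
            return False
        heads[d] = h
    if set(heads) != set(range(1, n + 1)):
        return False                     # every word needs exactly one head
    for d in range(1, n + 1):            # follow head pointers to the root
        seen, cur = set(), d
        while cur != 0:
            if cur in seen:
                return False             # cycle: root unreachable
            seen.add(cur)
            cur = heads[cur]
    return True
```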
dependency tree is mentioned in 14 sentences in this paper.
Nivre, Joakim
Introduction
dependency trees, as illustrated in Figure 1.
Introduction
In a projective dependency tree, the yield of every subtree is a contiguous substring of the sentence.
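That definition translates directly into a check. A minimal sketch, assuming words are indexed from 1 with 0 as the artificial root:

```python
def is_projective(heads):
    """heads[i] is the head of word i+1 (0 denotes the root). The tree is
    projective iff every subtree's yield is a contiguous index range."""
    n = len(heads)
    children = {i: [] for i in range(n + 1)}
    for dep, head in enumerate(heads, start=1):
        children[head].append(dep)

    def subtree_yield(node):
        span = {node} if node != 0 else set()
        for c in children[node]:
            span |= subtree_yield(c)
        return span

    for node in range(1, n + 1):
        y = subtree_yield(node)
        if max(y) - min(y) + 1 != len(y):   # a gap => non-contiguous yield
            return False
    return True

# is_projective([2, 0, 2]) -> True; is_projective([3, 0, 2]) -> False
```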
Introduction
But allowing non-projective dependency trees also makes parsing empirically harder, because it requires that we model relations between nonadjacent structures over potentially unbounded distances, which often has a negative impact on parsing accuracy.
Transitions for Dependency Parsing
The system $S_p = (C, T_p, c_s, C_t)$ is sound and complete for the set of projective dependency trees (over some label set $L$) and has been used, in slightly different variants, by a number of transition-based dependency parsers (Yamada and Matsumoto, 2003; Nivre, 2004; Attardi, 2006;
Transitions for Dependency Parsing
Given the simplicity of the extension, it is rather remarkable that the system $S_u = (C, T_u, c_s, C_t)$ is sound and complete for the set of all dependency trees (over some label set $L$), including all non-projective trees.
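The extension in question adds a single SWAP transition to an arc-standard-style system. A minimal sketch of the four transitions, with the configuration encoding (stack and buffer of word indices, arc set) simplified for illustration:

```python
# Configuration: stack and buf are lists of word indices; arcs is a set
# of (head, label, dependent) triples. All encodings are illustrative.

def shift(stack, buf, arcs):
    stack.append(buf.pop(0))

def left_arc(stack, buf, arcs, label):
    dep = stack.pop(-2)                  # second-topmost depends on top
    arcs.add((stack[-1], label, dep))

def right_arc(stack, buf, arcs, label):
    dep = stack.pop()                    # top depends on second-topmost
    arcs.add((stack[-1], label, dep))

def swap(stack, buf, arcs):
    # Move the second-topmost node back to the buffer. Reordering the
    # input this way is what makes non-projective trees reachable.
    assert 0 < stack[-2] < stack[-1]     # only nodes still in input order
    buf.insert(0, stack.pop(-2))
```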
Transitions for Dependency Parsing
For completeness, we note first that projectivity is not a property of a dependency tree in itself, but of the tree in combination with a word order, and that a tree can always be made projective by reordering the nodes.
dependency tree is mentioned in 16 sentences in this paper.
Hirao, Tsutomu and Suzuki, Jun and Isozaki, Hideki
Analysis of reference compressions
Humans usually compress sentences by dropping intermediate nodes in the dependency tree.
Conclusions
• We revealed that in compressing Japanese sentences, humans usually ignore syntactic structures; they drop intermediate nodes of the dependency tree and drop words within bunsetsu,
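Operationally, dropping an intermediate node means reattaching its dependents to its own head. A minimal sketch under an assumed head-map encoding:

```python
def drop_node(heads, node):
    """heads maps each word to its head (0 = root). Dropping an
    intermediate node reattaches its children to the node's own head."""
    parent = heads[node]
    return {w: (parent if h == node else h)
            for w, h in heads.items() if w != node}

# drop_node({1: 2, 2: 3, 3: 0}, 2) -> {1: 3, 3: 0}
```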
Introduction
For Japanese, dependency trees are trimmed instead of full parse trees (Takeuchi and Matsumoto, 2001; Oguro et al., 2002; Nomoto, 2008). This parsing approach is reasonable because the compressed output is grammatical if the
Introduction
It treats a sentence as a sequence of words, and structural information, such as a syntactic or dependency tree, is encoded in the sequence as features.
Introduction
However, they still rely on syntactic information derived from fully parsed syntactic or dependency trees.
Related work
For Japanese sentences, instead of using full parse trees, existing sentence compression methods trim dependency trees by a discriminative model (Takeuchi and Matsumoto, 2001; Nomoto, 2008) or through the use of simple linear combined features (Oguro et al., 2002).
Related work
They simply regard a sentence as a word sequence, and structural information, such as a full parse tree or dependency tree, is encoded in the sequence as features.
Related work
However, they still rely on syntactic information derived from full parse trees or dependency trees.
dependency tree is mentioned in 8 sentences in this paper.
Galley, Michel and Manning, Christopher D.
Dependency parsing for machine translation
Second, dependency trees contain exactly one node per word, which contributes to cutting down the search space during parsing: indeed, the task of the parser is merely to connect existing nodes rather than hypothesizing new ones.
Dependency parsing for machine translation
Figure 1: A dependency tree with directed edges going from heads to modifiers.
Dependency parsing for machine translation
Their algorithm exploits the special properties of dependency trees to reduce the worst-case complexity of bilexical parsing, which otherwise requires $O(n^4)$ for bilexical constituency-based parsing.
Introduction
The parsing literature presents faster alternatives for both phrase-structure and dependency trees, e.g., $O(n)$ shift-reduce parsers and variants ((Ratnaparkhi, 1997; Nivre, 2003), inter alia).
dependency tree is mentioned in 5 sentences in this paper.
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Experiments of Grammar Formalism Conversion
(2008) used WSJ section 19 from the Penn Treebank to extract DS to PS conversion rules and then produced dependency trees from WSJ section 22 for evaluation of their DS to PS conversion algorithm.
Experiments of Grammar Formalism Conversion
For comparison with their work, we conducted experiments in the same setting as theirs: using WSJ section 19 (1844 sentences) as the source for conversion rules, producing dependency trees from WSJ section 22 (1700 sentences) as the conversion input, and using labeled bracketing f-scores from the tool
Introduction
Our conversion method achieves a 93.8% f-score on dependency trees produced from WSJ section 22, resulting in a 42% error reduction over the previous best result for DS to PS conversion.
Our Two-Step Solution
Previous DS to PS conversion methods built a converted tree by iteratively attaching nodes and edges to the tree with the help of conversion rules and heuristic rules, based on the current head-dependent pair from the source dependency tree and the structure of the tree built so far (Collins et al., 1999; Covington, 1994; Xia and Palmer, 2001; Xia et al., 2008).
Related Work
Moreover, they presented two strategies to resolve cases where multiple conversion rules match the same input dependency tree pattern: (1) choosing the most frequent rules, and (2) preferring rules that add fewer nodes and attach the subtree lower.
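Strategy (2) amounts to a simple preference order over the matching rules. A hedged sketch, with the rule representation (counts of added nodes, attachment depth) assumed for illustration:

```python
def select_rule(matching_rules):
    """Among conversion rules matching one dependency tree pattern,
    prefer the rule adding the fewest nodes, breaking ties in favor
    of the lower (deeper) attachment point."""
    return min(matching_rules,
               key=lambda r: (r["nodes_added"], -r["attach_depth"]))
```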
dependency tree is mentioned in 5 sentences in this paper.
Das, Dipanjan and Smith, Noah A.
Introduction
In this paper, we adopt a model that posits correspondence between the words in the two sentences, defining it in loose syntactic terms: if two sentences are paraphrases, we expect their dependency trees to align closely, though some divergences are also expected, with some more likely than others.
QG for Paraphrase Modeling
A dependency tree on a sequence $w = (w_1, \ldots, w_k)$ is a mapping of the indices of words to the indices of their syntactic parents, $\tau_p : \{1, \ldots, k\} \rightarrow \{0, \ldots, k\}$, and a mapping of the indices of words to dependency relation types in $\mathcal{L}$, $\eta : \{1, \ldots, k\} \rightarrow \mathcal{L}$.
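This two-mapping definition has a direct encoding. A minimal sketch with assumed words and relation labels:

```python
# "a dog barked": 1-based word indices, 0 is the wall/root symbol.
w = ("a", "dog", "barked")
tau_p = {1: 2, 2: 3, 3: 0}                  # parents: {1..k} -> {0..k}
eta = {1: "det", 2: "nsubj", 3: "root"}     # relations: {1..k} -> L
```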
QG for Paraphrase Modeling
(2005), trained on sections 2-21 of the WSJ Penn Treebank, transformed to dependency trees following Yamada and Matsumoto (2003).
dependency tree is mentioned in 3 sentences in this paper.
Druck, Gregory and Mann, Gideon and McCallum, Andrew
Generalized Expectation Criteria
3.2 Non-Projective Dependency Tree CRFs
Generalized Expectation Criteria
3.3 GE for Non-Projective Dependency Tree CRFs
Introduction
In this paper we use a non-projective dependency tree CRF (Smith and Smith, 2007).
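For context, CRFs over non-projective dependency trees of this kind sum over all spanning arborescences with the Matrix-Tree theorem (as in Smith and Smith, 2007). A minimal numpy sketch of the standard partition-function construction, not necessarily the exact variant used in the paper:

```python
import numpy as np

def log_partition(root_scores, arc_scores):
    """log Z over all non-projective dependency trees of n words.
    root_scores: (n,) log-potentials for each word being the root.
    arc_scores:  (n, n) log-potentials for head h -> modifier m."""
    A = np.exp(arc_scores)
    np.fill_diagonal(A, 0.0)                 # no self-arcs
    # Laplacian: diagonal holds incoming mass, off-diagonal is -A[h, m].
    L = np.diag(A.sum(axis=0)) - A
    # Replace the first row with root potentials; the determinant of the
    # result sums the product of arc potentials over all trees.
    L[0, :] = np.exp(root_scores)
    _, logdet = np.linalg.slogdet(L)
    return logdet
```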
dependency tree is mentioned in 3 sentences in this paper.