Index of papers in Proc. ACL 2014 that mention
  • parse tree
Yogatama, Dani and Smith, Noah A.
Abstract
We introduce three linguistically motivated structured regularizers based on parse trees, topics, and hierarchical word clusters for text categorization.
Structured Regularizers for Text
Figure 1: An example of a parse tree from the Stanford sentiment treebank, which annotates sentiment at the level of every constituent (indicated here by -- and ++; no marking indicates neutral sentiment).
Structured Regularizers for Text
4.2 Parse Tree Regularizer
Structured Regularizers for Text
Sentence boundaries are a rather superficial kind of linguistic structure; syntactic parse trees provide more fine-grained information.
parse tree is mentioned in 23 sentences in this paper.
Li, Zhenghua and Zhang, Min and Chen, Wenliang
Abstract
Instead of only using 1-best parse trees in previous work, our core idea is to utilize parse forest (ambiguous labelings) to combine multiple 1-best parse trees generated from diverse parsers on unlabeled data.
Abstract
1) ambiguity encoded in parse forests compromises noise in 1-best parse trees.
Abstract
During training, the parser is aware of these ambiguous structures, and has the flexibility to distribute probability mass to its preferred parse trees as long as the likelihood improves.
Introduction
Both works employ two parsers to process the unlabeled data, and only select as extra training data those sentences on which the 1-best parse trees of the two parsers are identical.
Introduction
Different from traditional self/co/tri-training which only use 1-best parse trees on unlabeled data, our approach adopts ambiguous labelings, represented by parse forest, as gold-standard for unlabeled sentences.
Introduction
The forest is formed by two parse trees, respectively shown at the upper and lower sides of the sentence.
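For illustration, a minimal sketch of the idea these excerpts describe: merging several parsers' 1-best dependency trees into a parse forest (ambiguous labeling). Each tree is assumed, purely for this sketch, to be a list of head indices; this is not the authors' code.

def ambiguous_labeling(one_best_trees):
    # Combine several 1-best dependency trees into an ambiguous labeling:
    # each token keeps every head proposed by any of the parsers.
    n = len(one_best_trees[0])
    assert all(len(tree) == n for tree in one_best_trees)
    return [{tree[i] for tree in one_best_trees} for i in range(n)]

# e.g. two parsers agreeing on tokens 0 and 2 but not on token 1:
# ambiguous_labeling([[2, 0, -1], [2, 2, -1]]) -> [{2}, {0, 2}, {-1}]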
parse tree is mentioned in 24 sentences in this paper.
Cai, Jingsheng and Utiyama, Masao and Sumita, Eiichiro and Zhang, Yujie
Dependency-based Pre-ordering Rule Set
Figure 1 shows a constituent parse tree and its Stanford typed dependency parse tree for the same sentence.
Dependency-based Pre-ordering Rule Set
As shown in the figure, the number of nodes in the dependency parse tree (i.e., 9) is much fewer than that in its corresponding constituent parse tree.
Introduction
These pre-ordering approaches first parse the source language sentences to create parse trees.
Introduction
Then, syntactic reordering rules are applied to these parse trees with the goal of reordering the source language sentences into the word order of the target language.
Introduction
(a) A constituent parse tree
parse tree is mentioned in 16 sentences in this paper.
Parikh, Ankur P. and Cohen, Shay B. and Xing, Eric P.
Abstract
Cognitively, it is more plausible to assume that children obtain only terminal strings of parse trees and not the actual parse trees.
Abstract
Most existing solutions treat the problem of unsupervised parsing by assuming a generative process over parse trees e.g.
Abstract
Unlike in phylogenetics and graphical models, where a single latent tree is constructed for all the data, in our case, each part of speech sequence is associated with its own parse tree.
parse tree is mentioned in 9 sentences in this paper.
Wang, Zhiguo and Xue, Nianwen
Joint POS Tagging and Parsing with Nonlocal Features
Assuming an input sentence contains n words, in order to reach a terminal state, the initial state requires n sh-x actions to consume all words in β, and n - 1 rl/rr-x actions to construct a complete parse tree by consuming all the subtrees in σ.
Joint POS Tagging and Parsing with Nonlocal Features
For example, the parse tree in Figure 1a contains no ru-x action, while the parse tree for the same input sentence in Figure 1b contains four ru-x actions.
Joint POS Tagging and Parsing with Nonlocal Features
Input: A word-segmented sentence, beam size k. Output: A constituent parse tree.
Transition-based Constituent Parsing
empty stack σ and a queue β containing the entire input sentence (word-POS pairs), and the terminal states have an empty queue β and a stack σ containing only one complete parse tree.
Transition-based Constituent Parsing
In order to construct lexicalized constituent parse trees, we define the following actions for the action set T according to (Sagae and Lavie, 2005; Wang et al., 2006; Zhang and Clark, 2009):
Transition-based Constituent Parsing
For example, in Figure 1, for the input sentence w0w1w2 and its POS tags abc, our parser can construct two parse trees using action sequences given below these trees.
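Read together, these excerpts describe a standard shift-reduce loop: from an empty stack σ and a full queue β, n sh-x actions and n - 1 rl/rr-x actions leave exactly one complete tree on the stack, with ru-x adding unary nodes. A minimal sketch of that bookkeeping follows; the tree representation and action-string format are assumptions for illustration, not the paper's implementation.

def run_transitions(words, actions):
    # Apply actions such as 'sh-NN', 'rl-VP', 'rr-VP', 'ru-S' to build one parse tree.
    stack, queue = [], list(words)
    for action in actions:
        op, label = action.split("-", 1)
        if op == "sh":                # shift: move the next word onto the stack, tagged with POS label
            stack.append((label, queue.pop(0)))
        elif op in ("rl", "rr"):      # binary reduce: combine the top two subtrees (head left/right)
            right, left = stack.pop(), stack.pop()
            stack.append((label, left, right))
        elif op == "ru":              # unary reduce: add a unary node over the top subtree
            stack.append((label, stack.pop()))
    assert not queue and len(stack) == 1   # terminal state: empty queue, one complete tree
    return stack[0]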
parse tree is mentioned in 9 sentences in this paper.
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
Abstract
The network does not rely on a parse tree and is easily applicable to any language.
Background
A model that adopts a more general structure provided by an external parse tree is the Recursive Neural Network (RecNN) (Pollack, 1990; Küchler and Goller, 1996; Socher et al., 2011; Hermann and Blunsom, 2013).
Background
It is sensitive to the order of the words in the sentence and it does not depend on external language-specific features such as dependency or constituency parse trees.
Experiments
RECNTN is a recursive neural network with a tensor-based feature function, which relies on external structural features given by a parse tree and performs best among the RecNNs.
Introduction
The feature graph induces a hierarchical structure somewhat akin to that in a syntactic parse tree.
Properties of the Sentence Model
The recursive neural network follows the structure of an external parse tree.
Properties of the Sentence Model
Likewise, the induced graph structure in a DCNN is more general than a parse tree in that it is not limited to syntactically dictated phrases; the graph structure can capture short or long-range semantic relations between words that do not necessarily correspond to the syntactic relations in a parse tree.
Properties of the Sentence Model
The DCNN has internal input-dependent structure and does not rely on externally provided parse trees, which makes the DCNN directly applicable to hard-to-parse sentences such as tweets and to sentences from any language.
parse tree is mentioned in 8 sentences in this paper.
Li, Junhui and Marton, Yuval and Resnik, Philip and Daumé III, Hal
Experiments
To obtain syntactic parse trees and semantic roles on the tuning and test datasets, we first parse the source sentences with the Berkeley Parser (Petrov and Klein, 2007), trained on the Chinese Treebank 7.0 (Xue et al., 2005).
Experiments
In order to understand how well the MR08 system respects their reordering preference, we use the gold alignment dataset LDC2006E86, in which the source sentences are from the Chinese Treebank, and thus both the gold parse trees and gold predicate-argument structures are available.
Related Work
(2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
Unified Linguistic Reordering Models
According to the annotation principles in (Chinese) PropBank (Palmer et al., 2005; Xue and Palmer, 2009), all the roles in a PAS map to a corresponding constituent in the parse tree, and these constituents (e.g., NPs and VBD in Figure 1) do not overlap with each other.
Unified Linguistic Reordering Models
parse tree and its word alignment links to the target language.
Unified Linguistic Reordering Models
Given a hypothesis H with its alignment a, it traverses all CFG rules in the parse tree and sees if two adjacent constituents are conditioned to trigger the reordering models (lines 2-4).
parse tree is mentioned in 7 sentences in this paper.
Zhang, Yuan and Lei, Tao and Barzilay, Regina and Jaakkola, Tommi and Globerson, Amir
Experimental Setup
We split the sentence based on the ending punctuation, predict the parse tree for each segment and group the roots of resulting trees into a single node.
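A minimal sketch of this preprocessing step, assuming segments end at sentence-final punctuation tokens and that parse(tokens) is some routine returning a tree root (both assumptions are for illustration only):

def parse_with_splitting(tokens, parse):
    # Split on ending punctuation, parse each segment, and group the segment roots
    # under a single artificial node.
    segments, current = [], []
    for token in tokens:
        current.append(token)
        if token in {".", "!", "?"}:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    roots = [parse(segment) for segment in segments]
    return ("GROUP", roots)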
Introduction
Because the number of alternatives is small, the scoring function could in principle involve arbitrary (global) features of parse trees.
Related Work
Reranking can be combined with an arbitrary scoring function, and thus can easily incorporate global features over the entire parse tree.
Sampling-Based Dependency Parsing with Global Features
Ideally, we would change multiple heads in the parse tree simultaneously, and sample those choices from the corresponding conditional distribution of p. While in general this is increasingly difficult with more heads, it is indeed tractable if
Sampling-Based Dependency Parsing with Global Features
y′ is always a valid parse tree if we allow multiple children of the root and do not impose a projectivity constraint.
Sampling-Based Dependency Parsing with Global Features
We extend our model such that it jointly learns how to predict a parse tree and also correct the predicted POS tags for a better parsing performance.
parse tree is mentioned in 6 sentences in this paper.
Zhu, Xiaodan and Guo, Hongyu and Mohammad, Saif and Kiritchenko, Svetlana
Experiment setup
Data As described earlier, the Stanford Sentiment Treebank (Socher et al., 2013) has manually annotated, real-valued sentiment values for all phrases in parse trees.
Introduction
The recently available Stanford Sentiment Treebank (Socher et al., 2013) renders manually annotated, real-valued sentiment scores for all phrases in parse trees.
Related work
Such models work in a bottom-up fashion over the parse tree of a sentence to infer the sentiment label of the sentence as a composition of the sentiment expressed by its constituting parts.
Semantics-enriched modeling
A recursive neural tensor network (RNTN) is a specific form of feed-forward neural network based on a syntactic (phrasal-structure) parse tree to conduct compositional sentiment analysis.
Semantics-enriched modeling
Each node of the parse tree is a fixed-length vector that encodes compositional semantics and syntax, which can be used to predict the sentiment of this node.
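A minimal sketch of the bottom-up computation these excerpts describe: leaves take word vectors, and each internal node of a binarized parse tree gets a fixed-length vector composed from its children. The plain matrix composition below stands in for the RNTN's tensor-based function, and the node attributes (is_leaf, word, children) are assumed for illustration.

import numpy as np

def node_vector(node, embeddings, W, b):
    # Compute a vector for every node of a (binarized) parse tree, bottom-up.
    if node.is_leaf:
        return embeddings[node.word]                      # leaf: word embedding
    left = node_vector(node.children[0], embeddings, W, b)
    right = node_vector(node.children[1], embeddings, W, b)
    # Simple composition; the actual RNTN adds a tensor term inside the nonlinearity.
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Each node vector can then be fed to a classifier to predict that node's sentiment.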
parse tree is mentioned in 5 sentences in this paper.
Nguyen, Minh Luan and Tsang, Ivor W. and Chai, Kian Ming A. and Chieu, Hai Leong
Experiments
The constituent parse trees were then transformed into dependency parse trees, using the head of each constituent (Jiang and Zhai, 2007b).
Problem Statement
We extract features from a sequence representation and a parse tree representation of each relation instance.
Problem Statement
Syntactic Features The syntactic parse tree of the relation instance sentence can be augmented to represent the relation instance.
Problem Statement
Each node in the sequence or the parse tree is augmented by an argument tag that indicates whether the node corresponds to entity A, B, both, or neither.
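A minimal sketch of this node augmentation, under the assumption that a node "corresponds to" an entity when the node's token span covers that entity's tokens (the spans and that reading are illustrative assumptions):

def argument_tag(node_tokens, entity_a_tokens, entity_b_tokens):
    # Tag a node as corresponding to entity A, B, both, or neither,
    # using set containment over token indices.
    has_a = entity_a_tokens <= node_tokens
    has_b = entity_b_tokens <= node_tokens
    if has_a and has_b:
        return "both"
    if has_a:
        return "A"
    if has_b:
        return "B"
    return "neither"

# e.g. argument_tag({0, 1, 2, 3}, {1}, {5}) -> "A"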
parse tree is mentioned in 5 sentences in this paper.
Gyawali, Bikash and Gardent, Claire
Generating from the KBGen Knowledge-Base
To extract a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG) from the KBGen data, we parse the sentences of the training corpus; project the entity and event variables to the syntactic projection of the strings they are aligned with; and extract the elementary trees of the resulting FB-LTAG from the parse tree using semantic information.
Generating from the KBGen Knowledge-Base
After alignment, the entity and event variables occurring in the input semantics are associated with substrings of the yield of the syntactic parse tree.
Generating from the KBGen Knowledge-Base
Once entity and event variables have been projected up the parse trees, we extract elementary FB-LTAG trees and their semantics from the input scenario as follows.
parse tree is mentioned in 4 sentences in this paper.
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke
Parsing Time Expressions
Figure 1: A CCG parse tree for the mention “one week ago.” The tree includes forward (>) and backward (<) application, as well as two type-shifting operations.
Parsing Time Expressions
The lexicon pairs words with categories and the combinators define how to combine categories to create complete parse trees.
Parsing Time Expressions
For example, Figure 1 shows a CCG parse tree for the phrase “one week ago.” The parse tree is read top to bottom, starting from assigning categories to words using the lexicon.
Resolution
Model Let y be a context-dependent CCG parse, which includes a parse tree TR(y), a set of context operations CNTX(y) applied to the logical form at the root of the tree, a final context-dependent logical form LF(y) and a TIMEX3 value. Define φ(m, D, y) ∈ R^d to be a d-dimensional feature-vector representation and θ ∈ R^d to be a parameter vector.
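The definitions of φ(m, D, y) and θ suggest the usual linear scoring of candidate parses; the following is only a sketch of that standard setup (the linear form and the argmax resolution step are assumptions based on this excerpt, not quoted from the paper):

import numpy as np

def score(phi, theta):
    # Linear score of a candidate context-dependent parse.
    return float(np.dot(theta, phi))

def resolve(candidates, featurize, theta):
    # Pick the highest-scoring parse; its TIMEX3 value gives the resolved time expression.
    return max(candidates, key=lambda y: score(featurize(y), theta))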
parse tree is mentioned in 4 sentences in this paper.
Narayan, Shashi and Gardent, Claire
Abstract
First, it is semantic based in that it takes as input a deep semantic representation rather than e.g., a sentence or a parse tree.
Introduction
While previous simplification approaches start from either the input sentence or its parse tree, our model takes as input a deep semantic representation, namely the Discourse Representation Structure (DRS, (Kamp, 1981)) assigned by Boxer (Curran et al., 2007) to the input complex sentence.
Related Work
Their simplification model encodes the probabilities for four rewriting operations on the parse tree of an input sentence, namely substitution, reordering, splitting and deletion.
Related Work
(2010) and the edit history of Simple Wikipedia, Woodsend and Lapata (2011) learn a quasi-synchronous grammar (Smith and Eisner, 2006) describing a loose alignment between parse trees of complex and of simple sentences.
parse tree is mentioned in 4 sentences in this paper.
Lin, Chen and Miller, Timothy and Kho, Alvin and Bethard, Steven and Dligach, Dmitriy and Pradhan, Sameer and Savova, Guergana
Background
Figure 2: A parse tree (left) and its descending paths according to Definition 1 (l = length).
Methods
Definition 1 (Descending Path): Let T be a parse tree, v any nonterminal node in T, and d a descendant of v, including terminals.
Methods
Figure 2 illustrates a parse tree and its descending paths of different lengths.
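A small sketch of Definition 1, enumerating every descending path from a nonterminal v down to a descendant d, terminals included. The tree is represented here, by assumption, as nested (label, child, ...) tuples with terminals as strings; the path length l is len(path) - 1.

def nonterminals(tree):
    # Yield every nonterminal node of the tree.
    if isinstance(tree, str):
        return
    yield tree
    for child in tree[1:]:
        yield from nonterminals(child)

def paths_from(node):
    # Label sequences from this nonterminal down to each of its descendants.
    label = node[0]
    for child in node[1:]:
        if isinstance(child, str):           # terminal descendant
            yield [label, child]
        else:
            yield [label, child[0]]
            for sub in paths_from(child):
                yield [label] + sub

def descending_paths(tree):
    # All descending paths in T: one per nonterminal v and descendant d of v.
    for v in nonterminals(tree):
        yield from paths_from(v)

# e.g. list(descending_paths(("NP", ("DT", "the"), ("NN", "dog")))) ->
# [['NP', 'DT'], ['NP', 'DT', 'the'], ['NP', 'NN'], ['NP', 'NN', 'dog'], ['DT', 'the'], ['NN', 'dog']]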
parse tree is mentioned in 3 sentences in this paper.
Wang, Chang and Fan, James
Background
Many of them focus on using tree kernels to learn parse tree structure related features (Collins and Duffy, 2001; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005).
Identifying Key Medical Relations
Figure 2: A Parse Tree Example
Identifying Key Medical Relations
Consider the sentence: “Antibiotics are the standard therapy for Lyme disease”: MedicalESG first generates a dependency parse tree (Figure 2) to represent grammatical relations between the words in the sentence, and then associates the words with CUIs.
parse tree is mentioned in 3 sentences in this paper.
Pighin, Daniele and Cornolti, Marco and Alfonseca, Enrique and Filippova, Katja
Heuristics-based pattern extraction
The inputs to the algorithm are a parse tree T and a set of target entities E. We first generate combinations of 1-3 elements of E (line 10), then for each combination C we identify all the nodes in T that mention any of the entities in C. We continue by constructing the MST of these nodes, and finally apply our heuristics to the nodes in the MST.
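A hedged sketch of the enumeration this excerpt walks through, using itertools for the 1-3 element combinations; the node lookup, MST construction, and heuristics are only passed in as callables, since the excerpt does not spell them out:

from itertools import combinations

def extract_patterns(tree_nodes, entities, mentions, build_mst, apply_heuristics):
    # For each combination C of 1-3 target entities, collect the parse-tree nodes
    # that mention any entity in C, build their MST, and apply the heuristics to it.
    patterns = []
    for size in (1, 2, 3):
        for combo in combinations(entities, size):
            nodes = [n for n in tree_nodes if any(mentions(n, e) for e in combo)]
            if nodes:
                patterns.append(apply_heuristics(build_mst(nodes)))
    return patterns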
Memory-based pattern extraction
We highlighted in bold the path corresponding to the linearized form (b) of the example parse tree (a).
Pattern extraction by sentence compression
Instead, we chose to modify the method of Filippova and Altun (2013) because it relies on dependency parse trees and does not use any LM scoring.
parse tree is mentioned in 3 sentences in this paper.
Iyyer, Mohit and Enns, Peter and Boyd-Graber, Jordan and Resnik, Philip
Introduction
Figure 1: An example of compositionality in ideological bias detection (red → conservative, blue → liberal, gray → neutral) in which modifier phrases and punctuation cause polarity switches at higher levels of the parse tree.
Recursive Neural Networks
Based on a parse tree , these words form phrases p (Figure 2).
Where Compositionality Helps Detect Ideological Bias
The increased accuracy suggests that the trained RNNs are capable of detecting bias polarity switches at higher levels in parse trees.
parse tree is mentioned in 3 sentences in this paper.
Hermann, Karl Moritz and Blunsom, Phil
Approach
Most prior work on learning compositional semantic representations employs parse trees on their training data to structure their composition functions (Socher et al., 2012; Hermann and Blunsom, 2013, inter alia).
Approach
While these methods have been shown to work in some cases, the need for parse trees and annotated data limits such approaches to resource-fortunate languages.
Overview
This removes a number of constraints that normally come with CVM models, such as the need for syntactic parse trees, word alignment or annotated data as a training signal.
parse tree is mentioned in 3 sentences in this paper.
Guzmán, Francisco and Joty, Shafiq and Màrquez, Llu'is and Nakov, Preslav
Abstract
We first design two discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory.
Conclusions and Future Work
First, we defined two simple discourse-aware similarity metrics (lexicalized and un-lexicalized), which use the all-subtree kernel to compute similarity between discourse parse trees in accordance with the Rhetorical Structure Theory.
Experimental Setup
Combination of four metrics based on syntactic information from constituency and dependency parse trees: ‘CP-STM-4’, ‘DP-HWCM_c-4’, ‘DP-HWCM1-4’, and ‘DP-Or(*)’.
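The all-subtree comparison mentioned here is typically computed with the Collins and Duffy (2001) recursion; a minimal sketch follows, assuming each node exposes a label and a children list (the lexicalized variant and the discourse-specific details of the paper's metrics are not shown):

def count_common(n1, n2):
    # Collins & Duffy (2001) recursion: number of common subtrees rooted at n1 and n2.
    if (n1.label, [c.label for c in n1.children]) != (n2.label, [c.label for c in n2.children]):
        return 0                               # different productions share no subtree
    total = 1
    for c1, c2 in zip(n1.children, n2.children):
        if c1.children:                        # recurse only into nonterminal children
            total *= 1 + count_common(c1, c2)
    return total

def subtree_kernel(nodes1, nodes2):
    # All-subtree kernel: total number of common subtrees between two parse trees,
    # given the lists of their non-leaf nodes.
    return sum(count_common(n1, n2) for n1 in nodes1 for n2 in nodes2)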
parse tree is mentioned in 3 sentences in this paper.