Index of papers in Proc. ACL 2012 that mention
  • parse tree
Shindo, Hiroyuki and Miyao, Yusuke and Fujino, Akinori and Nagata, Masaaki
Background and Related Work
Our SR-TSG work is built upon recent work on Bayesian TSG induction from parse trees (Post and Gildea, 2009; Cohn et al., 2010).
Background and Related Work
A derivation is a process of forming a parse tree.
Background and Related Work
Figure 1a shows an example parse tree and Figure 1b shows an example of its TSG derivation.
Inference
We use Markov Chain Monte Carlo (MCMC) sampling to infer the SR-TSG derivations from parse trees.
Inference
We first infer latent symbol subcategories for every symbol in the parse trees, and then infer latent substitution sites stepwise.
Inference
After that, we unfix that assumption and infer latent substitution sites given symbol-refined parse trees.
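The two-stage inference described above invites a Gibbs-style implementation. The following is a minimal sketch of the second stage only, under assumed conventions (a nested-tuple tree encoding and a placeholder `prob_site` score standing in for the SR-TSG posterior); it is illustrative, not the authors' sampler.

```python
import random

# Sketch: a parse tree as nested tuples (label, children...), and one
# Gibbs-style sweep that resamples a boolean "substitution site" flag at
# every internal node. prob_site is a stand-in for the model's conditional.

def internal_nodes(tree, path=()):
    """Yield a path (tuple of child indices) for every tuple-valued node."""
    if isinstance(tree, tuple):
        yield path
        for i, child in enumerate(tree[1:], start=1):
            yield from internal_nodes(child, path + (i,))

def gibbs_sweep(tree, sites, prob_site, rng=random):
    """Resample the substitution flag of each node from prob_site."""
    for node in internal_nodes(tree):
        sites[node] = rng.random() < prob_site(tree, sites, node)
    return sites

tree = ("S", ("NP", ("DT", "the"), ("NN", "dog")), ("VP", ("VBD", "barked")))
sites = {node: False for node in internal_nodes(tree)}
uniform = lambda t, s, n: 0.5   # placeholder for the posterior odds
print(gibbs_sweep(tree, sites, uniform))
```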
Symbol-Refined Tree Substitution Grammars
As with previous work on TSG induction, our task is the induction of SR-TSG derivations from a corpus of parse trees in an unsupervised fashion.
Symbol-Refined Tree Substitution Grammars
That is, we wish to infer the symbol subcategories of every node and the substitution sites (i.e., nodes where substitution occurs) from parse trees.
parse tree is mentioned in 16 sentences in this paper.
Yang, Nan and Li, Mu and Zhang, Dongdong and Yu, Nenghai
Abstract
In this work, we further extend this line of exploration and propose a novel but simple approach, which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence.
Experiments
None means the original sentences without reordering; Oracle means the best permutation allowed by the source parse tree; ManR refers to manual reordering rules; Rank means the ranking reordering model.
Experiments
On the other hand, the performance of the ranking reordering model still falls far short of the oracle, which achieves the lowest crossing-link number among all possible permutations allowed by the parse tree.
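As a concrete reading of the crossing-link metric, here is a hedged sketch under the usual pair-counting formulation (the function and data layout are assumptions, not the paper's code): links are (source, target) index pairs, and two links cross when they reverse order between the two sides.

```python
from itertools import combinations

def crossing_links(alignment):
    """Count pairs of alignment links (s, t) that cross each other."""
    return sum(
        1
        for (s1, t1), (s2, t2) in combinations(alignment, 2)
        if (s1 - s2) * (t1 - t2) < 0  # opposite order on the two sides
    )

# A monotone alignment has no crossings; swapping the last two target
# positions introduces one.
print(crossing_links([(0, 0), (1, 1), (2, 2)]))  # 0
print(crossing_links([(0, 0), (1, 2), (2, 1)]))  # 1
```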
Introduction
The most notable solution to this problem is adopting syntax-based SMT models, especially methods making use of source-side syntactic parse trees.
Introduction
One is the tree-to-string model (Quirk et al., 2005; Liu et al., 2006), which directly uses source parse trees to derive a large set of translation rules and associated model parameters.
Introduction
The other is called syntax pre-reordering, an approach that repositions source words to approximate the target-language word order as much as possible, based on features from the source syntactic parse trees.
Word Reordering as Syntax Tree Node Ranking
Given a source-side parse tree Te, the task of word reordering is to transform Te into Te′, so that e′ can match the word order of the target language as much as possible.
Word Reordering as Syntax Tree Node Ranking
By permuting tree nodes in the parse tree, the source sentence is reordered into the target-language order.
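A minimal sketch of this node-permutation view, assuming a nested-tuple tree encoding and a hand-written stand-in for the learned ranker (the real model scores children by target-language word-order precedence):

```python
def reorder(tree, rank):
    """Return a copy of the tree with each node's children sorted by rank.

    A stable sort keeps source order for ties, so only nodes the ranker
    actually distinguishes are moved.
    """
    if isinstance(tree, str):
        return tree
    label, children = tree[0], [reorder(c, rank) for c in tree[1:]]
    children.sort(key=lambda child: rank(label, child))
    return (label,) + tuple(children)

def leaves(tree):
    return [tree] if isinstance(tree, str) else [w for c in tree[1:] for w in leaves(c)]

# Toy stand-in ranker: push the verb after its object, as in SVO -> SOV.
rank = lambda parent, child: 1 if (parent == "VP" and child[0].startswith("V")) else 0
tree = ("S", ("NP", "John"), ("VP", ("VBZ", "reads"), ("NP", "books")))
print(" ".join(leaves(reorder(tree, rank))))   # -> "John books reads"
```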
Word Reordering as Syntax Tree Node Ranking
parse tree, we can obtain the same word order as the Japanese translation.
parse tree is mentioned in 12 sentences in this paper.
Chen, Xiao and Kit, Chunyu
Abstract
This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree.
Conclusion
This paper has presented a higher-order model for constituent parsing that factorizes a parse tree into larger parts than before, in hopes of increasing its power of discriminating the true parse from the others without losing tractability.
Higher-order Constituent Parsing
Figure 1: A part of a parse tree centered at NP → NP VP
Higher-order Constituent Parsing
A part in a parse tree is illustrated in Figure 1.
Introduction
Previous discriminative parsing models usually factor a parse tree into a set of parts.
Introduction
It allows multiple adjacent grammar rules in each part of a parse tree, so as to utilize more local structural context to decide the plausibility of a grammar rule instance.
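To make the notion of a part concrete, here is a small sketch under an assumed nested-tuple tree encoding (not Chen and Kit's implementation): each part bundles a rule instance with the rules of its children, i.e., multiple adjacent grammar rules.

```python
def rule_of(node):
    """CFG rule at a node, or None for preterminals and words."""
    if isinstance(node, str) or not any(isinstance(c, tuple) for c in node[1:]):
        return None
    return node[0] + " -> " + " ".join(
        c[0] if isinstance(c, tuple) else c for c in node[1:]
    )

def parts(tree):
    """Yield each rule instance together with its children's rules."""
    if rule_of(tree) is None:
        return
    yield {
        "center": rule_of(tree),
        "children": [rule_of(c) for c in tree[1:] if rule_of(c) is not None],
    }
    for child in tree[1:]:
        yield from parts(child)

tree = ("S", ("NP", ("DT", "the"), ("NN", "dog")), ("VP", ("VBD", "barked")))
for p in parts(tree):
    print(p)   # e.g. {'center': 'S -> NP VP', 'children': [...]}
```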
parse tree is mentioned in 6 sentences in this paper.
Stern, Asher and Stern, Roni and Dagan, Ido and Felner, Ariel
Background
We focus on methods that perform transformations over parse trees, and highlight the search challenge with which they are faced.
Background
In our domain, each state is a parse tree , which is expanded by performing all applicable transformations.
Search for Textual Inference
Let t be a parse tree, and let o be a transformation.
Search for Textual Inference
Denoting by tT and tH the text parse tree and the hypothesis parse tree, a proof system has to find a sequence O with minimal cost such that tT ⊢O tH. This forms a search problem of finding the lowest-cost proof among all possible proofs.
Search for Textual Inference
Next, for a transformation o applied on a parse tree t, we define required(t, o) as the subset of t's nodes required for applying o (i.e., in the absence of these nodes, o could not be applied).
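The proof-search formulation above maps naturally onto uniform-cost search. A minimal sketch, with an assumed `transformations` interface (states must be hashable; everything here is illustrative rather than the authors' implementation):

```python
import heapq

def cheapest_proof(t_T, t_H, transformations):
    """Uniform-cost search for the lowest-cost transformation sequence."""
    frontier = [(0.0, 0, t_T, [])]   # (cost, tiebreak, state, proof so far)
    seen, counter = set(), 0
    while frontier:
        cost, _, tree, proof = heapq.heappop(frontier)
        if tree == t_H:
            return cost, proof
        if tree in seen:
            continue
        seen.add(tree)
        for step_cost, name, nxt in transformations(tree):
            counter += 1   # tiebreak avoids comparing states in the heap
            heapq.heappush(frontier, (cost + step_cost, counter, nxt, proof + [name]))
    return None

# Toy transformation space over strings standing in for parse trees.
def transformations(tree):
    if tree == "it rains":
        yield 1.0, "synonym", "it pours"
    if tree == "it pours":
        yield 0.5, "drop-subject", "pours"

print(cheapest_proof("it rains", "pours", transformations))
# -> (1.5, ['synonym', 'drop-subject'])
```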
parse tree is mentioned in 6 sentences in this paper.
Constant, Matthieu and Sigogne, Anthony and Watrin, Patrick
Evaluation
In order to compare both approaches, parse trees generated by BKYc were automatically transformed into trees with the same MWE annotation scheme as the trees generated by BKY.
MWE-dedicated Features
The reranker templates are instantiated only for the nodes of the candidate parse tree that are leaves dominated by an MWE node (i.e.
MWE-dedicated Features
dominated by an MWE node m in the current parse tree p,
parse tree is mentioned in 3 sentences in this paper.
Feng, Vanessa Wei and Hirst, Graeme
Method
Since EDU boundaries are highly correlated with the syntactic structures embedded in the sentences, EDU segmentation is a relatively trivial step: using machine-generated syntactic parse trees, HILDA achieves an F-score of 93.8% for EDU segmentation.
Method
HILDA’s features: We incorporate the original features used in the HILDA discourse parser with slight modification, which include the following four types of features occurring in SL, SR, or both: (1) N-gram prefixes and suffixes; (2) syntactic tag prefixes and suffixes; (3) lexical heads in the constituent parse tree; and (4) POS tag of the dominating nodes.
Related work
They showed that the production rules extracted from constituent parse trees are the most effective features, while contextual features are the weakest.
parse tree is mentioned in 3 sentences in this paper.
Takamatsu, Shingo and Sato, Issei and Nakagawa, Hiroshi
Experiments
We used syntactic features (i.e., features obtained from the dependency parse tree of a sentence), lexical features, and entity types, which essentially correspond to the ones developed by Mintz et al.
Knowledge-based Distant Supervision
Since two entities mentioned in a sentence do not always have a relation, we select entity pairs from a corpus when: (i) the path of the dependency parse tree between the corresponding two named entities in the sentence is no longer than 4 and (ii) the path does not contain a sentence-like boundary, such as a relative clause (Banko et al., 2007; Banko and Etzioni, 2008).
Wrong Label Reduction
We define a pattern as the entity types of an entity pair as well as the sequence of words on the path of the dependency parse tree from the first entity to the second one.
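A hedged sketch of this pair selection and pattern extraction, under assumed data structures (tokens, entity types, and an undirected dependency adjacency map over token indices; the relative-clause test is reduced to a placeholder set of clause heads):

```python
from collections import deque

def dependency_path(edges, start, goal):
    """Shortest path of token indices from start to goal via BFS, or None."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in edges.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

def pattern(tokens, types, edges, e1, e2, clause_heads=frozenset()):
    """Entity types plus the words on the dependency path, or None if filtered."""
    path = dependency_path(edges, e1, e2)
    if path is None or len(path) - 1 > 4:           # rule (i): path length <= 4
        return None
    if any(i in clause_heads for i in path[1:-1]):  # rule (ii), placeholder test
        return None
    return (types[e1], types[e2]) + tuple(tokens[i] for i in path[1:-1])

tokens = ["Obama", "was", "born", "in", "Hawaii"]
types = {0: "PERSON", 4: "LOCATION"}
edges = {0: [2], 1: [2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(pattern(tokens, types, edges, 0, 4))  # ('PERSON', 'LOCATION', 'born', 'in')
```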
parse tree is mentioned in 3 sentences in this paper.
Yamangil, Elif and Shieber, Stuart
Inference
Following previous work, we design a blocked Metropolis-Hastings sampler that samples derivations of entire parse trees all at once in a joint fashion (Cohn and Blunsom, 2010; Shindo et al., 2011).
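For readers unfamiliar with blocked sampling, a generic Metropolis-Hastings step over whole derivations could look like the sketch below; the `log_p` and `propose` callables are assumptions standing in for the model score and the proposal distribution, and this is not the paper's sampler.

```python
import math
import random

def mh_step(derivation, log_p, propose, rng=random):
    """One blocked MH step: propose a whole new derivation, accept or reject.

    propose returns (new_derivation, forward_log_q, backward_log_q).
    """
    new, log_q_fwd, log_q_bwd = propose(derivation)
    log_accept = (log_p(new) - log_p(derivation)) + (log_q_bwd - log_q_fwd)
    if math.log(rng.random()) < log_accept:
        return new        # accept: the entire derivation flips at once
    return derivation     # reject: keep the current derivation

# Toy demo: "derivations" are integers, the model prefers larger values,
# and the proposal is a symmetric random walk (forward == backward log-q).
log_p = lambda d: float(d)
propose = lambda d: (d + random.choice([-1, 1]), 0.0, 0.0)
state = 0
for _ in range(100):
    state = mh_step(state, log_p, propose)
print(state)
```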
Introduction
Recent work that incorporated Dirichlet process (DP) nonparametric models into TSGs has provided an efficient solution to the problem of segmenting training data trees into elementary parse tree fragments to form the grammar (Cohn et al., 2009; Cohn and Blunsom, 2010; Post and Gildea, 2009).
Introduction
Figure 2: TIG-to-TSG transform: (a) and (b) illustrate transformed TSG derivations for two different TIG derivations of the same parse tree structure.
parse tree is mentioned in 3 sentences in this paper.