Index of papers in Proc. ACL 2008 that mention
  • parse tree
Gómez-Rodríguez, Carlos and Carroll, John and Weir, David
Dependency parsing schemata
Parse tree: A partial dependency tree T ∈ D-trees is a parse tree for a given string w1 . . . wn.
Dependency parsing schemata
…, we will say it is a projective parse tree for the string.
Dependency parsing schemata
Final items in this formalism will be those containing some forest F that contains a parse tree for some arbitrary string.
Introduction
Each item contains a piece of information about the sentence’s structure, and a successful parsing process will produce at least one final item containing a full parse tree for the sentence or guaranteeing its existence.
Introduction
Items in parsing schemata are formally defined as sets of partial parse trees from a set denoted
Introduction
Trees(G), which is the set of all the possible partial parse trees that do not violate the constraints imposed by a grammar G. More formally, an item set I is defined by Sikkel as a quotient set associated with an equivalence relation on Trees(G).
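As a compact restatement of the definition in the excerpt above (a sketch in standard notation; the relation symbol ~ and the class notation [τ] follow the parsing-schemata literature and do not appear in this excerpt):

    \mathcal{I} \;=\; \mathrm{Trees}(G)/\!\sim \;=\; \{\,[\tau] \mid \tau \in \mathrm{Trees}(G)\,\},
    \qquad [\tau] \;=\; \{\,\tau' \in \mathrm{Trees}(G) \mid \tau' \sim \tau\,\}

Each item is thus an equivalence class of partial parse trees, so a deduction step manipulates whole classes rather than individual trees.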
parse tree is mentioned in 13 sentences in this paper.
Lee, John and Seneff, Stephanie
Abstract
A basic approach is template matching on parse trees.
Abstract
To improve recall, irregularities in parse trees caused by verb form errors are taken into account; to improve precision, n-gram counts are utilized to filter proposed corrections.
Data 5.1 Development Data
To investigate irregularities in parse tree patterns (see §3.2), we utilized the AQUAINT Corpus of English News Text.
Introduction
We build on the basic approach of template-matching on parse trees in two ways.
Introduction
To improve recall, irregularities in parse trees caused by verb form errors are considered; to improve precision, n-gram counts are utilized to filter proposed corrections.
Previous Research
Similar strategies with parse trees are pursued in (Bender et al., 2004), and error templates are utilized in (Heidorn, 2000) for a word processor.
Previous Research
Relative to verb forms, errors in these categories do not “disturb” the parse tree as much.
Research Issues
The success of this strategy, then, hinges on accurate identification of these items, for example, from parse trees.
Research Issues
In other words, sentences containing verb form errors are more likely to yield an “incorrect” parse tree, sometimes with significant differences.
Research Issues
One goal of this paper is to recognize irregularities in parse trees caused by verb form errors, in order to increase recall.
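To make the template-matching idea in these excerpts concrete, here is a minimal sketch for one hypothetical verb form error (a modal directly followed by a past-tense verb, as in "he can went"). It uses NLTK-style trees; the template and every name below are illustrative assumptions, not the authors' actual template inventory or code.

from nltk.tree import Tree

def match_modal_plus_past(tree):
    # Flag VPs in which a modal (MD) is immediately followed by a
    # past-tense verb (VBD): one plausible verb-form-error template.
    hits = []
    for vp in tree.subtrees(lambda t: t.label() == "VP"):
        tags = [child.label() for child in vp if isinstance(child, Tree)]
        for i in range(len(tags) - 1):
            if tags[i] == "MD" and tags[i + 1] == "VBD":
                hits.append(vp)
                break
    return hits

t = Tree.fromstring("(S (NP (PRP He)) (VP (MD can) (VBD went) (NP (NN home))))")
print(match_modal_plus_past(t))  # one matching VP

A real system would pair each template with a candidate correction and, as the abstract describes, filter the proposals with n-gram counts.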
parse tree is mentioned in 14 sentences in this paper.
Mi, Haitao and Huang, Liang and Liu, Qun
Abstract
Among syntax-based translation models, the tree-based approach, which takes as input a parse tree of the source sentence, is a promising direction being faster and simpler than its string-based counterpart.
Conclusion and future work
We have presented a novel forest-based translation approach which uses a packed forest rather than the 1-best parse tree (or k-best parse trees) to direct the translation.
Experiments
Using more than one parse tree apparently improves the BLEU score, but at the cost of much slower decoding, since each of the top-k trees has to be decoded individually although they share many common subtrees.
Experiments
Axis label from Figure 5: i (rank of the parse tree picked by the decoder).
Experiments
Figure 5: Percentage of the i-th best parse tree being picked in decoding.
Forest-based translation
Informally, a packed parse forest, or forest in short, is a compact representation of all the derivations (i.e., parse trees) for a given sentence under a context-free grammar (Billot and Lang, 1989).
Forest-based translation
The parse tree for the preposition case is shown in Figure 2(b) as the 1-best parse, while for the conjunction case, the two proper nouns (Basin and Shalong) are combined to form a coordinated NP.
Forest-based translation
Shown in Figure 3(a), these two parse trees can be represented as a single forest by sharing common subtrees such as NPB0,1 and VPB3,6.
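A minimal sketch of the packed-forest idea in the three excerpts above: nodes are nonterminals over spans, each node can have several incoming hyperedges (alternative derivations), and common subtrees such as NPB0,1 and VPB3,6 are stored once and shared. The classes and the concrete spans below are illustrative assumptions, not the authors' data structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    # A forest node: a nonterminal over a span, e.g. Node("NPB", 0, 1) for NPB[0,1].
    label: str
    start: int
    end: int

class PackedForest:
    # Maps each node to the list of hyperedges (ordered child tuples)
    # that can derive it; shared children are stored only once.
    def __init__(self):
        self.edges = {}

    def add(self, head, children):
        self.edges.setdefault(head, []).append(tuple(children))

forest = PackedForest()
npb, vpb = Node("NPB", 0, 1), Node("VPB", 3, 6)
ip = Node("IP", 0, 6)
forest.add(ip, [npb, Node("PP", 1, 3), vpb])  # preposition reading
forest.add(ip, [Node("NP", 0, 3), vpb])       # coordinated-NP reading
print(len(forest.edges[ip]))                  # 2 derivations packed into one node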
Introduction
Depending on the type of input, these efforts can be divided into two broad categories: the string-based systems whose input is a string to be simultaneously parsed and translated by a synchronous grammar (Wu, 1997; Chiang, 2005; Galley et al., 2006), and the tree-based systems whose input is already a parse tree to be directly converted into a target tree or string (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006).
Introduction
However, despite these advantages, current tree-based systems suffer from a major drawback: they only use the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006).
parse tree is mentioned in 11 sentences in this paper.
Zhang, Min and Jiang, Hongfei and Aw, Aiti and Li, Haizhou and Tan, Chew Lim and Li, Sheng
Introduction
A tree sequence refers to an ordered subtree sequence that covers a phrase or a consecutive tree fragment in a parse tree.
Related Work
Yamada and Knight (2001) use a noisy-channel model to transfer a target parse tree into a source sentence.
Related Work
(2006) propose a feature-based discriminative model for target language syntactic structures prediction, given a source parse tree.
Related Work
(2006) create an xRS rule headed by a pseudo, non-syntactic nonterminal symbol that subsumes the phrase and its corresponding multi-headed syntactic structure; and one sibling xRS rule that explains how the pseudo symbol can be combined with other genuine non-terminals for acquiring the genuine parse trees.
Tree Sequence Alignment Model
source and target parse trees T(f1^J) and T(e1^I) in Fig.
Tree Sequence Alignment Model
Fig. 2 illustrates two examples of tree sequences derived from the two parse trees.
Tree Sequence Alignment Model
and their parse trees T(f1^J) and T(e1^I), the tree
parse tree is mentioned in 18 sentences in this paper.
Kaufmann, Tobias and Pfister, Beat
Abstract
We propose a language model based on a precise, linguistically motivated grammar (a handcrafted Head-driven Phrase Structure Grammar) and a statistical model estimating the probability of a parse tree .
Conclusions and Outlook
first step in this direction by estimating the probability of a parse tree.
Conclusions and Outlook
However, our model only looks at the structure of a parse tree and does not take the actual words into account.
Experiments
As P(T) does not directly apply to parse trees, all possible readings have to be unpacked.
Experiments
For these lattices the grammar-based language model was simply switched off in the experiment, as no parse trees were produced for efficiency reasons.
Language Model 2.1 The General Approach
(2) Pgram(W) is defined as the probability of the most likely parse tree of a word sequence W: Pgram(W) = max_{T ∈ Parses(W)} P(T) (3) To determine Pgram(W) is an expensive operation as it involves parsing.
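The equation defines Pgram(W) as the score of the best parse, so computing it requires a full search over parses. As a toy illustration, here is a Viterbi-CKY sketch for a PCFG in Chomsky normal form; the grammar encoding and all names are assumptions, and the paper's HPSG-based model is of course far richer.

import math
from collections import defaultdict

def viterbi_cky(words, lexical, binary):
    # Compute max over T in Parses(W) of log P(T) for a CNF PCFG.
    # lexical: {(A, word): logprob}; binary: {(A, B, C): logprob} for A -> B C.
    n = len(words)
    best = defaultdict(lambda: -math.inf)  # (i, j, A) -> best log prob of A over words[i:j]
    for i, w in enumerate(words):
        for (A, word), lp in lexical.items():
            if word == w:
                best[i, i + 1, A] = max(best[i, i + 1, A], lp)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), lp in binary.items():
                    cand = lp + best[i, k, B] + best[k, j, C]
                    if cand > best[i, j, A]:
                        best[i, j, A] = cand
    return best[0, n, "S"]

lexical = {("NP", "dogs"): math.log(1.0), ("V", "bark"): math.log(1.0)}
binary = {("S", "NP", "V"): math.log(1.0)}
print(viterbi_cky(["dogs", "bark"], lexical, binary))  # 0.0, i.e. probability 1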
Language Model 2.1 The General Approach
2.2 The Probability of a Parse Tree
Language Model 2.1 The General Approach
The parse trees produced by our parser are binary-branching and rather deep.
parse tree is mentioned in 10 sentences in this paper.
Hoyt, Frederick and Baldridge, Jason
Deriving Eisner Normal Form
(30) For a set S of semantically equivalent parse trees for a string ABC, admit the unique parse tree such that at least one of (i) or (ii) holds:
Deriving Eisner Normal Form
(31) Theorem 1: For every parse tree α, there is a semantically equivalent parse tree NF(α) in which no node resulting from application of B or S functions as the primary functor in a rule application.
Deriving Eisner Normal Form
(32) Theorem 2: If NF(α) and NF(α′) are distinct parse trees, then their model-theoretic interpretations are distinct.
parse tree is mentioned in 6 sentences in this paper.
Agirre, Eneko and Baldwin, Timothy and Martinez, David
Background
While a detailed description of the respective parsing models is beyond the scope of this paper, it is worth noting that both parsers induce a context-free grammar as well as a generative parsing model from a training set of parse trees, and use a development set to tune internal parameters.
Experimental setting
One of the main requirements for our dataset is the availability of gold-standard sense and parse tree annotations.
Experimental setting
The gold-standard parse tree annotations are required in order to carry out evaluation of parser and PP attachment performance.
Experimental setting
Following Atterer and Schütze (2007), we wrote a script that, given a parse tree, identifies instances of PP attachment ambiguity and outputs the (v, n1, p, n2) quadruple involved and the attachment decision.
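A rough sketch of what such an extraction script could look like for the verb-attachment configuration, over NLTK-style trees. The tree shapes, the rightmost-noun head heuristic, and all names are illustrative assumptions, not the authors' (or Atterer and Schütze's) code.

from nltk.tree import Tree

def pp_quadruples(tree):
    # Find VPs of the shape VP -> ... VB* ... NP PP and emit the
    # (v, n1, p, n2) quadruple, taking the rightmost word of each NP
    # as a crude head approximation.
    quads = []
    for vp in tree.subtrees(lambda t: t.label() == "VP"):
        kids = [k for k in vp if isinstance(k, Tree)]
        labels = [k.label() for k in kids]
        if "NP" in labels and "PP" in labels and labels.index("NP") < labels.index("PP"):
            v = next((k.leaves()[0] for k in kids if k.label().startswith("VB")), None)
            n1 = kids[labels.index("NP")].leaves()[-1]
            pp = kids[labels.index("PP")]
            p, n2 = pp.leaves()[0], pp.leaves()[-1]
            if v is not None:
                quads.append((v, n1, p, n2))
    return quads

t = Tree.fromstring("(S (NP (PRP He)) (VP (VBD ate) (NP (NN pizza)) "
                    "(PP (IN with) (NP (NNS anchovies)))))")
print(pp_quadruples(t))  # [('ate', 'pizza', 'with', 'anchovies')]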
Introduction
Traditionally, parse disambiguation has relied on structural features extracted from syntactic parse trees, and made only limited use of semantic information.
parse tree is mentioned in 5 sentences in this paper.
Zhang, Dongdong and Li, Mu and Duan, Nan and Li, Chi-Ho and Zhou, Ming
Experiments
might be incorrect due to errors in English parse trees.
Experiments
Given a source sentence, the corresponding syntax parse tree T_S is first constructed with an English parser.
Experiments
The other problem comes from the English head word selection error introduced by using source parse trees .
Model Training and Application 3.1 Training
Based on the source syntax parse tree, for each measure word, we identified its head word by using a toolkit from (Chiang and Bikel, 2002) which can heuristically identify head words for sub-trees.
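For illustration, a simplified head-percolation sketch in the spirit of Collins-style head rules; the actual (Chiang and Bikel, 2002) toolkit's tables and interface differ, and everything below is an assumption.

def head_child(parent, children, rules):
    # Return the index of the head child, given the parent label, the list
    # of child labels, and rules of the form {parent: (direction, priority_list)}.
    direction, priorities = rules.get(parent, ("right", []))
    order = list(range(len(children)))
    if direction == "right":
        order.reverse()  # scan from the right for right-headed categories
    for label in priorities:
        for i in order:
            if children[i] == label:
                return i
    return order[0]  # fall back to the first child in scan order

rules = {"NP": ("right", ["NN", "NNS", "NNP"]), "VP": ("left", ["VBD", "VBZ", "VB", "MD"])}
print(head_child("NP", ["DT", "JJ", "NN"], rules))  # 2: 'NN' heads the NP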
Our Method
The source head word feature is defined to be a function f1 to indicate whether a word ei is the source head word in English according to a parse tree of the source sentence.
parse tree is mentioned in 5 sentences in this paper.
Vickrey, David and Koller, Daphne
Introduction
In the sentence “He expected to receive a prize for winning,” the path from “win” to its ARG0, “he”, involves the verbs “expect” and “receive” and the preposition “for.” The corresponding path through the parse tree likely occurs a relatively small number of times (or not at all) in the training corpus.
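To make the path feature in this excerpt concrete, here is a small sketch that computes the category path between two leaves of an NLTK tree (up steps to the lowest common ancestor joined with '^', down steps with '!'). The notation and the code are assumptions for illustration, not the authors' implementation.

from nltk.tree import Tree

def tree_path(tree, i, j):
    # Path from the preterminal of leaf i up to the lowest common
    # ancestor, then down to the preterminal of leaf j.
    pi = tree.leaf_treeposition(i)[:-1]  # drop the final index into the leaf string
    pj = tree.leaf_treeposition(j)[:-1]
    k = 0
    while k < min(len(pi), len(pj)) and pi[k] == pj[k]:
        k += 1
    up = [tree[pi[:d]].label() for d in range(len(pi), k - 1, -1)]
    down = [tree[pj[:d]].label() for d in range(k + 1, len(pj) + 1)]
    return "^".join(up) + ("!" + "!".join(down) if down else "")

t = Tree.fromstring("(S (NP (PRP He)) (VP (VBD won) (NP (DT a) (NN prize))))")
print(tree_path(t, 1, 0))  # VBD^VP^S!NP!PRP: from 'won' up to S, down to 'He'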
Simple Sentence Production
This procedure is quite expensive; we have to copy the entire parse tree at each step, and in general, this procedure could generate an exponential number of transformed parses.
Simplification Data Structure
In our case, the AND nodes are similar to constituent nodes in a parse tree — each has a category (e.g.
Transformation Rules
A transformation rule takes as input a parse tree and produces as output a different, changed parse tree.
parse tree is mentioned in 4 sentences in this paper.
Finkel, Jenny Rose and Kleeman, Alex and Manning, Christopher D.
Experiments
In Figure 3 we show for an example from section 22 the parse trees produced by our generative model and our feature-based discriminative model, and the correct parse.
The Model
of the parse tree, given the sentence, not joint likelihood of the tree and sentence; and (b) probabilities are normalized globally instead of locally — the graphical models depiction of our trees is undirected.
The Model
We define τ(s) to be the set of all possible parse trees for the given sentence licensed by the grammar G.
parse tree is mentioned in 3 sentences in this paper.
Goldberg, Yoav and Tsarfaty, Reut
A Generative PCFG Model
…, and a morphological analyzer, we look for the most probable parse tree π s.t.
A Generative PCFG Model
Hence, our parser searches for a parse tree π over lexemes (l1 . . . ln) s.t.
A Generative PCFG Model
Thus our proposed model is a proper model assigning probability mass to all (π, L) pairs, where π is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alphabet gives rise to.
parse tree is mentioned in 3 sentences in this paper.
Schulte im Walde, Sabine and Hying, Christian and Scheible, Christian and Schmid, Helmut
Summary and Outlook
Furthermore, we aim to use the verb class model in NLP tasks, (i) as a resource for lexical induction of verb senses, verb alternations, and collocations, and (ii) as a lexical resource for the statistical disambiguation of parse trees.
Verb Class Model 2.1 Probabilistic Model
Figure 1: Example parse tree.
Verb Class Model 2.1 Probabilistic Model
(b) The training tuples are processed: For each tuple, a PCFG parse forest as indicated by Figure 1 is computed, and the Inside-Outside algorithm is applied to estimate the frequencies of the “parse tree rules”, given the current model probabilities.
parse tree is mentioned in 3 sentences in this paper.
Zhao, Shiqi and Wang, Haifeng and Liu, Ting and Li, Sheng
Proposed Method
Let SE be an English sentence, TE the parse tree of SE, e a word of SE; we define the subtree and partial subtree following the definitions in (Ouangraoua et al., 2007).
Proposed Method
If ei is a descendant of ej in the parse tree, we remove posi from PE(e).
Proposed Method
Note that the Chinese patterns are not extracted from parse trees.
parse tree is mentioned in 3 sentences in this paper.