Index of papers in Proc. ACL 2011 that mention
  • parse trees
Tan, Ming and Zhou, Wenli and Zheng, Lei and Wang, Shaojun
Composite language model
Figure 1: A composite n-gram/m-SLM/PLSA language model where the hidden information is the parse tree T and semantic content g.
Training algorithm
the lth sentence Wl with its parse tree structure Tl
Training algorithm
of tag t predicted by word w and the tags of the m most recent exposed headwords in parse tree Tl of the lth sentence Wl in document d, and finally #(a, h, Wl, Tl, d) is the count of constructor move a conditioned on the m exposed headwords h in parse tree Tl of the lth sentence Wl in document d.
Training algorithm
For a given sentence, its parse tree and semantic content are hidden, and the number of parse trees grows faster than exponentially with sentence length; Wang et al.
parse trees is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
Zhao, Bing and Lee, Young-Suk and Luo, Xiaoqiang and Li, Liu
Abstract
We propose a novel technique of learning how to transform the source parse trees to improve the translation qualities of syntax-based translation models using synchronous context-free grammars.
Decoding
Given a grammar G, and the input source parse tree π from a monolingual parser, we first construct the elementary tree for a source span, and then retrieve all the relevant subgraphs seen in the given grammar through the proposed operators.
Elementary Trees to String Grammar
We propose to use variations of an elementary tree, which is a connected subgraph fitted in the original monolingual parse tree.
Elementary Trees to String Grammar
where vf is the set of frontier nodes, which contain nonterminals or words; vi are the interior nodes with source labels/symbols; E is the set of edges connecting the nodes v = vf + vi into a connected subgraph fitted in the source parse tree; δ is the immediate common parent of the frontier nodes vf.
Experiments
There are 16 thousand human parse trees with human alignment; an additional 1 thousand human-parsed and aligned sent-pairs are used as an unseen test set to verify our MaxEnt models and parsers.
Introduction
For instance, in Arabic—to—English translation, we find only 45.5% of Arabic NP-SBJ structures are mapped to the English NP-SBJ with machine alignment and parse trees, and only 60.1% of NP-SBJs are mapped with human alignment and parse trees as in § 2.
Introduction
Mi and Huang (2008) introduced parse forests to blur the chunking decisions to a certain degree, to expand search space and reduce parsing errors from 1-best trees (Mi et al., 2008); others tried to use the parse trees as soft constraints on top of unlabeled grammar such as Hiero (Marton and Resnik, 2008; Chiang, 2010; Huang et al., 2010; Shen et al., 2010) without sufficiently leveraging rich tree context.
Introduction
On the basis of our study on investigating the language divergence between Arabic-English with human aligned and parsed data, we integrate several simple statistical operations, to transform parse trees adaptively to serve the
The Projectable Structures
We carried out a controlled study on the projectable structures using human annotated parse trees and word alignment for 5k Arabic—English sentence-pairs.
The Projectable Structures
In Table 1, the unlabeled F-measures with machine alignment and parse trees show that, for only 48.71% of the time, the boundaries introduced by the source parses
The Projectable Structures
Table 1: The labeled and unlabeled F-measures for projecting the source nodes onto the target side via alignments and parse trees; unlabeled F-measures show the bracketing accuracies for translating a source span contiguously.
parse trees is mentioned in 15 sentences in this paper.
Topics mentioned in this paper:
Mylonakis, Markos and Sima'an, Khalil
Abstract
The key assumption behind many approaches is that translation is guided by the source and/or target language parse, employing rules extracted from the parse tree or performing tree transformations.
Conclusions
A further promising direction is broadening this set with labels taking advantage of both source and target-language linguistic annotation or categories exploring additional phrase-pair properties past the parse trees such as semantic annotations.
Experiments
The results in Table 2(a) indicate that a large part of the performance improvement can be attributed to the use of the linguistic annotations extracted from the source parse trees, indicating the potential of the LTS system to take advantage of such additional annotations to deliver better translations.
Introduction
Recent research tries to address these issues, by restructuring training data parse trees to better suit syntax-based SMT training (Wang et al., 2010), or by moving from linguistically motivated synchronous grammars to systems where linguistic plausibility of the translation is assessed through additional features in a phrase-based system (Venugopal et al., 2009; Chiang et al., 2009), obscuring the impact of higher level syntactic processes.
Related Work
Earlier approaches for linguistic syntax-based translation such as (Yamada and Knight, 2001; Galley et al., 2006; Huang et al., 2006; Liu et al., 2006) focus on memorising and reusing parts of the structure of the source and/or target parse trees and constraining decoding by the input parse tree.
parse trees is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Bodenstab, Nathan and Dunlop, Aaron and Hall, Keith and Roark, Brian
Background
We define an edge’s figure-of-merit (FOM) as an estimate of the product of its inside (β) and outside (α) scores, conceptually the relative merit the edge has to participate in the final parse tree (see Figure 1).
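As a rough illustration of how such a figure-of-merit combines the two scores (a minimal sketch; the function name and toy probabilities are ours, not the paper's):

```python
# A rough illustration of combining an edge's inside score with an
# approximate outside score, in log space for numerical stability.
# The function name and toy values are illustrative assumptions.
import math

def figure_of_merit(inside_score, outside_estimate):
    """Log of the product of an edge's inside score and an approximate
    outside score; higher means the edge is more likely to participate
    in the final parse tree."""
    return math.log(inside_score) + math.log(outside_estimate)
```

An edge with a high inside score can still rank below a competitor once a poor outside estimate is folded in, which is what makes the FOM useful for pruning.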
Background
predictions about the unlabeled constituent structure of the target parse tree.
Beam-Width Prediction
The optimal point will necessarily be very conservative, allowing outliers (sentences or sub-phrases with above average ambiguity) to stay within the beam and produce valid parse trees.
Introduction
Exhaustive search for the maximum likelihood parse tree with a state-of-the-art grammar can require over a minute of processing for a single sentence of 25 words, an unacceptable amount of time for real-time applications or when processing millions of sentences.
parse trees is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Chan, Yee Seng and Roth, Dan
Mention Extraction System
We extract the label of the parse tree constituent (if it exists) that exactly covers the mention, and also the labels of all constituents that cover the mention.
Mention Extraction System
From a sentence, we gather the following as candidate mentions: all nouns and possessive pronouns, all named entities annotated by the NE tagger (Ratinov and Roth, 2009), all base noun phrase (NP) chunks, all chunks satisfying the pattern: NP (PP NP)+, all NP constituents in the syntactic parse tree, and from each of these constituents, all substrings consisting of two or more words, provided the substrings neither start nor end on a punctuation mark.
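The constituent-based part of this candidate-gathering step can be sketched as follows; the (label, children) tuple tree encoding and the helper names are illustrative assumptions, not the authors' implementation:

```python
# Sketch: collect NP constituents from a parse tree and enumerate word
# substrings that neither start nor end on punctuation. The
# (label, children) tuple encoding is an illustrative assumption.

PUNCT = {".", ",", "!", "?", ";", ":"}

def leaves(tree):
    """Left-to-right words under a (label, children) tree."""
    _, children = tree
    words = []
    for child in children:
        if isinstance(child, tuple):
            words.extend(leaves(child))
        else:
            words.append(child)
    return words

def np_constituents(tree):
    """Word sequences of all NP-labeled constituents, top-down."""
    label, children = tree
    found = [leaves(tree)] if label == "NP" else []
    for child in children:
        if isinstance(child, tuple):
            found.extend(np_constituents(child))
    return found

def candidate_substrings(words):
    """All substrings of two or more words that neither start nor end
    on a punctuation mark."""
    return [
        words[i:j]
        for i in range(len(words))
        for j in range(i + 2, len(words) + 1)
        if words[i] not in PUNCT and words[j - 1] not in PUNCT
    ]
```

For example, on the tree `("S", [("NP", [("DT", ["the"]), ("NN", ["president"])]), ("VP", [("VBD", ["spoke"])])])`, `np_constituents` yields the single NP span "the president".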
Syntactico-Semantic Structures
with lw of mi in the sentence; syntactic parse-label of the parse tree constituent that exactly covers mi
Syntactico-Semantic Structures
parse-labels of parse tree constituents covering mi
parse trees is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Zollmann, Andreas and Vogel, Stephan
Conclusion and discussion
with the model of Zollmann and Venugopal (2006), using heuristically generated labels from parse trees.
Introduction
(2006), target language parse trees are used to identify rules and label their nonterminal symbols, while Liu et al.
Introduction
(2006) use source language parse trees instead.
Introduction
Zollmann and Venugopal (2006) directly extend the rule extraction procedure from Chiang (2005) to heuristically label any phrase pair based on target language parse trees.
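Heuristic labeling of this kind can be sketched as below; the simplified label set (exact constituent match, a single "A+B" concatenation, and a generic fallback) is our assumption and omits the categorial "A/B" and "A\B" labels of the full SAMT scheme:

```python
# Simplified sketch of labeling a phrase span from target-side parse
# constituents: exact match first, then a two-piece concatenation,
# then a generic X. Not the authors' exact procedure.

def samt_label(span, constituents):
    """constituents maps (start, end) word spans to syntactic labels."""
    if span in constituents:
        return constituents[span]
    start, end = span
    # Try a concatenation of two adjacent constituents: "A+B".
    for mid in range(start + 1, end):
        left = constituents.get((start, mid))
        right = constituents.get((mid, end))
        if left and right:
            return left + "+" + right
    return "X"  # fall back to a generic nonterminal
```

A span covering an NP followed by a VP, with no single constituent over it, would thus receive the composed label "NP+VP".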
parse trees is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Abu-Jbara, Amjad and Radev, Dragomir
Approach
Figure 1 shows a portion of the parse tree for Sentence (1) (from Section 1).
Approach
We extract the scope of the reference from the parse tree as follows.
Approach
For example, the parse tree shown in Figure 1 suggests that the scope of the reference is:
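One plausible reading of such a scope rule, sketched under our assumption (not necessarily the paper's exact procedure) that the scope is the yield of the smallest clause-level constituent containing the reference word:

```python
# Sketch: scope of a reference = words of the smallest clause-level
# (S/SBAR) constituent containing it. Trees are (label, children)
# tuples with string leaves; this encoding is an illustrative
# assumption, not the paper's data structure.

def _leaves(tree):
    _, children = tree
    out = []
    for child in children:
        out.extend(_leaves(child) if isinstance(child, tuple) else [child])
    return out

def reference_scope(tree, ref_word, clause_labels=("S", "SBAR")):
    """Words of the smallest clause-level constituent containing
    ref_word, or None if no such constituent exists."""
    label, children = tree
    if ref_word not in _leaves(tree):
        return None
    # Prefer a tighter clause found deeper in the tree.
    for child in children:
        if isinstance(child, tuple):
            deeper = reference_scope(child, ref_word, clause_labels)
            if deeper is not None:
                return deeper
    return _leaves(tree) if label in clause_labels else None
```

On a sentence with an embedded clause, the embedded S containing the reference is returned rather than the whole sentence.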
parse trees is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Berg-Kirkpatrick, Taylor and Gillick, Dan and Klein, Dan
Efficient Prediction
Variables z indicate edges in the parse tree that have been cut in order to remove subtrees.
Joint Model
Variables yn indicate the presence of parse tree nodes.
Joint Model
We represent a compressive summary as a vector y = (yn : n ∈ s, s ∈ c) of indicators, one for each nonterminal node in each parse tree of the sentences in the document set c. A word is present in the output summary if and only if its parent parse tree node n has yn = 1 (see Figure 1b).
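The indicator representation can be sketched as follows; the (node_id, label, children) encoding is an illustrative assumption, and the subtree-consistency requirement (a node may be on only if its parent is on) is enforced by the model's constraints rather than by this function:

```python
# Sketch: a word survives in the compressive summary iff its parent
# (preterminal) node indicator is on; cutting a subtree amounts to
# zeroing the indicators of its nodes. The tree encoding here is an
# illustrative assumption, not the paper's data structure.

def summary_words(tree, y):
    """Words of the summary encoded by 0/1 indicators y over node ids
    of a (node_id, label, children) parse tree."""
    node_id, _, children = tree
    words = []
    for child in children:
        if isinstance(child, tuple):
            words.extend(summary_words(child, y))
        elif y.get(node_id, 0) == 1:
            words.append(child)
    return words
```

Zeroing the indicators under the VP of a sentence, for instance, keeps only the subject NP's words in the output.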
parse trees is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Wu, Xianchao and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
Therefore, the proposed approach can not only capture source-tree-to-target-chunk correspondences but can also use forest structures that compactly encode an exponential number of parse trees to properly generate target function words during decoding.
Backgrounds
The forest includes three parse trees rooted at c0, c1, and c2.
Composed Rule Extraction
• C(v): the complement span of v, which is the union of the corresponding spans of nodes v′ that share an identical parse tree with v but are neither antecedents nor descendants of v;
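The complement-span definition can be computed directly from an explicit parent relation; the dictionary-based tree encoding below is our illustrative assumption, not the paper's data structure:

```python
# Sketch: complement span of v = union of the spans of all nodes that
# are neither ancestors nor descendants of v (and are not v itself).
# spans maps node -> set of aligned positions; parents maps node ->
# parent node, with the root mapping to None.

def complement_span(v, spans, parents):
    def ancestors(n):
        out = set()
        while parents[n] is not None:
            n = parents[n]
            out.add(n)
        return out

    anc = ancestors(v)
    comp = set()
    for n in spans:
        if n == v or n in anc or v in ancestors(n):
            continue  # skip v itself, its ancestors, and its descendants
        comp |= spans[n]
    return comp
```

For a node A with sibling B, the complement span of A is exactly B's span, since A's ancestors and A's own subtree are excluded from the union.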
parse trees is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Zhang, Hao and Fang, Licheng and Xu, Peng and Wu, Xiaoyun
Forest-to-string Translation
The search problem is finding the derivation with the highest probability in the space of all derivations for all parse trees for an input sentence.
Introduction
Second, the parse tree is restructured using our binarization algorithm, resulting in a binary packed forest.
Source Tree Binarization
In a correct English parse tree, however, the subject-verb boundary is between “There” and “is”.
parse trees is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: