Index of papers in Proc. ACL 2011 that mention
  • constituent parsing
Bansal, Mohit and Klein, Dan
Abstract
We then integrate our features into full-scale dependency and constituent parsers.
Abstract
We show relative error reductions of 7.0% over the second-order dependency parser of McDonald and Pereira (2006), 9.2% over the constituent parser of Petrov et al.
Analysis
We next investigate the features that were given high weight by our learning algorithm (in the constituent parsing case).
Introduction
For constituent parsing, we rerank the output of the Berkeley parser (Petrov et al., 2006).
Introduction
To show end-to-end effectiveness, we incorporate our features into state-of-the-art dependency and constituent parsers.
Introduction
For constituent parsing, we use a reranking framework (Charniak and Johnson, 2005; Collins and Koo, 2005; Collins, 2000) and show 9.2% relative error reduction over the Berkeley parser baseline.
Parsing Experiments
We then add them to a constituent parser in a reranking approach.
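A minimal sketch of the k-best reranking recipe these excerpts refer to (Collins, 2000): take the base parser's k-best candidates, add new feature scores to the base model score, and return the argmax. The feature extractor below is a hypothetical stand-in, not the paper's actual web-scale features.

```python
def web_count_features(tree):
    # Hypothetical stand-in: in the paper these would be web-scale
    # count features computed over the candidate parse.
    return {"example_feature": 1.0}

def rerank(kbest, base_scores, weights):
    """Return the candidate tree with the highest combined score.

    kbest       -- candidate parse trees from the base parser
    base_scores -- the base parser's model score for each candidate
    weights     -- learned weights, keyed by feature name
    """
    def combined(tree, base):
        feats = web_count_features(tree)
        return base + sum(weights.get(name, 0.0) * value
                          for name, value in feats.items())

    best_tree, _ = max(zip(kbest, base_scores),
                       key=lambda pair: combined(*pair))
    return best_tree
```

The base parser's own score enters as just another weighted term, so the reranker can only improve on candidates the base parser already proposes.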
Parsing Experiments
4.2 Constituent Parsing
Parsing Experiments
We also evaluate the utility of web-scale features on top of a state-of-the-art constituent parser, the Berkeley parser (Petrov et al., 2006), an unlexicalized phrase-structure parser.
Web-count Features
For constituent parsers, there can be minor tree variations which can result in the same set of induced dependencies, but these are rare in comparison.
constituent parsing is mentioned in 10 sentences in this paper.
Bodenstab, Nathan and Dunlop, Aaron and Hall, Keith and Roark, Brian
Introduction
Statistical constituent parsers have gradually increased in accuracy over the past ten years.
Introduction
Although syntax is becoming increasingly important for large-scale NLP applications, constituent parsing is slow—too slow to scale to the size of many potential consumer applications.
Introduction
Deterministic algorithms for dependency parsing exist that can extract syntactic dependency structure very quickly (Nivre, 2008), but this approach is often undesirable as constituent parsers are more accurate and more adaptable to new domains (Petrov et al., 2010).
constituent parsing is mentioned in 5 sentences in this paper.
Ponvert, Elias and Baldridge, Jason and Erk, Katrin
CD
5 Constituent parsing with a cascade of chunkers
CD
We use cascades of chunkers for full constituent parsing, building hierarchical constituents bottom-up.
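A minimal sketch of the cascade idea this excerpt describes: run a chunker over the current sequence, replace each chunk found with a single node, and repeat until the sequence reduces to one constituent. The chunker here is a hypothetical stand-in; the paper's chunkers are learned in an unsupervised fashion.

```python
def cascade_parse(tokens, chunker):
    """Build a bracketing bottom-up by iterated chunking.

    tokens  -- list of terminals (or nodes from earlier rounds)
    chunker -- function mapping a sequence to non-overlapping
               (start, end) spans, each covering >= 2 items, so
               every round strictly shrinks the sequence
    """
    sequence = list(tokens)
    while len(sequence) > 1:
        spans = chunker(sequence)
        if not spans:              # no chunks found: stop with the
            return sequence        # partial bracketing built so far
        merged, i = [], 0
        for start, end in spans:
            merged.extend(sequence[i:start])           # unchunked items
            merged.append(tuple(sequence[start:end]))  # chunk -> one node
            i = end
        merged.extend(sequence[i:])
        sequence = merged
    return sequence[0]
```

Each round only sees the reduced sequence from the round below, which is what makes the resulting constituents hierarchical.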
Data
We use the standard data sets for unsupervised constituency parsing research: for English, the Wall Street Journal subset of the Penn Treebank-3 (WSJ, Marcus et al.
Introduction
This result suggests that improvements to low-level constituent prediction will ultimately lead to further gains in overall constituent parsing .
Tasks and Benchmark
Importantly, until recently it was the only unsupervised raw text constituent parser to produce results competitive with systems which use gold POS tags (Klein and Manning, 2002; Klein and Manning, 2004; Bod, 2006) — and the recent improved raw-text parsing results of Reichart and Rappoport (2010) make direct use of CCL without modification.
constituent parsing is mentioned in 5 sentences in this paper.
Berg-Kirkpatrick, Taylor and Gillick, Dan and Klein, Dan
Experiments
Constituency parses were produced using the Berkeley parser (Petrov and Klein, 2007).
Joint Model
In our complete model, which jointly extracts and compresses sentences, we choose whether or not to cut individual subtrees in the constituency parses
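A minimal sketch of subtree deletion as a compression operation, assuming trees encoded as (label, children) tuples with string leaves: drop every subtree a predicate rejects and read the compressed sentence off the remaining leaves. The keep() predicate is a hypothetical stand-in; in the paper this choice is made jointly with sentence extraction under a learned model.

```python
def compress(tree, keep):
    """Remove every subtree for which keep() is False.

    tree -- (label, children) tuple for internal nodes, str for leaves
    keep -- predicate deciding whether a subtree survives
    """
    if isinstance(tree, str):      # leaf: a word
        return tree
    label, children = tree
    kept = [compress(c, keep) for c in children if keep(c)]
    return (label, kept)

def yield_words(tree):
    """Read the compressed sentence off the tree's leaves."""
    if isinstance(tree, str):
        return [tree]
    _, children = tree
    return [w for c in children for w in yield_words(c)]
```

Because whole subtrees are cut rather than individual words, every compression is guaranteed to respect the constituent structure of the original sentence.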
Joint Model
Assume a constituency parse t_s for every sentence s.
Joint Model
While we use constituency parses rather than dependency parses, this model has similarities with the vine-growth model of Daume III (2006).
constituent parsing is mentioned in 4 sentences in this paper.