Index of papers in Proc. ACL 2011 that mention
  • Berkeley parser
Bansal, Mohit and Klein, Dan
Analysis
Default: we ran the Berkeley parser in its default ‘fast’ mode; the output k-best lists are ordered by max-rule-score.
Analysis
Table 3: Parsing results for reranking 50-best lists of the Berkeley parser (Dev is WSJ section 22 and Test is WSJ section 23, all lengths).
Introduction
For example, in the Berkeley parser (Petrov et al., 2006), about 20% of the errors are prepositional phrase attachment errors as in Figure 1, where a preposition-headed (IN) phrase was assigned an incorrect parent in the implied dependency tree.
Introduction
Here, the Berkeley parser (solid blue edges) incorrectly attaches from debt to the noun phrase $ 30 billion whereas the correct attachment (dashed gold edges) is to the verb raising.
Introduction
Figure 1: A PP attachment error in the parse output of the Berkeley parser (on Penn Treebank).
Parsing Experiments
We also evaluate the utility of web-scale features on top of a state-of-the-art constituent parser, the Berkeley parser (Petrov et al., 2006), an unlexicalized phrase-structure parser.
Parsing Experiments
Our baseline system is the Berkeley parser, from which we obtain k-best lists for the development set (WSJ section 22) and test set (WSJ section 23) using a grammar trained on all the training data (WSJ sections 2-21). To get k-best lists for the training set, we use 3-fold jackknifing where we train a grammar
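The jackknifing step quoted above is a standard recipe: split the training sections into folds, train a grammar on all but one fold, and parse the held-out fold, so that no sentence is parsed by a grammar trained on its own gold tree. A minimal sketch under that reading, with train_grammar and parse_kbest as hypothetical stand-ins rather than the Berkeley parser's actual interface:

    def jackknife_kbest(train_data, train_grammar, parse_kbest, k=50, folds=3):
        """Produce a k-best list for every training sentence via jackknifing.

        train_data: list of (sentence, gold_tree) pairs.
        train_grammar: hypothetical callable, list of gold trees -> grammar.
        parse_kbest: hypothetical callable, (grammar, sentence, k) -> k-best list.
        """
        kbest = [None] * len(train_data)
        for f in range(folds):
            # Train on the other folds only, then parse the held-out fold.
            grammar = train_grammar([tree for i, (_, tree) in enumerate(train_data)
                                     if i % folds != f])
            for i in range(f, len(train_data), folds):
                sentence, _ = train_data[i]
                kbest[i] = parse_kbest(grammar, sentence, k)
        return kbest

Each training sentence thus receives a k-best list from a grammar that never saw its gold tree, mirroring the conditions the reranker faces on unseen data.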
Parsing Experiments
Table 2: Oracle F1-scores for k-best lists output by Berkeley parser for English WSJ parsing (Dev is section 22 and Test is section 23, all lengths).
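Tables 2 and 3 together describe a rerank-from-k-best setup: the base parser supplies candidate parses (with an oracle upper bound), and a feature-based scorer selects one. As a rough sketch of the selection step only, assuming a hypothetical feature_score callable standing in for the paper's web-scale feature model:

    def rerank(kbest, feature_score, weight=1.0):
        """Return the candidate tree maximizing base score plus weighted feature score.

        kbest: list of (tree, base_score) pairs from the base parser.
        feature_score: hypothetical callable mapping a tree to a reranker score.
        """
        best_tree, _ = max(kbest, key=lambda cand: cand[1] + weight * feature_score(cand[0]))
        return best_tree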
Berkeley parser is mentioned in 10 sentences in this paper.
Bodenstab, Nathan and Dunlop, Aaron and Hall, Keith and Roark, Brian
Abstract
We demonstrate that our method is faster than coarse-to-fine pruning, exemplified in both the Charniak and Berkeley parsers, by empirically comparing our parser to the Berkeley parser using the same grammar and under identical operating conditions.
Conclusion and Future Work
We run the Berkeley parser with the default search parameterization to achieve the fastest possible parsing time.
Conclusion and Future Work
Using this framework, we have shown that we can decrease parsing time by 65% over a standard beam-search without any loss in accuracy, and parse significantly faster than both the Berkeley parser and Chart Constraints.
Results
Both our parser and the Berkeley parser are written in Java, both are run with Viterbi decoding, and both parse with the same grammar, so a direct comparison of speed and accuracy is fair.
Berkeley parser is mentioned in 4 sentences in this paper.