Index of papers in Proc. ACL 2009 that mention
  • Berkeley parser
Cheung, Jackie Chi Kit and Penn, Gerald
A Latent Variable Parser
For our experiments, we used the latent variable-based Berkeley parser (Petrov et al., 2006).
A Latent Variable Parser
The Berkeley parser automates the process of finding such distinctions.
A Latent Variable Parser
The Berkeley parser has been applied to the TüBa-D/Z corpus in the constituent parsing shared task of the ACL-2008 Workshop on Parsing German (Petrov and Klein, 2008), achieving an F1-measure of 85.10% and 83.18% with and without gold standard POS tags respectively.
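The F1 figures quoted above combine labeled bracket precision and recall in the usual Parseval way. A minimal sketch of the computation (the bracket counts below are hypothetical, for illustration only; the 85.10% / 83.18% values are the paper's own results):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (Parseval-style F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical bracket counts, not from the cited paper.
matched, predicted, gold = 170, 200, 200
p = matched / predicted  # labeled precision
r = matched / gold       # labeled recall
print(round(f1(p, r), 4))
```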
Abstract
We report the results of topological field parsing of German using the unlexicalized, latent variable-based Berkeley parser (Petrov et al., 2006). Without any language- or model-dependent adaptation, we achieve state-of-the-art results on the TüBa-D/Z corpus, and a modified NEGRA corpus that has been automatically annotated with topological fields (Becker and Frank, 2002).
Introduction
To facilitate comparison with previous work, we also conducted experiments on a modified NEGRA corpus that has been automatically annotated with topological fields (Becker and Frank, 2002), and found that the Berkeley parser outperforms the method described in that work.
Introduction
This model includes several enhancements, which are also found in the Berkeley parser.
Introduction
DTR is comparable to the idea of latent variable grammars on which the Berkeley parser is based, in that both consider the observed treebank to be less than ideal and both attempt to refine it by splitting and merging nonterminals.
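The nonterminal splitting mentioned here can be illustrated with a toy sketch: each occurrence of a coarse symbol is replaced by latent subcategories, and every rule involving it is expanded into all combinations of the new subsymbols. The function name and toy grammar below are illustrative assumptions, not the Berkeley parser's actual implementation:

```python
import itertools

def split_nonterminal(rules, symbol, n=2):
    """Replace `symbol` with latent subcategories symbol_0..symbol_{n-1},
    expanding each rule into every combination of the new subsymbols."""
    variants = [f"{symbol}_{i}" for i in range(n)]
    new_rules = []
    for lhs, rhs in rules:
        lhs_opts = variants if lhs == symbol else [lhs]
        rhs_opts = [variants if s == symbol else [s] for s in rhs]
        for new_lhs in lhs_opts:
            for new_rhs in itertools.product(*rhs_opts):
                new_rules.append((new_lhs, new_rhs))
    return new_rules

# Toy grammar: splitting NP expands each rule mentioning NP.
toy = [("S", ("NP", "VP")), ("NP", ("DT", "NN"))]
print(len(split_nonterminal(toy, "NP")))  # 2 variants of each rule -> 4
```

In the actual latent variable grammar, the probabilities of the split rules are then re-estimated with EM, and splits that do not help are merged back; this sketch shows only the symbolic expansion step.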
Berkeley parser is mentioned in 15 sentences in this paper.
Topics mentioned in this paper:
Clark, Stephen and Curran, James R.
Abstract
We show that the conversion is extremely difficult to perform, but are able to fairly compare the parsers on a representative subset of the PTB test section, obtaining results for the CCG parser that are statistically no different to those for the Berkeley parser.
Conclusion
One question that is often asked of the CCG parsing work is "Why not convert back into the PTB representation and perform a Parseval evaluation?" By showing how difficult the conversion is, we believe that we have finally answered this question, as well as demonstrating comparable performance with the Berkeley parser.
Evaluation
The Berkeley parser (Petrov and Klein, 2007) provides performance close to the state-of-the-art for the PTB parsing task, with reported F-scores of around 90%.
Evaluation
As can be seen from the scores, these sentences form a slightly easier subset than the full Section 00, but this is a subset which can be used for a fair comparison against the Berkeley parser, since the conversion process is not lossy for this subset.
Evaluation
We compare the CCG parser to the Berkeley parser using the accurate mode of the Berkeley parser, together with the model supplied with the publicly available version.
Introduction
The PTB parser we use for comparison is the publicly available Berkeley parser (Petrov and Klein, 2007).
Berkeley parser is mentioned in 9 sentences in this paper.
Topics mentioned in this paper: