Index of papers in Proc. ACL 2014 that mention
  • Berkeley parser
Hall, David and Durrett, Greg and Klein, Dan
Annotations
While we do not do as well as the Berkeley parser, we will see in Section 6 that our parser does a substantially better job of generalizing to other languages.
Other Languages
We show that this is indeed the case: on nine languages, our system is competitive with or better than the Berkeley parser, which is the best single
Other Languages
We compare to the Berkeley parser (Petrov and Klein, 2007) as well as two variants.
Other Languages
(2013) (Berkeley-Rep), which is their best single parser. The “Replaced” system modifies the Berkeley parser by replacing rare words with morphological descriptors of those words computed using language-specific modules, which have been handcrafted for individual languages or are trained with additional annotation layers in the treebanks that we do not exploit.
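As a rough illustration of this rare-word replacement idea, here is a minimal Python sketch. The frequency cutoff and descriptor features are invented for illustration, unlike the hand-crafted, language-specific modules the excerpt describes:

```python
from collections import Counter

RARE_THRESHOLD = 2  # assumed cutoff for this toy; real systems tune this

def morph_descriptor(word):
    """Build a coarse descriptor from surface clues (capitalization, digits, suffix)."""
    feats = ["UNK"]
    if word[0].isupper():
        feats.append("CAP")
    if any(ch.isdigit() for ch in word):
        feats.append("NUM")
    if len(word) > 3:
        feats.append("SUF-" + word[-3:].lower())
    return "-".join(feats)

def replace_rare(sentences):
    """Replace words seen fewer than RARE_THRESHOLD times with descriptors."""
    counts = Counter(w for sent in sentences for w in sent)
    return [[w if counts[w] >= RARE_THRESHOLD else morph_descriptor(w)
             for w in sent] for sent in sentences]

print(replace_rare([["the", "cat", "sat"], ["the", "Quokka", "sat"]]))
# [['the', 'UNK', 'sat'], ['the', 'UNK-CAP-SUF-kka', 'sat']]
```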
Berkeley parser is mentioned in 13 sentences in this paper.
Li, Zhenghua and Zhang, Min and Chen, Wenliang
Experiments and Analysis
Default parameter settings are used for training ZPar and Berkeley Parser.
Experiments and Analysis
For Berkeley Parser, we use the model after 5 split-merge iterations to avoid over-fitting the training data, as recommended in the manual.
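A sketch of how such a training run might be launched from Python. The GrammarTrainer entry point and the -SMcycles option follow the Berkeley Parser's documented command line, but treat the flag names, jar name, and paths as assumptions to check against your local copy:

```python
# All names below are assumptions to verify against your installed copy.
import subprocess

cmd = [
    "java", "-Xmx4g",
    "-cp", "BerkeleyParser.jar",                 # placeholder jar path
    "edu.berkeley.nlp.PCFGLA.GrammarTrainer",    # trainer entry point
    "-path", "ctb-train.mrg",                    # placeholder treebank file
    "-treebank", "SINGLEFILE",
    "-SMcycles", "5",                            # 5 split-merge iterations, as in the excerpt
    "-out", "grammar-5sm.gr",                    # placeholder output grammar
]
subprocess.run(cmd, check=True)
```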
Experiments and Analysis
The phrase-structure outputs of Berkeley Parser are converted into dependency structures using the same head-finding rules.
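A toy sketch of that conversion step: heads percolate up the tree via a (here, drastically abbreviated) head-rule table, and every non-head child contributes one dependency arc:

```python
# Minimal head-finding sketch. The two-entry rule table is illustrative;
# real conversions use full head-rule sets (e.g., Penn2Malt-style rules).

# A tree is (label, [children]) for nonterminals or (tag, word_index) for leaves.
HEAD_RULES = {          # label -> preferred child labels, searched left to right
    "S":  ["VP", "NP"],
    "VP": ["VBD", "VB", "VP"],
    "NP": ["NN", "NNS", "NP"],
}

def find_head(label, children):
    """Pick the index of the head child among (child_label, head_idx) pairs."""
    for want in HEAD_RULES.get(label, []):
        for i, (child_label, _) in enumerate(children):
            if child_label == want:
                return i
    return 0  # fallback: leftmost child

def extract_deps(tree, arcs):
    """Return the head word index of `tree`, appending (head, dependent) arcs."""
    label, body = tree
    if isinstance(body, int):       # leaf: (tag, word index)
        return body
    heads = [(child[0], extract_deps(child, arcs)) for child in body]
    h = heads[find_head(label, heads)][1]
    for _, dep in heads:
        if dep != h:
            arcs.append((h, dep))
    return h

# ( S ( NP ( NN 0 ) ) ( VP ( VBD 1 ) ( NP ( NN 2 ) ) ) )
tree = ("S", [("NP", [("NN", 0)]), ("VP", [("VBD", 1), ("NP", [("NN", 2)])])])
arcs = []
root = extract_deps(tree, arcs)
print(root, arcs)   # 1 [(1, 2), (1, 0)]
```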
Berkeley parser is mentioned in 15 sentences in this paper.
Andreas, Jacob and Klein, Dan
Experimental setup
We use the Maryland implementation of the Berkeley parser as our baseline for the kernel-smoothed lexicon, and the Maryland featured parser as our baseline for the embedding-featured lexicon. For all experiments, we use 50-dimensional word embeddings.
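As a sketch of what a kernel-smoothed lexicon can look like, the toy model below borrows tag counts from embedding-space neighbors through a Gaussian kernel; the vectors, counts, and bandwidth are stand-ins rather than Andreas and Klein's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "a", "these", "cat"]
emb = {w: rng.normal(size=50) for w in vocab}   # stand-in 50-dim embeddings
tag_counts = {                                   # toy observed (word, tag) counts
    "the": {"DT": 90}, "a": {"DT": 40}, "these": {"DT": 3}, "cat": {"NN": 12},
}
BANDWIDTH = 5.0  # assumed kernel bandwidth

def smoothed_tag_scores(word):
    """Smooth tag counts for `word` over all vocabulary items by embedding distance."""
    scores = {}
    for other in vocab:
        d = np.linalg.norm(emb[word] - emb[other])
        k = np.exp(-(d ** 2) / (2 * BANDWIDTH ** 2))  # Gaussian kernel weight
        for tag, c in tag_counts[other].items():
            scores[tag] = scores.get(tag, 0.0) + k * c
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

print(smoothed_tag_scores("these"))  # DT mass borrowed from "the"/"a" neighbors
```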
Experimental setup
For each training corpus size we also choose a different setting of the number of splitting iterations over which the Berkeley parser is run; for 300 sentences this is two splits, and for
Parser extensions
For the experiments in this paper, we will use the Berkeley parser (Petrov and Klein, 2007) and the related Maryland parser (Huang and Harper, 2011).
Parser extensions
The Berkeley parser induces a latent, state-split PCFG in which each symbol V of the (observed) X-bar grammar is refined into a set of more specific symbols {V1, V2, ...}.
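A toy sketch of a single splitting step, with random symmetry-breaking noise standing in for the EM refitting that the real parser performs between splits:

```python
import itertools, random

random.seed(0)
rules = {("S", ("NP", "VP")): 1.0, ("NP", ("DT", "NN")): 1.0}

def split_symbol(sym):
    """Refine symbol V into V_1, V_2 (one split; the real parser iterates)."""
    return [f"{sym}_1", f"{sym}_2"]

def split_grammar(rules):
    new_rules = {}
    for (lhs, rhs), p in rules.items():
        for new_lhs in split_symbol(lhs):
            for new_rhs in itertools.product(*(split_symbol(s) for s in rhs)):
                noise = 1.0 + random.uniform(-0.01, 0.01)  # break symmetry
                new_rules[(new_lhs, new_rhs)] = p * noise / 2 ** len(rhs)
    return new_rules

print(len(split_grammar(rules)))  # each binary rule yields 2 * 2**2 = 8 variants -> 16
```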
Parser extensions
First, these parsers are among the best in the literature, with a test performance of 90.7 F1 for the baseline Berkeley parser on the Wall Street Journal corpus (compared to 90.4 for Socher et al.
Three possible benefits of word embeddings
These are precisely the kinds of distinctions between determiners for which state-splitting in the Berkeley parser has been shown to be useful (Petrov and Klein, 2007), and existing work (Mikolov et al., 2013b) has observed that such regular embedding structure extends to many other parts of speech.
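One way to see the claimed regularity is to compare vector offsets between related determiner pairs. The sketch below uses random stand-in vectors, so it only shows the computation; with real embeddings (e.g., word2vec) the printed similarity would be close to 1:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
E = {w: rng.normal(size=50) for w in ["this", "these", "that", "those"]}

# If embeddings encode a singular/plural axis, these offsets should align.
offset_a = E["these"] - E["this"]
offset_b = E["those"] - E["that"]
print(cosine(offset_a, offset_b))  # high for real embeddings; near 0 for random
```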
Berkeley parser is mentioned in 7 sentences in this paper.
Cai, Jingsheng and Utiyama, Masao and Sumita, Eiichiro and Zhang, Yujie
Experiments
The Berkeley Parser (Petrov et al., 2006) was employed for parsing the Chinese sentences.
Experiments
For training the Berkeley Parser, we used the Chinese Treebank (CTB) 7.0.
Experiments
We conducted our dependency-based pre-ordering experiments on the Berkeley Parser and the Mate Parser (Bohnet, 2010), which were shown to be the two best parsers for Stanford typed dependencies (Che et al., 2012).
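A generic sketch of dependency-based pre-ordering: dependents of each head are re-emitted according to a label-priority table. The table and labels below are invented for illustration; actual systems hand-craft or learn rules per language pair:

```python
# token: (index, word, head_index, deprel); head_index -1 marks the root
SENT = [(0, "he", 1, "nsubj"), (1, "ate", -1, "root"),
        (2, "quickly", 1, "advmod"), (3, "apples", 1, "dobj")]

PRIORITY = {"nsubj": 0, "advmod": 1, "HEAD": 2, "dobj": 3}  # assumed target order

def preorder(tokens, head=-1):
    """Emit the words of `head`'s subtree in the rule-defined order."""
    kids = [t for t in tokens if t[2] == head]
    if head >= 0:
        kids.append((head, tokens[head][1], None, "HEAD"))  # head word itself
    kids.sort(key=lambda t: (PRIORITY.get(t[3], 99), t[0]))
    out = []
    for t in kids:
        out.extend([t[1]] if t[3] == "HEAD" else preorder(tokens, t[0]))
    return out

print(preorder(SENT))  # ['he', 'quickly', 'ate', 'apples']
```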
Berkeley parser is mentioned in 7 sentences in this paper.
Duan, Manjuan and White, Michael
Analysis and Discussion
Given that with the more refined SVM ranker, the Berkeley parser worked nearly as well as all three parsers together using the complete feature set, the prospects for future work on a more realistic scenario using the OpenCCG parser in an SVM ranker for self-monitoring now appear much more promising, either using OpenCCG’s reimplementation of Hockenmaier & Steedman’s generative CCG model, or using the Berkeley parser trained on OpenCCG’s enhanced version of the CCGbank, along the lines of Fowler and Penn (2010).
Reranking with SVMs 4.1 Methods
Finally, since the Berkeley parser yielded the best results on its own, we also tested models using all the feature classes but only using this parser by itself.
Reranking with SVMs 4.1 Methods
Somewhat surprisingly, the Berkeley parser did as well as all three parsers using just the overall precision and recall features, but not quite as well using all features.
Simple Reranking
We chose the Berkeley parser (Petrov et al., 2006), Brown parser (Charniak and Johnson, 2005) and Stanford parser (Klein and Manning, 2003) to parse the realizations generated by the
Simple Reranking
Simple ranking with the Berkeley parser of the generative model’s n-best realizations raised the BLEU score from 85.55 to 86.07, well below the averaged perceptron model’s BLEU score of 87.93.
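A sketch of this style of simple ranking: each candidate realization is parsed and scored by dependency F1 against the intended dependencies, and the top scorer is kept. The parse function here is a stand-in lookup, not a real parser call:

```python
def f1(gold, pred):
    """Dependency F1 between two arc sets."""
    gold, pred = set(gold), set(pred)
    if not gold or not pred:
        return 0.0
    p = len(gold & pred) / len(pred)
    r = len(gold & pred) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def rerank(nbest, gold_deps, parse):
    """Pick the realization whose parse best recovers the intended deps."""
    return max(nbest, key=lambda s: f1(gold_deps, parse(s)))

gold = [("ate", "he"), ("ate", "apples")]
fake_parse = {"he ate apples": gold,                                # stand-in parser output
              "apples ate he": [("ate", "apples"), ("he", "apples")]}
print(rerank(list(fake_parse), gold, fake_parse.get))  # "he ate apples"
```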
Berkeley parser is mentioned in 5 sentences in this paper.
Hall, David and Berg-Kirkpatrick, Taylor and Klein, Dan
Conclusion
The Berkeley parser’s grammars—by virtue of being unlexicalized—can be applied uniformly to all parse items.
Introduction
Their system uses a grammar based on the Berkeley parser (Petrov and Klein, 2007) (which is particularly amenable to GPU processing), “compiling” the grammar into a sequence of GPU kernels that are applied densely to every item in the parse chart.
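A NumPy sketch of why unlexicalized grammars suit this dense approach: a single rule tensor can be contracted against every pair of child chart items, which is exactly the kind of uniform work a GPU kernel wants. Symbol and sentence sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 8            # number of refined symbols (toy; real grammars have far more)
N = 5            # sentence length
rule = rng.random((S, S, S)) * 0.1        # score[A, B, C] for rule A -> B C

# inside[i][j] is a length-S score vector for span (i, j), j exclusive
inside = [[np.zeros(S) for _ in range(N + 1)] for _ in range(N + 1)]
for i in range(N):
    inside[i][i + 1] = rng.random(S)      # stand-in lexical scores

for width in range(2, N + 1):
    for i in range(N - width + 1):
        j = i + width
        total = np.zeros(S)
        for k in range(i + 1, j):
            # one dense contraction scores ALL parent symbols at once
            total += np.einsum("abc,b,c->a", rule, inside[i][k], inside[k][j])
        inside[i][j] = total

print(inside[0][N])  # inside scores for the whole sentence
```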
Minimum Bayes risk parsing
It is of course important to verify the correctness of our system; one easy way to do so is to examine parsing accuracy, as compared to the original Berkeley parser.
Minimum Bayes risk parsing
These results are nearly identical to the Berkeley parser’s most comparable numbers: 89.8 for Viterbi, and 90.9 for their “Max-Rule-Sum” MBR algorithm.
Berkeley parser is mentioned in 4 sentences in this paper.