Index of papers in Proc. ACL 2009 that mention
  • parsing models
Zhang, Yi and Wang, Rui
Abstract
The dependency backbone of an HPSG analysis is used to provide general linguistic insights which, when combined with state-of-the-art statistical dependency parsing models, achieve performance improvements on out-of-domain tests.
Dependency Parsing with HPSG
One is to extract the dependency backbone from the HPSG analyses of the sentences and directly convert it into the target representation; the other is to encode the HPSG outputs as additional features in the existing statistical dependency parsing models.
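The second strategy amounts to adding backbone-agreement features to the parser's per-arc feature set. A minimal sketch in Python, assuming a graph-based (edge-factored) parser; all helper names and feature strings here are hypothetical illustrations, not taken from the paper:

    # Sketch: augment edge features of a graph-based dependency parser
    # with HPSG dependency-backbone information (hypothetical names).

    def hpsg_backbone_features(head, dep, backbone):
        """Extra features recording whether the HPSG backbone also
        proposes an arc from `head` to `dep`, and with which label."""
        if (head, dep) in backbone:
            return ["HPSG_ARC=1", "HPSG_LABEL=" + backbone[(head, dep)]]
        return ["HPSG_ARC=0"]

    def edge_features(sentence, head, dep, backbone):
        # Baseline features plus the HPSG-derived ones.
        base = ["HEAD_POS=" + sentence[head]["pos"],
                "DEP_POS=" + sentence[dep]["pos"]]
        return base + hpsg_backbone_features(head, dep, backbone)

    # Usage: `backbone` maps (head_index, dep_index) -> dependency label,
    # e.g. {(2, 1): "SBJ"}, extracted from the HPSG analysis.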
Experiment Results & Error Analyses
To evaluate the performance of our different dependency parsing models, we tested our approaches on several dependency treebanks for English in a similar spirit to the CoNLL 2006–2008 Shared Tasks.
Experiment Results & Error Analyses
The larger part is converted from the Penn Treebank Wall Street Journal Sections #2–#21 and is used for training statistical dependency parsing models; the smaller part, which covers sentences from Section #23, is used for testing.
Introduction
In combination with machine learning methods, several statistical dependency parsing models have reached comparably high parsing accuracy (McDonald et al., 2005b; Nivre et al., 2007b).
Parser Domain Adaptation
Granting the differences between their approaches, both systems rely heavily on machine learning methods to estimate the parsing model from an annotated corpus used as a training set.
Parser Domain Adaptation
Due to the heavy cost of developing high-quality, large-scale syntactically annotated corpora, even for a resource-rich language like English, only very few of them meet the criteria for training a general-purpose statistical parsing model.
Parser Domain Adaptation
Figure 1: Different dependency parsing models and their combinations.
"parsing models" is mentioned in 8 sentences in this paper.
Ganchev, Kuzman and Gillenwater, Jennifer and Taskar, Ben
Approach
After filtering to identify well-behaved sentences and high-confidence projected dependencies, we learn a probabilistic parsing model using the posterior regularization framework (Graça et al., 2008).
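For context, posterior regularization is usually written as a penalized likelihood: the model posterior is pulled toward a constraint set Q defined by expectations of constraint features. A schematic form of the objective (the constraint features φ and bound b are generic placeholders here, not taken from this paper):

    \[
    J(\theta) \;=\; \mathcal{L}(\theta) \;-\; \min_{q \in Q} \mathrm{KL}\big(q(z) \,\|\, p_\theta(z \mid x)\big),
    \qquad
    Q \;=\; \{\, q : \mathbb{E}_q[\phi(x, z)] \le b \,\}
    \]

where L(θ) is the marginal log-likelihood of the observed data.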
Experiments
We conducted experiments on two languages: Bulgarian and Spanish, using each of the parsing models.
Parsing Models
We explored two parsing models: a generative model used by several authors for unsupervised induction, and a discriminative model used for fully supervised training.
Parsing Models
The parsing model defines a conditional distribution $p_\theta(z \mid x)$ over each projective parse tree $z$ for a particular sentence $x$, parameterized by a vector $\theta$.
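In the discriminative, edge-factored case this typically takes the standard log-linear form; a sketch of that common parameterization (not necessarily the paper's exact one):

    \[
    p_\theta(z \mid x) \;=\; \frac{\exp\big(\theta^\top \phi(x, z)\big)}{\sum_{z'} \exp\big(\theta^\top \phi(x, z')\big)},
    \qquad
    \phi(x, z) \;=\; \sum_{(h, m) \in z} \phi_{\mathrm{edge}}(x, h, m)
    \]

where the denominator sums over all projective trees $z'$ for $x$ and $\phi_{\mathrm{edge}}$ collects per-arc features.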
"parsing models" is mentioned in 4 sentences in this paper.
Ge, Ruifang and Mooney, Raymond
Ensuring Meaning Composition
Both subtasks require a training set of NLs paired with their MRs. Each NL sentence also requires a syntactic parse generated using Bikel's (2004) implementation of Collins' parsing model 2.
Experimental Evaluation
First, Bikel's implementation of Collins' parsing model 2 was trained to generate syntactic parses.
Introduction
Ge and Mooney (2005) use training examples with semantically annotated parse trees, and Zettlemoyer and Collins (2005) learn a probabilistic semantic parsing model
"parsing models" is mentioned in 3 sentences in this paper.
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Experiments of Parsing
Finally, we evaluated two parsing models, the generative parser and the reranking parser, on the test set, with results shown in Table 5.
Experiments of Parsing
A possible reason is that most non-perfect parses can provide useful syntactic structure information for building parsing models.
Our Two-Step Solution
After grammar formalism conversion, the problem we now face is reduced to how to build parsing models on multiple homogeneous treebanks.
"parsing models" is mentioned in 3 sentences in this paper.