Index of papers in Proc. ACL 2012 that mention
  • parsing model
Chen, Wenliang and Zhang, Min and Li, Haizhou
Abstract
Most previous graph-based parsing models increase decoding complexity when they use high-order features due to exact-inference decoding.
Abstract
In this paper, we present an approach to enriching high-order feature representations for graph-based dependency parsing models using a dependency language model and beam search.
Abstract
Based on the dependency language model, we represent a set of features for the parsing model.
Dependency language model
In this paper, we use a linear model to calculate the scores for the parsing models (defined in Section 3.1).
Introduction
Among them, graph-based dependency parsing models have achieved state-of-the-art performance for a wide range of languages as shown in recent CoNLL shared tasks
Introduction
The parsing model searches for the final dependency trees by considering the original scores and the scores of DLM.
Introduction
The DLM-based features can capture the N-gram information of the parent-children structures for the parsing model.
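As a rough illustration of the idea in these excerpts, the Python sketch below shows how N-gram scores from a dependency language model over parent-children structures could be bucketed into discrete features for a linear parsing model. The dlm lookup, the bucket thresholds, and the feature names are all assumptions for illustration, not the paper's actual templates.

    from collections import defaultdict

    def dlm_features(heads, words, dlm, n=2):
        # Group each parent's children in surface order.
        children = defaultdict(list)
        for child, head in enumerate(heads):
            if head is not None:
                children[head].append(child)
        feats = []
        for parent, kids in sorted(children.items()):
            context = [words[parent]]
            for kid in sorted(kids):
                # Score the child given the parent and up to (n-1)
                # previously generated siblings, N-gram style.
                history = tuple(context[-(n - 1):]) if n > 1 else ()
                p = dlm.get((history, words[kid]), 1e-9)  # toy lookup
                # Bucket the probability into a coarse class so it can
                # act as a discrete feature (thresholds are made up).
                bucket = "PH" if p > 0.1 else "PM" if p > 0.01 else "PL"
                feats.append(("DLM", bucket, words[parent], words[kid]))
                context.append(words[kid])
        return feats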
parsing model is mentioned in 19 sentences in this paper.
Kolomiyets, Oleksandr and Bethard, Steven and Moens, Marie-Francine
Abstract
We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596.
Corpus Annotation
train a temporal dependency parsing model.
Evaluations
To evaluate the parsing models (SRP and MST) we proposed two baselines.
Evaluations
In terms of labeled attachment score, both dependency parsing models outperformed the baseline models: the maximum spanning tree parser achieved 0.614 LAS, and the shift-reduce parser achieved 0.647 LAS.
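For reference, labeled attachment score is straightforward to compute; a minimal sketch, with the per-token (head, label) data format an assumption:

    def labeled_attachment_score(gold, pred):
        # Fraction of tokens whose predicted head AND relation label
        # both match the gold tree; gold/pred are lists of
        # (head_index, label) pairs, one per token.
        assert len(gold) == len(pred)
        correct = sum(1 for g, p in zip(gold, pred) if g == p)
        return correct / len(gold)

    # e.g. labeled_attachment_score([(0, "root"), (0, "before")],
    #                               [(0, "root"), (0, "after")])  -> 0.5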
Evaluations
These results indicate that dependency parsing models are a good fit to our whole-story timeline extraction task.
Feature Design
The full set of features proposed for both parsing models, derived from the state-of-the-art systems for temporal relation labeling, is presented in Table 2.
Parsing Models
Formally, a parsing model is a function (W → Π) where W = w1w2…
Parsing Models
4.1 Shift-Reduce Parsing Model
Parsing Models
4.2 Graph-Based Parsing Model
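A minimal sketch of the deterministic shift-reduce (arc-standard) loop behind a parser like SRP, where choose_action stands in for the trained classifier; this is illustrative, not the paper's exact transition system:

    def shift_reduce_parse(words, choose_action):
        stack, buffer = [], list(range(len(words)))
        heads = [None] * len(words)
        while buffer or len(stack) > 1:
            action = choose_action(stack, buffer, words)
            if action == "SHIFT" and buffer:
                stack.append(buffer.pop(0))
            elif action == "LEFT_ARC" and len(stack) >= 2:
                dep = stack.pop(-2)   # second-from-top takes top as head
                heads[dep] = stack[-1]
            elif len(stack) >= 2:     # RIGHT_ARC (also the fallback)
                dep = stack.pop()     # top takes second-from-top as head
                heads[dep] = stack[-1]
            else:                     # only legal move left is SHIFT
                stack.append(buffer.pop(0))
        return heads                  # the root keeps head None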
parsing model is mentioned in 14 sentences in this paper.
Li, Zhenghua and Liu, Ting and Che, Wanxiang
Abstract
Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models.
Dependency Parsing
In the current research, we adopt the graph-based parsing models for their state-of-the-art performance in a variety of languages. Graph-based models view the problem as finding the highest scoring tree from a directed graph.
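To make "finding the highest scoring tree from a directed graph" concrete, here is a brute-force sketch of arc-factored decoding. It is exponential and only usable for tiny sentences; real graph-based parsers compute the same argmax with maximum-spanning-tree or Eisner-style algorithms. arc_score(head, dep) is an assumed interface.

    from itertools import product

    def exact_arc_factored_decode(n, arc_score):
        def is_tree(heads):
            # every token must reach the artificial root (0) acyclically
            for tok in range(1, n + 1):
                seen, cur = set(), tok
                while cur != 0:
                    if cur in seen:
                        return False
                    seen.add(cur)
                    cur = heads[cur - 1]
            return True

        best, best_score = None, float("-inf")
        for heads in product(range(n + 1), repeat=n):
            if any(h == tok for tok, h in enumerate(heads, start=1)):
                continue                      # no self-loops
            if not is_tree(heads):
                continue
            score = sum(arc_score(h, tok)
                        for tok, h in enumerate(heads, start=1))
            if score > best_score:
                best, best_score = heads, score
        return best, best_score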
Dependency Parsing
We implement three parsing models of varying strengths in capturing features to better understand the effect of the proposed QG features.
Dependency Parsing
parsing models (Yamada and Matsumoto, 2003; Nivre, 2003) with minor modifications.
Dependency Parsing with QG Features
Figure 4 presents the three kinds of TPs used in our model, which correspond to the three scoring parts of our parsing models.
Dependency Parsing with QG Features
Based on these TPs, we propose the QG features for enhancing the baseline parsing models, which are shown in Table 2.
Dependency Parsing with QG Features
The type of the TP is conjoined with the related words and POS tags, such that the QG-enhanced parsing models can make more elaborate decisions based on the context.
Introduction
Therefore, studies have recently resorted to other resources for the enhancement of parsing models, such as large-scale unlabeled data (Koo et al., 2008; Chen et al., 2009; Bansal and Klein, 2011; Zhou et al., 2011), and bilingual texts or cross-lingual treebanks (Burkett and Klein, 2008; Huang et al., 2009; Burkett et al., 2010; Chen et al., 2010).
Related Work
enhanced parsing models to softly learn the systematic inconsistencies based on QG features, making our approach simpler and more robust.
Related Work
Our approach is also intuitively related to stacked learning (SL), a machine learning framework that has recently been applied to dependency parsing to integrate two mainstream parsing models, i.e., graph-based and transition-based models (Nivre and McDonald, 2008; Martins et al., 2008).
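The stacking idea can be pictured as "guide features": the level-0 parser's output is folded into the level-1 parser's feature vector. A tiny sketch, with feature names that are illustrative rather than the cited systems' actual templates:

    def stacked_features(base_features, level0_heads, token, candidate_head):
        # Augment the level-1 parser's features with signals derived
        # from a level-0 parser's predicted heads.
        feats = dict(base_features)
        feats["level0_agrees"] = int(level0_heads[token] == candidate_head)
        feats["level0_head_offset"] = level0_heads[token] - token
        return feats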
parsing model is mentioned in 14 sentences in this paper.
Chen, Xiao and Kit, Chunyu
Experiment
Our parsing models are evaluated on both English and Chinese treebanks, i.e., the WSJ section of Penn Treebank 3.0 (LDC99T42) and the Chinese Treebank 5.1 (LDC2005T01U01).
Experiment
The parameters θ of each parsing model are estimated from a training set using an averaged perceptron algorithm, following Collins (2002) and Huang (2008).
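The averaged perceptron referred to here follows Collins (2002); a compact sketch with lazy averaging, where decode, features, and the data format are assumed interfaces:

    from collections import defaultdict

    def averaged_perceptron(train, decode, features, epochs=10):
        w, u, t = defaultdict(float), defaultdict(float), 1
        for _ in range(epochs):
            for sentence, gold_tree in train:
                pred_tree = decode(sentence, w)   # current 1-best parse
                if pred_tree != gold_tree:
                    # Promote gold features, demote predicted ones;
                    # u accumulates t-weighted updates for averaging.
                    for f, v in features(sentence, gold_tree).items():
                        w[f] += v
                        u[f] += t * v
                    for f, v in features(sentence, pred_tree).items():
                        w[f] -= v
                        u[f] -= t * v
                t += 1
        # Averaged weights via the standard lazy trick: w - u / t.
        return {f: w[f] - u[f] / t for f in w}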
Experiment
The performance of our first- and higher-order parsing models on all sentences of the two test sets is presented in Table 3, where λ indicates a tuned balance factor.
Higher-order Constituent Parsing
The first feature φ0(Q(r), s) is calculated with a PCFG-based generative parsing model (Petrov and Klein, 2007), as defined in (4) below, where r is the grammar rule instance A → B C that covers the span from the b-th
Higher-order Constituent Parsing
With only lexical features in a part, this parsing model backs off to a first-order one similar to those in the previous works.
Higher-order Constituent Parsing
Adding structural features, each involving at least a neighboring rule instance, makes it a higher-order parsing model.
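The first-order vs higher-order contrast in these excerpts can be sketched as follows: lexical features look at a single rule instance, while structural features conjoin it with a neighboring rule instance. RuleInstance and the feature names are hypothetical, not the paper's templates.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class RuleInstance:          # hypothetical stand-in for A -> B C
        lhs: str
        rhs: Tuple[str, ...]
        head_word: str

    def rule_features(rule, neighbors=()):
        # First-order part: features over one rule instance only.
        feats = [("RULE", rule.lhs, rule.rhs),
                 ("RULE+HEAD", rule.lhs, rule.head_word)]
        # Higher-order part: conjoin with neighboring rule instances.
        for nb in neighbors:
            feats.append(("RULE+NEIGHBOR", rule.lhs, rule.rhs,
                          nb.lhs, nb.rhs))
        return feats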
Introduction
Previous discriminative parsing models usually factor a parse tree into a set of parts.
Introduction
Then, the previous discriminative constituent parsing models (Johnson, 2001; Henderson, 2004; Taskar et al., 2004; Petrov and Klein, 2008a;
parsing model is mentioned in 9 sentences in this paper.
Lippincott, Thomas and Korhonen, Anna and Ó Séaghdha, Diarmuid
Conclusions and future work
Second, simply treating POS tags within a small window of the verb as pseudo-GRs produces state-of-the-art results without the need for a parsing model.
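A minimal sketch of the pseudo-GR trick described here: POS tags in a small window around the verb stand in for parser-derived grammatical relations. The window size of 2 is an assumption.

    def pseudo_grs(pos_tags, verb_index, window=2):
        # Emit (offset, POS) pairs around the verb as parser-free
        # stand-ins for grammatical relations.
        feats = []
        for offset in range(-window, window + 1):
            i = verb_index + offset
            if offset != 0 and 0 <= i < len(pos_tags):
                feats.append((offset, pos_tags[i]))
        return feats

    # e.g. pseudo_grs(["DT", "NN", "VBZ", "DT", "NN"], 2)
    #      -> [(-2, "DT"), (-1, "NN"), (1, "DT"), (2, "NN")]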
Introduction
However, the treebanks necessary for training a high-accuracy parsing model are expensive to build for new domains.
Previous work
These typically rely on language-specific knowledge, either directly through heuristics, or indirectly through parsing models trained on treebanks.
Previous work
Note that both methods require extensive manual work: the Preiss system involves the a priori definition of the SCF inventory, careful construction of matching rules, and an unlexicalized parsing model.
Previous work
The BioLexicon system induces its SCF inventory automatically, but requires a lexicalized parsing model, rendering it more sensitive to domain variation.
Results
Since POS tagging is more reliable and robust across domains than parsing, retraining on new domains will not suffer the effects of a mismatched parsing model (Lippincott et al., 2010).
parsing model is mentioned in 6 sentences in this paper.
Hatori, Jun and Matsuzaki, Takuya and Miyao, Yusuke and Tsujii, Jun'ichi
Abstract
Based on an extension of the incremental joint model for POS tagging and dependency parsing (Hatori et al., 2011), we propose an efficient character-based decoding method that can combine features from state-of-the-art segmentation, POS tagging, and dependency parsing models.
Model
Based on the joint POS tagging and dependency parsing model by Hatori et al.
Model
• TagDep: the joint POS tagging and dependency parsing model (Hatori et al., 2011), where the lookahead features are omitted.
Related Works
Therefore, we place no restriction on the segmentation possibilities to consider, and we assess the full potential of the joint segmentation and dependency parsing model.
Related Works
The incremental framework of our model is based on the joint POS tagging and dependency parsing model for Chinese (Hatori et al., 2011), which is an extension of the shift-reduce dependency parser with dynamic programming (Huang and Sagae, 2010).
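In that spirit, a character-based joint action space can be sketched roughly as below; the state fields and action names are placeholders, not the paper's exact transition system:

    def legal_actions(state, tagset):
        actions = []
        if state.next_char is not None:
            actions.append(("APPEND",))                # extend current word
            actions += [("SHIFT", t) for t in tagset]  # new word + POS tag
        if len(state.stack) >= 2:
            actions += [("LEFT_ARC",), ("RIGHT_ARC",)] # attach word pairs
        return actions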
parsing model is mentioned in 5 sentences in this paper.
Sun, Weiwei and Uszkoreit, Hans
Capturing Syntagmatic Relations via Constituency Parsing
We can see that the Bagging model taking both sequential tagging and chart parsing models as basic systems outperforms the baseline systems and the Bagging model taking either model in isolation as basic systems.
Capturing Syntagmatic Relations via Constituency Parsing
interesting phenomenon is that the Bagging method can also improve the parsing model, but there is a decrease when only combining taggers.
Introduction
and a (syntax-based) chart parsing model.
parsing model is mentioned in 3 sentences in this paper.