Index of papers in Proc. ACL 2014 that mention
  • parsing model
Ma, Xuezhe and Xia, Fei
Abstract
We train probabilistic parsing models for resource-poor languages by transferring cross-lingual knowledge from a resource-rich language with entropy regularization.
Introduction
We train probabilistic parsing models for resource-poor languages by maximizing a combination of likelihood on parallel data and confidence on unlabeled data.
Our Approach
Central to our approach is a maximum-likelihood learning framework, in which we use an English parser and parallel text to estimate the “transferring distribution” of the target-language parsing model (see Section 2.2 for more details).
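The training criterion described in the excerpts above combines likelihood on parallel data with a confidence (low-entropy) term on unlabeled data. A schematic form of such an entropy-regularized objective (the symbols D_p, D_u, and λ are illustrative, not the paper's exact notation):

```latex
% Schematic entropy-regularized objective: maximize likelihood on
% parallel data D_p while minimizing entropy (i.e., maximizing
% confidence) on unlabeled target-language data D_u; \lambda trades
% the two terms off.
J(\theta) = \sum_{(x,y) \in D_p} \log P_\theta(y \mid x)
          \;-\; \lambda \sum_{x \in D_u} H\!\left(P_\theta(\,\cdot \mid x)\right)
```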
Our Approach
2.1 Edge-Factored Parsing Model
Our Approach
A common strategy to make this parsing model efficiently computable is to factor dependency trees into sets of edges:
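The edge factorization this excerpt alludes to is the standard one for graph-based dependency parsing; in one common notation (assumed here, not necessarily the paper's exact symbols), a tree's score decomposes over its head–modifier edges:

```latex
% Edge-factored score: a dependency tree y for sentence x is scored as
% the sum of its individual edge scores, which makes exact inference
% over trees tractable.
s(x, y) = \sum_{(h,m) \in y} \mathbf{w} \cdot \mathbf{f}(x, h, m)
```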
parsing model is mentioned in 21 sentences in this paper.
Wang, Zhiguo and Xue, Nianwen
Abstract
Third, to enhance the power of parsing models, we enlarge the feature set with nonlocal features and semi-supervised word cluster features.
Introduction
This creates a chicken-and-egg problem that needs to be addressed when designing a parsing model.
Introduction
Second, due to the existence of unary rules in constituent trees, competing candidate parses often have different numbers of actions, and this increases the disambiguation difficulty for the parsing model.
Introduction
With this strategy, parser states and their unary extensions are put into the same beam, so the parsing model can decide whether or not to use unary actions within local decision beams.
Joint POS Tagging and Parsing with Nonlocal Features
To address the drawbacks of the standard transition-based constituent parsing model (described in Section 1), we propose a model to jointly solve POS tagging and constituent parsing with nonlocal features.
Joint POS Tagging and Parsing with Nonlocal Features
This makes the lengths of complete action sequences very different, and the parsing model has to disambiguate among terminal states with varying action sizes.
Joint POS Tagging and Parsing with Nonlocal Features
We find that our new method aligns states with their ru-x extensions in the same beam, so the parsing model can decide whether or not to use ru-x actions within local decision beams.
Related Work
Finally, we enhanced our parsing model by enlarging the feature set with nonlocal features and semi-supervised word cluster features.
Transition-based Constituent Parsing
This section describes the transition-based constituent parsing model, which is the basis of Section 3 and the baseline model in Section 4.
Transition-based Constituent Parsing
2.1 Transition-based Constituent Parsing Model
Transition-based Constituent Parsing
A transition-based constituent parsing model is a quadruple C = (S, T, s0, St), where S is a set of parser states (sometimes called configurations), T is a finite set of actions, s0 is an initialization function that maps each input sentence to a unique initial state, and St ⊆ S is a set of terminal states.
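The quadruple C = (S, T, s0, St) can be made concrete with a toy transition system. The sketch below is purely illustrative, assuming a plain shift–reduce system over unlabeled binary trees, not Wang and Xue's actual action set:

```python
# Minimal sketch of a transition system C = (S, T, s0, St):
# S = reachable State values, T = {shift, reduce_},
# s0 = the initialization function, St = states where is_terminal holds.
from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    stack: tuple   # partially built subtrees
    buffer: tuple  # remaining input words


def s0(sentence):
    """Initialization function: map a sentence to its unique initial state."""
    return State(stack=(), buffer=tuple(sentence))


def shift(state):
    """Move the next buffer word onto the stack."""
    return State(state.stack + (state.buffer[0],), state.buffer[1:])


def reduce_(state):
    """Combine the top two stack items into a single subtree."""
    s = state.stack
    return State(s[:-2] + ((s[-2], s[-1]),), state.buffer)


def is_terminal(state):
    """Terminal states St: empty buffer and one complete tree on the stack."""
    return not state.buffer and len(state.stack) == 1


# One complete action sequence for a three-word sentence.
state = s0(["I", "saw", "her"])
for action in [shift, shift, reduce_, shift, reduce_]:
    state = action(state)
```

Note that different action sequences can reach a terminal state after different numbers of steps, which is exactly the varying-length disambiguation issue the excerpts above describe.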
parsing model is mentioned in 11 sentences in this paper.
Zhang, Meishan and Zhang, Yue and Che, Wanxiang and Liu, Ting
Character-Level Dependency Tree
(2012) also use Zhang and Clark (2010)’s features, the arc-standard and arc-eager character-level dependency parsing models have the same features for joint word segmentation and POS-tagging.
Character-Level Dependency Tree
The first consists of a joint segmentation and POS-tagging model (Zhang and Clark, 2010) and a word-based dependency parsing model using the arc-standard algorithm (Huang et al., 2009).
Character-Level Dependency Tree
The second consists of the same joint segmentation and POS-tagging model and a word-based dependency parsing model using the arc-eager algorithm.
Introduction
For direct comparison with word-based parsers, we incorporate the traditional word segmentation, POS-tagging and dependency parsing stages in our joint parsing models.
Introduction
Experimental results show that the character-level dependency parsing models outperform the word-based methods on all the data sets.
parsing model is mentioned in 12 sentences in this paper.
Krishnamurthy, Jayant and Mitchell, Tom M.
Experiments
To improve comparability, we reimplemented this approach using our parsing model, which has richer features than were used in their paper.
Parser Design
This section describes the Combinatory Categorial Grammar (CCG) parsing model used by ASP.
Parser Design
The parser uses category and relation predicates from a broad-coverage knowledge base both to construct logical forms and to parametrize the parsing model.
Prior Work
The parsing model in this paper is loosely based on C&C (Clark and Curran, 2007b; Clark and Curran, 2007a), a discriminative log-linear model for statistical parsing.
parsing model is mentioned in 4 sentences in this paper.
Lei, Tao and Xin, Yu and Zhang, Yuan and Barzilay, Regina and Jaakkola, Tommi
Experimental Setup
For our parser, we train both a first-order parsing model (as described in Sections 3 and 4) as well as a third-order model.
Problem Formulation
We expect a dependency parsing model to benefit from several aspects of the low-rank tensor scoring.
Problem Formulation
Combined Scoring Our parsing model aims to combine the strengths of both the traditional features from the MST/Turbo parser and the new low-rank tensor features.
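The combined scoring the excerpt describes interpolates a conventional linear feature score with a low-rank tensor score. A schematic form, with symbols assumed from the standard low-rank tensor formulation rather than copied from the paper:

```latex
% Interpolated arc score: a conventional linear term plus a rank-r
% tensor term whose component matrices U, V, W project the head,
% modifier, and arc feature vectors into a shared low-dimensional
% space; \gamma balances the two terms.
s(h \to m) = \gamma \,\langle \theta, \phi_{h \to m} \rangle
           \;+\; (1-\gamma) \sum_{i=1}^{r} [U \phi_h]_i \,[V \phi_m]_i \,[W \phi_{h,m}]_i
```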
parsing model is mentioned in 3 sentences in this paper.
Li, Zhenghua and Zhang, Min and Chen, Wenliang
Experiments and Analysis
(2013) adopt the higher-order parsing model of Carreras (2007), and Suzuki et al.
Introduction
The reason may be that dependency parsing models are prone to amplify previous mistakes during training on self-parsed unlabeled data.
Supervised Dependency Parsing
We adopt the second-order graph-based dependency parsing model of McDonald and Pereira (2006) as our core parser, which incorporates features from the two kinds of subtrees in Fig.
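A second-order graph-based model of this kind scores adjacent-sibling subtrees in addition to single edges. One common way to write the factorization (notation assumed, not the excerpt's own):

```latex
% Second-order factorization: the tree score sums first-order
% head-modifier edge scores and scores over adjacent sibling triples
% (h, m, s), where s is the modifier of h adjacent to m on the same
% side of the head.
s(x, y) = \sum_{(h,m) \in y} s(h, m) \;+\; \sum_{(h,m,s) \in y} s(h, m, s)
```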
parsing model is mentioned in 3 sentences in this paper.