Index of papers in Proc. ACL 2011 that mention
  • dependency parsing
Bansal, Mohit and Klein, Dan
Abstract
We show relative error reductions of 7.0% over the second-order dependency parser of McDonald and Pereira (2006), 9.2% over the constituent parser of Petrov et al.
Analysis
Results are for dependency parsing on the dev set for iters:5,training-k:1.
Introduction
For dependency parsing, we augment the features in the second-order parser of McDonald and Pereira (2006).
Introduction
(2008) smooth the sparseness of lexical features in a discriminative dependency parser by using cluster-based word-senses as intermediate abstractions in
Introduction
For the dependency case, we can integrate them into the dynamic programming of a base parser; we use the discriminatively-trained MST dependency parser (McDonald et al., 2005; McDonald and Pereira, 2006).
Parsing Experiments
We first integrate our features into a dependency parser, where the integration is more natural and pushes all the way into the underlying dynamic program.
Parsing Experiments
4.1 Dependency Parsing
Parsing Experiments
For dependency parsing, we use the discriminatively-trained MSTParser4, an implementation of the first- and second-order MST parsing models of McDonald et al.
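The MST parsers cited in these excerpts decode by finding the maximum-scoring arborescence over per-arc scores. A minimal, self-contained sketch of first-order Chu-Liu/Edmonds decoding over a dense score matrix (illustrative only, not MSTParser's actual implementation) might look like:

```python
import math

def find_cycle(head, n):
    """Return a list of nodes forming a cycle under the head pointers,
    or None if the head assignment is already a tree."""
    color = [0] * n                      # 0 unvisited, 1 on current path, 2 done
    for start in range(1, n):
        if color[start]:
            continue
        path, v = [], start
        while v != 0 and color[v] == 0:
            color[v] = 1
            path.append(v)
            v = head[v]
        if v != 0 and color[v] == 1:     # walked back into the current path
            cycle = path[path.index(v):]
            for u in path:
                color[u] = 2
            return cycle
        for u in path:
            color[u] = 2
    return None

def chu_liu_edmonds(scores):
    """scores[h][d] is the score of arc h -> d; node 0 is the artificial root.
    Returns head[d] for the maximum-scoring arborescence (head[0] unused)."""
    n = len(scores)
    head = [0] * n
    for d in range(1, n):                # greedily pick the best head per word
        head[d] = max((h for h in range(n) if h != d), key=lambda h: scores[h][d])
    cycle = find_cycle(head, n)
    if cycle is None:
        return head
    in_cycle = set(cycle)
    old = [v for v in range(n) if v not in in_cycle]   # root stays at index 0
    c = len(old)                                       # index of the supernode
    new = [[-math.inf] * (c + 1) for _ in range(c + 1)]
    enter, leave = {}, {}
    for i, u in enumerate(old):
        for j, v in enumerate(old):
            if i != j:
                new[i][j] = scores[u][v]
        # entering the cycle from u: cost of swapping out one cycle arc
        best_d = max(cycle, key=lambda d: scores[u][d] - scores[head[d]][d])
        new[i][c] = scores[u][best_d] - scores[head[best_d]][best_d]
        enter[i] = (u, best_d)
        # leaving the cycle toward u: best cycle-internal head
        best_h = max(cycle, key=lambda h: scores[h][u])
        new[c][i] = scores[best_h][u]
        leave[i] = best_h
    sub = chu_liu_edmonds(new)           # recurse on the contracted graph
    result = head[:]                     # cycle nodes keep their cycle arcs...
    for i, u in enumerate(old):
        if u != 0:
            result[u] = leave[i] if sub[i] == c else old[sub[i]]
    break_h, break_d = enter[sub[c]]
    result[break_d] = break_h            # ...except where the cycle is broken
    return result
```

Second-order models score pairs of adjacent arcs as well, which this single-arc sketch omits.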
Web-count Features
pairs, as is standard in the dependency parsing literature (see Figure 3).
dependency parsing is mentioned in 11 sentences in this paper.
Lee, John and Naradowsky, Jason and Smith, David A.
Abstract
Most previous studies of morphological disambiguation and dependency parsing have been pursued independently.
Baselines
For dependency parsing, our baseline is a “pipeline” parser (§4.2) that infers syntax upon the output of the baseline tagger.
Baselines
4.2 Baseline Dependency Parser
Experimental Results
We compare the performance of the pipeline model (§4) and the joint model (§3) on morphological disambiguation and unlabeled dependency parsing.
Experimental Results
6.2 Dependency Parsing
Introduction
To date, studies of morphological analysis and dependency parsing have been pursued more or less independently.
Introduction
Morphological taggers disambiguate morphological attributes such as part-of-speech (POS) or case, without taking syntax into account (Hajič et al., 2001); dependency parsers commonly assume the “pipeline” approach, relying on morphological information as part of the input (Buchholz and Marsi, 2006; Nivre et al., 2007).
Introduction
97% (Toutanova et al., 2003), and that of dependency parsing has reached the low nineties (Nivre et al., 2007).
Previous Work
We know of only one previous attempt in data-driven dependency parsing for Latin (Bamman and Crane, 2008), with the goal of constructing a dynamic lexicon for a digital library.
Previous Work
Parsing is performed using the usual pipeline approach, first with the TreeTagger analyzer (Schmid, 1994) and then with a state-of-the-art dependency parser (McDonald et al., 2005).
dependency parsing is mentioned in 13 sentences in this paper.
Zhou, Guangyou and Zhao, Jun and Liu, Kang and Cai, Li
Abstract
In this paper, we present a novel approach which incorporates the web-derived selectional preferences to improve statistical dependency parsing.
Abstract
Experiments show that web-scale data improves statistical dependency parsing, particularly for long dependency relationships.
Introduction
Dependency parsing is the task of building dependency links between words in a sentence, which has recently gained wide interest in the natural language processing community.
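Concretely, a dependency parse is often encoded as an array of head indices, one per word, with a distinguished artificial root. A toy example (sentence and attachments chosen for illustration only):

```python
# A dependency parse as head indices: token i attaches to tokens[heads[i]],
# with index 0 reserved for the artificial root node.
tokens = ["<ROOT>", "Economic", "news", "had", "little", "effect"]
heads = [None, 2, 3, 0, 5, 3]

# Enumerate the (head, dependent) dependency links this encodes.
arcs = [(tokens[heads[i]], tokens[i]) for i in range(1, len(tokens))]
```

Here "had" attaches to the root, "news" and "effect" attach to "had", and the adjectives attach to the nouns they modify.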
Introduction
With the availability of large-scale annotated corpora such as Penn Treebank (Marcus et al., 1993), it is easy to train a high-performance dependency parser using supervised learning methods.
Introduction
However, current state-of-the-art statistical dependency parsers (McDonald et al., 2005; McDonald and Pereira, 2006; Hall et al., 2006) tend to have
dependency parsing is mentioned in 37 sentences in this paper.
Wu, Xianchao and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
In order to constrain the exhaustive attachments of function words, we limit them to binding to the nearby syntactic chunks yielded by a target dependency parser.
Composed Rule Extraction
In the English-to-Japanese translation test case of the present study, the target chunk set is yielded by a state-of-the-art Japanese dependency parser, Cabocha v0.535 (Kudo and Matsumoto, 2002).
Conclusion
In order to avoid generating too large a derivation forest for a packed forest, we further used chunk-level information yielded by a target dependency parser.
Introduction
In order to constrain the exhaustive attachments of function words, we further limit the function words to bind to their surrounding chunks yielded by a dependency parser.
Related Research
Thus, we focus on the realignment of target function words to source tree fragments and use a dependency parser to limit the attachments of unaligned target words.
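The chunk-binding constraint described in these excerpts can be pictured as attaching each unaligned function word to its nearest target-side chunk. This is an illustrative stand-in, not the paper's actual procedure; chunk spans here are hypothetical (start, end) token offsets, inclusive:

```python
def bind_to_chunk(word_index, chunk_spans):
    """Attach an unaligned function word to the nearest chunk, measured
    by token distance to the chunk span (0 if the word falls inside it).
    Returns the index of the chosen chunk in chunk_spans."""
    def distance(span):
        start, end = span
        if start <= word_index <= end:
            return 0
        return min(abs(word_index - start), abs(word_index - end))
    return min(range(len(chunk_spans)), key=lambda k: distance(chunk_spans[k]))
```

Restricting attachments this way prunes the space of candidate bindings before rule extraction.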
dependency parsing is mentioned in 5 sentences in this paper.
Branavan, S.R.K and Silver, David and Barzilay, Regina
Adding Linguistic Knowledge to the Monte-Carlo Framework
Given a sentence y and its dependency parse q, we model the distribution over predicate labels as:
Adding Linguistic Knowledge to the Monte-Carlo Framework
The feature function used for predicate labeling, on the other hand, operates only on a given sentence and its dependency parse.
Adding Linguistic Knowledge to the Monte-Carlo Framework
It computes features which are the Cartesian product of the candidate predicate label with word attributes such as type, part-of-speech tag, and dependency parse information.
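Such Cartesian-product features simply pair the candidate label with every word attribute. A minimal sketch, with feature-name formatting and attribute names chosen for illustration rather than taken from the paper:

```python
def predicate_label_features(label, word_attrs):
    """Cross a candidate predicate label with each word attribute
    (surface form, POS tag, dependency relation, ...), yielding one
    indicator feature per (label, attribute) combination."""
    return {f"{label}|{name}={value}": 1.0
            for name, value in word_attrs.items()}
```

A linear model then scores each candidate label by the dot product of these indicators with its learned weights.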
dependency parsing is mentioned in 4 sentences in this paper.
Lang, Joel and Lapata, Mirella
Experimental Setup
In addition to gold standard dependency parses, the dataset also contains automatic parses obtained from the MaltParser (Nivre et al., 2007).
Learning Setting
Given a dependency parse of a sentence, our system identifies argument instances and assigns them to clusters.
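Role-induction systems of this kind often seed clusters from the syntactic relation each argument bears to the verb, before any splitting or merging. The field names and grouping key below are illustrative, not the system's exact representation:

```python
from collections import defaultdict

def seed_clusters(argument_instances):
    """Group argument instances by their dependency relation to the verb,
    producing initial clusters that a split-merge procedure would then
    refine. Each instance is a dict with 'rel' and 'head_word' fields."""
    clusters = defaultdict(list)
    for inst in argument_instances:
        clusters[inst["rel"]].append(inst["head_word"])
    return dict(clusters)
```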
Split-Merge Role Induction
Figure 1: A sample dependency parse with dependency labels SBJ (subject), OBJ (object), NMOD (nominal modifier), OPRD (object predicative complement), PRD (predicative complement), and IM (infinitive marker).
dependency parsing is mentioned in 3 sentences in this paper.
Sun, Ang and Grishman, Ralph and Sekine, Satoshi
Experiments
Preprocessing of the ACE documents: We used the Stanford parser6 for syntactic and dependency parsing .
Experiments
(2008) for dependency parsing.
Introduction
Given an entity pair and a sentence containing the pair, both approaches usually start with multiple level analyses of the sentence such as tokenization, partial or full syntactic parsing, and dependency parsing.
dependency parsing is mentioned in 3 sentences in this paper.
Zhang, Hao and Fang, Licheng and Xu, Peng and Wu, Xiaoyun
Experiments
The default parser in the experiments is a shift-reduce dependency parser (Nivre and Scholz, 2004).
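A shift-reduce dependency parser builds the tree with a stack and buffer, applying one transition per step. The sketch below replays a given transition sequence in the arc-standard system; Nivre and Scholz's parser uses a closely related deterministic transition system, so treat this as an assumption-laden illustration, not their exact algorithm:

```python
def apply_transitions(n_tokens, transitions):
    """Replay a SHIFT / LEFT-ARC / RIGHT-ARC sequence over n_tokens words
    and return the (head, dependent) arcs it derives."""
    stack, buffer, arcs = [], list(range(n_tokens)), []
    for t in transitions:
        if t == "SHIFT":                 # move the next word onto the stack
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":            # second-from-top depends on top
            dependent = stack.pop(-2)
            arcs.append((stack[-1], dependent))
        elif t == "RIGHT-ARC":           # top depends on second-from-top
            dependent = stack.pop()
            arcs.append((stack[-1], dependent))
    return arcs
```

In the actual parser, a classifier chooses each transition from the current stack/buffer configuration, which is what makes parsing fast and deterministic.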
Experiments
We convert dependency parses to constituent trees by propagating the part-of-speech tags of the head words to the corresponding phrase structures.
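The head-POS propagation the excerpt describes can be sketched as a recursive conversion: each head word's subtree becomes a constituent whose label is the head's part-of-speech tag. This is a simplified illustration of the idea, not the authors' exact conversion code:

```python
def dep_to_tree(tokens, pos, heads, i):
    """Convert token i's dependency subtree into a bracketed constituent,
    labeling each phrase with its head word's POS tag. heads[root] is -1;
    heads[j] holds the head index of every other token j."""
    dependents = [j for j, h in enumerate(heads) if h == i]
    if not dependents:
        return f"({pos[i]} {tokens[i]})"
    parts = [f"({pos[j]} {tokens[j]})" if j == i
             else dep_to_tree(tokens, pos, heads, j)
             for j in sorted(dependents + [i])]
    return f"({pos[i]} {' '.join(parts)})"
```

For "John saw Mary" with "saw" as root, this yields a VBD-labeled phrase spanning the whole sentence, mirroring how the head's tag is projected upward.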
Experiments
Our fast deterministic dependency parser does not generate a packed forest.
dependency parsing is mentioned in 3 sentences in this paper.