Index of papers in Proc. ACL 2008 that mention
  • dependency parsing
Gómez-Rodríguez, Carlos and Carroll, John and Weir, David
Abstract
We define a new formalism, based on Sikkel’s parsing schemata for constituency parsers, that can be used to describe, analyze and compare dependency parsing algorithms.
Abstract
This abstraction allows us to establish clear relations between several existing projective dependency parsers and prove their correctness.
Dependency parsing schemata
However, parsing schemata are not directly applicable to dependency parsing, since their formal framework is based on constituency trees.
Dependency parsing schemata
In spite of this problem, many of the dependency parsers described in the literature are constructive, in the sense that they proceed by combining smaller structures to form larger ones until they find a complete parse for the input sentence.
Dependency parsing schemata
However, in order to define such a formalism we have to tackle some issues specific to dependency parsers:
Introduction
Dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links (dependencies) between words.
Introduction
In addition to this, some dependency parsers are able to represent nonprojective structures, which is an important feature when parsing free word order languages in which discontinuous constituents are common.
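The projectivity property mentioned above is easy to state operationally: drawn above the sentence, the arcs of a projective tree nest without crossing. A minimal sketch of that check, assuming a simple head-index encoding (the encoding and the toy trees are illustrative, not taken from any of the indexed papers):

```python
def is_projective(heads):
    """heads[i] is the head of word i+1 (words are 1-based); 0 marks the root.
    A tree is projective iff no two arcs, taken as intervals, cross."""
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for l1, r1 in arcs:
        for l2, r2 in arcs:
            # Two arcs cross when exactly one endpoint of the second
            # lies strictly inside the span of the first.
            if l1 < l2 < r1 < r2:
                return False
    return True

print(is_projective([2, 0, 2]))      # nested arcs -> True
print(is_projective([3, 4, 0, 3]))   # arcs (1,3) and (2,4) cross -> False
```

Non-projective trees, which the snippet above notes are common in free word order languages, are exactly those for which this check fails.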
Introduction
However, since parsing schemata are defined as deduction systems over sets of constituency trees, they cannot be used to describe dependency parsers.
dependency parsing is mentioned in 28 sentences in this paper.
Topics mentioned in this paper:
Koo, Terry and Carreras, Xavier and Collins, Michael
Abstract
We present a simple and effective semi-supervised method for training dependency parsers.
Abstract
We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions.
Background 2.1 Dependency parsing
Recent work (Buchholz and Marsi, 2006; Nivre et al., 2007) has focused on dependency parsing.
Background 2.1 Dependency parsing
Dependency parsing depends critically on predicting head-modifier relationships, which can be difficult due to the statistical sparsity of these word-to-word interactions.
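One way to soften the sparsity described above, in the spirit of the cluster-based features this paper's abstract mentions, is to back off sparse head-modifier word pairs to coarser word-cluster identifiers. A hedged sketch, where the bit-string cluster map is a made-up miniature illustration (real systems would derive it from something like Brown clustering, not hand-code it):

```python
# Hypothetical bit-string clusters in the style of Brown clustering.
CLUSTER = {
    "stocks": "0110", "shares": "0110", "prices": "0111",
    "rose":   "1010", "fell":   "1010", "jumped": "1011",
}

def head_modifier_features(head, mod, prefix_len=2):
    """Emit both a lexical and a cluster-level feature for one arc,
    so rare word pairs can still share statistics via their clusters."""
    ch = CLUSTER.get(head, "UNK")[:prefix_len]
    cm = CLUSTER.get(mod, "UNK")[:prefix_len]
    return [
        f"word-pair={head}|{mod}",   # sparse lexical feature
        f"cluster-pair={ch}|{cm}",   # coarse back-off feature
    ]

print(head_modifier_features("rose", "stocks"))
# -> ['word-pair=rose|stocks', 'cluster-pair=10|01']
```

The coarse feature fires for "fell|shares" as well, which is how cluster features let unseen word pairs borrow statistics from seen ones.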
Background 2.1 Dependency parsing
In this paper, we take a part-factored structured classification approach to dependency parsing.
Introduction
To demonstrate the effectiveness of our approach, we conduct experiments in dependency parsing, which has been the focus of much recent research—e.g., see work in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).
Introduction
However, our target task of dependency parsing involves more complex structured relationships than named-entity tagging; moreover, it is not at all clear that word clusters should have any relevance to syntactic structure.
Introduction
Nevertheless, our experiments demonstrate that word clusters can be quite effective in dependency parsing applications.
dependency parsing is mentioned in 18 sentences in this paper.
Topics mentioned in this paper:
Miyao, Yusuke and Saetre, Rune and Sagae, Kenji and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
We evaluate eight parsers (based on dependency parsing, phrase structure parsing, or deep parsing) using five different parse representations.
Introduction
Parsing technologies have improved considerably in the past few years, and high-performance syntactic parsers are no longer limited to PCFG-based frameworks (Charniak, 2000; Klein and Manning, 2003; Charniak and Johnson, 2005; Petrov and Klein, 2007), but also include dependency parsers (McDonald and Pereira, 2006; Nivre and Nilsson, 2005; Sagae and Tsujii, 2007) and deep parsers (Kaplan et al., 2004; Clark and Curran, 2004; Miyao and Tsujii, 2008).
Introduction
In this paper, we present a comparative evaluation of syntactic parsers and their output representations based on different frameworks: dependency parsing, phrase structure parsing, and deep parsing.
Syntactic Parsers and Their Representations
This paper focuses on eight representative parsers that are classified into three parsing frameworks: dependency parsing, phrase structure parsing, and deep parsing.
Syntactic Parsers and Their Representations
2.1 Dependency parsing
Syntactic Parsers and Their Representations
Because the shared tasks of CoNLL-2006 and CoNLL-2007 focused on data-driven dependency parsing, it has recently been extensively studied in parsing research.
dependency parsing is mentioned in 23 sentences in this paper.
Topics mentioned in this paper:
Nivre, Joakim and McDonald, Ryan
Abstract
Previous studies of data-driven dependency parsing have shown that the distribution of parsing errors is correlated with theoretical properties of the models used for learning and inference.
Experiments
The data for the experiments are training and test sets for all thirteen languages from the CoNLL-X shared task on multilingual dependency parsing, with training sets ranging in size from 29,000 tokens (Slovene) to 1,249,000 tokens (Czech).
Experiments
The experimental results presented so far show that feature-based integration is a viable approach for improving the accuracy of both graph-based and transition-based models for dependency parsing, but they say very little about how the integration benefits
Introduction
This is undoubtedly one of the reasons for the emergence of dependency parsers for a wide range of languages.
Introduction
Practically all data-driven models that have been proposed for dependency parsing in recent years can be described as either graph-based or transition-based (McDonald and Nivre, 2007).
Introduction
Both models have been used to achieve state-of-the-art accuracy for a wide range of languages, as shown in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007), but McDonald and Nivre (2007) showed that a detailed error analysis reveals important differences in the distribution of errors associated with the two models.
Two Models for Dependency Parsing
This is a common constraint in many dependency parsing theories and their implementations.
Two Models for Dependency Parsing
Graph-based dependency parsers parameterize a model over smaller substructures in order to search the space of valid dependency graphs and produce the most likely one.
Two Models for Dependency Parsing
As a result, the dependency parsing problem is written:
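The formula itself is clipped from the snippet above; in arc-factored graph-based models of the kind the preceding sentence describes, the problem is standardly written as finding the tree whose summed arc scores are maximal. A brute-force sketch under that standard arc-factored assumption (the score table, sentence length, and function name are illustrative, not this paper's notation):

```python
from itertools import product

def best_tree(score, n):
    """Arc-factored parsing by exhaustive search: maximize the sum of
    score[(h, m)] over all head assignments for an n-word sentence that
    form a tree rooted at the artificial node 0."""
    best, best_heads = float("-inf"), None
    for heads in product(range(n + 1), repeat=n):  # heads[m-1] = head of word m
        if any(h == m for m, h in enumerate(heads, 1)):
            continue                               # no self-loops
        # Keep only assignments in which every word is reachable from root 0.
        reached, frontier = {0}, [0]
        while frontier:
            h = frontier.pop()
            for m, hm in enumerate(heads, 1):
                if hm == h and m not in reached:
                    reached.add(m)
                    frontier.append(m)
        if len(reached) != n + 1:
            continue
        total = sum(score.get((h, m), float("-inf"))
                    for m, h in enumerate(heads, 1))
        if total > best:
            best, best_heads = total, heads
    return best_heads, best

# Toy 2-word sentence with illustrative arc scores:
s = {(0, 1): 1.0, (0, 2): 0.5, (1, 2): 2.0, (2, 1): 0.1}
print(best_tree(s, 2))  # -> ((0, 1), 3.0): word 1 under root, word 2 under word 1
```

Real graph-based parsers replace this exponential enumeration with efficient search, e.g. Eisner-style dynamic programming for projective trees or maximum spanning tree algorithms for non-projective ones; only the "sum of arc scores" objective sketched here is the point.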
dependency parsing is mentioned in 16 sentences in this paper.
Topics mentioned in this paper:
Wang, Qin Iris and Schuurmans, Dale and Lin, Dekang
Abstract
We present a novel semi-supervised training algorithm for learning dependency parsers.
Abstract
To demonstrate the benefits of this approach, we apply the technique to learning dependency parsers from combined labeled and unlabeled corpora.
Dependency Parsing Model
This formulation is sufficiently general to capture most dependency parsing models, including probabilistic dependency models (Eisner, 1996; Wang et al., 2005) as well as non-probabilistic models (McDonald et al., 2005a).
Efficient Optimization Strategy
This procedure works efficiently on the task of training a dependency parser.
Experimental Results
algorithm for achieving a global optimum, we now investigate its effectiveness for dependency parsing.
Experimental Results
We applied the resulting algorithm to learn dependency parsers for both English and Chinese.
Introduction
Supervised learning algorithms still represent the state of the art approach for inferring dependency parsers from data (McDonald et al., 2005a; McDonald and Pereira, 2006; Wang et al., 2007).
Introduction
As we will demonstrate below, this approach admits an efficient training procedure that can find a global minimum, and, perhaps surprisingly, can systematically improve the accuracy of supervised training approaches for learning dependency parsers .
Introduction
ing semi-supervised convex objective to dependency parsing, and obtain significant improvement over the corresponding supervised structured SVM.
Supervised Structured Large Margin Training
This approach corresponds to the training problem posed in (McDonald et al., 2005a) and has yielded the best published results for English dependency parsing.
dependency parsing is mentioned in 20 sentences in this paper.
Topics mentioned in this paper: