Index of papers in Proc. ACL 2008 that mention
  • shared task
Nivre, Joakim and McDonald, Ryan
Abstract
By letting one model generate features for the other, we consistently improve accuracy for both models, resulting in a significant improvement of the state of the art when evaluated on data sets from the CoNLL-X shared task.
Experiments
In this section, we present an experimental evaluation of the two guided models based on data from the CoNLL-X shared task, followed by a comparative error analysis including both the base models and the guided models.
Experiments
The data for the experiments are training and test sets for all thirteen languages from the CoNLL-X shared task on multilingual dependency parsing, with training sets ranging in size from 29,000 tokens (Slovene) to 1,249,000 tokens (Czech).
Experiments
Models are evaluated by their labeled attachment score (LAS) on the test set, i.e., the percentage of tokens that are assigned both the correct head and the correct label, using the evaluation software from the CoNLL-X shared task with default settings. Statistical significance was assessed using Dan Bikel’s randomized parsing evaluation comparator with the default setting of 10,000 iterations.
Introduction
Both models have been used to achieve state-of-the-art accuracy for a wide range of languages, as shown in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007), but McDonald and Nivre (2007) showed that a detailed error analysis reveals important differences in the distribution of errors associated with the two models.
Related Work
(2007) to combine six transition-based parsers in the best performing system in the CoNLL 2007 shared task.
shared task is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Zapirain, Beñat and Agirre, Eneko and Màrquez, Lluís
Experimental Setting 3.1 Datasets
The data used in this work is the benchmark corpus provided by the SRL shared task of CoNLL-2005 (Carreras and Màrquez, 2005).
Experimental Setting 3.1 Datasets
The system achieves very good performance in the CoNLL-2005 shared task dataset and in the SRL subtask of the SemEval-2007 English lexical sample task (Zapirain et al., 2007).
On the Generalization of Role Sets
This is the setting used in the CoNLL 2005 shared task (Carreras and Màrquez, 2005).
shared task is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: