Index of papers in Proc. ACL 2009 that mention
  • CoNLL
Lin, Dekang and Wu, Xiaoyun
Abstract
Our NER system achieves the best current result on the widely used CoNLL benchmark.
Conclusions
Our system achieved the best current result on the CoNLL NER data set.
Introduction
Our named entity recognition system achieves an F1-score of 90.90 on the CoNLL 2003 English data set, which is about 1 point higher than the previous best result.
Named Entity Recognition
The CoNLL 2003 Shared Task (Tjong Kim Sang and De Meulder, 2003) offered a standard experimental platform for NER.
Named Entity Recognition
The CoNLL data set consists of news articles from Reuters.
Named Entity Recognition
We adopted the same evaluation criteria as the CoNLL 2003 Shared Task.
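For reference, the CoNLL 2003 criteria count an entity as correct only if both its span and its type exactly match the gold annotation; precision, recall, and F1 are computed over whole entities rather than tokens. A minimal Python sketch of that computation (the function and the tuple layout are illustrative, not taken from the paper or the official conlleval script):

    def entity_f1(gold, pred):
        # gold, pred: sets of (sentence_id, start, end, entity_type) tuples,
        # one per entity; an entity counts only on an exact span+type match.
        correct = len(gold & pred)
        precision = correct / len(pred) if pred else 0.0
        recall = correct / len(gold) if gold else 0.0
        if precision + recall == 0.0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # One of two predicted entities matches the single gold entity:
    gold = {(0, 0, 2, "PER")}
    pred = {(0, 0, 2, "PER"), (0, 4, 5, "ORG")}
    print(round(entity_f1(gold, pred), 4))  # 0.6667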
CoNLL is mentioned in 11 sentences in this paper.
Zhang, Yi and Wang, Rui
Dependency Parsing with HPSG
For these rules, we refer to the conversion of the Penn Treebank into dependency structures used in the CoNLL 2008 Shared Task, and mark the heads of these rules in a way that will arrive at a compatible dependency backbone.
Dependency Parsing with HPSG
In combination with the right-branching analysis of coordination in ERG, this leads to the same dependency attachment in the CoNLL syntax.
Dependency Parsing with HPSG
… the CoNLL shared task dependency structures, minor systematic differences still exist for some phenomena.
Experiment Results & Error Analyses
To evaluate the performance of our different dependency parsing models, we tested our approaches on several dependency treebanks for English in a similar spirit to the CoNLL 2006-2008 Shared Tasks.
Experiment Results & Error Analyses
In previous years of the CoNLL Shared Tasks, several datasets have been created for the purpose of dependency parser evaluation.
Experiment Results & Error Analyses
Our experiments adhere to the CoNLL 2008 dependency syntax (Yamada et al. …).
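For reference, the CoNLL shared-task dependency syntax encodes each sentence as one token per line with tab-separated columns, where the HEAD column holds the index of the token's head (0 for the root). A minimal Python sketch using the well-known 10-column CoNLL-X layout (the 2008 task uses a related layout with additional semantic-role columns; the example sentence and values are illustrative):

    # One sentence in the 10-column CoNLL-X format:
    # ID FORM LEMMA CPOSTAG POSTAG FEATS HEAD DEPREL PHEAD PDEPREL
    sentence = (
        "1\tEconomic\teconomic\tADJ\tJJ\t_\t2\tNMOD\t_\t_\n"
        "2\tnews\tnews\tNOUN\tNN\t_\t3\tSBJ\t_\t_\n"
        "3\tspread\tspread\tVERB\tVBD\t_\t0\tROOT\t_\t_"
    )

    for line in sentence.splitlines():
        cols = line.split("\t")
        form, head, deprel = cols[1], int(cols[6]), cols[7]
        print(f"{form} <-{deprel}- {head}")  # e.g. "Economic <-NMOD- 2"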
Introduction
In the meantime, the successful continuation of the CoNLL Shared Tasks since 2006 (Buchholz and Marsi, 2006; Nivre et al., 2007a; Surdeanu et al., 2008) has witnessed how easy it has become to train a statistical syntactic dependency parser, provided that an annotated treebank is available.
Parser Domain Adaptation
In recent years, two statistical dependency parsing systems, MaltParser (Nivre et al., 2007b) and MSTParser (McDonald et al., 2005b), representing different threads of research in data-driven machine learning approaches, have gained high visibility for their state-of-the-art performance in open competitions such as the CoNLL Shared Tasks.
CoNLL is mentioned in 20 sentences in this paper.
Huang, Fei and Yates, Alexander
Experiments
Following the CoNLL shared task from 2000, we use sections 15-18 of the Penn Treebank as the labeled training data for the supervised sequence labeler in all experiments (Tjong Kim Sang and Buchholz, 2000).
Experiments
We tested the accuracy of our models for chunking and POS tagging on section 20 of the Penn Treebank, which corresponds to the test set from the CoNLL 2000 task.
Experiments
The chunker’s accuracy is roughly in the middle of the range of results from the original CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000).
Related Work
Ando and Zhang develop a semi-supervised chunker that outperforms purely supervised approaches on the CoNLL 2000 dataset (Ando and Zhang, 2005).
CoNLL is mentioned in 5 sentences in this paper.
Ganchev, Kuzman and Gillenwater, Jennifer and Taskar, Ben
Abstract
We evaluate our approach on Bulgarian and Spanish CoNLL shared task data and show that we consistently outperform unsupervised methods and can outperform supervised learning for limited training data.
Experiments
… treebank corpus from CoNLL-X.
Experiments
Figure 2: Learning curve of the discriminative no-rules transfer model on Bulgarian bitext, testing on CoNLL train sentences of up to 10 words.
Introduction
We evaluate our results on the Bulgarian and Spanish corpora from the CoNLL-X shared task.
CoNLL is mentioned in 4 sentences in this paper.
Abend, Omri and Reichart, Roi and Rappoport, Ari
Related Work
PB is a standard corpus for SRL evaluation and was used in the CoNLL SRL shared tasks of 2004 (Carreras and Marquez, 2004) and 2005 (Carreras and Marquez, 2005).
Related Work
The CoNLL shared tasks of 2004 and 2005 were devoted to SRL, and studied the influence of different syntactic annotations and domain changes on SRL results.
Related Work
Supervised clause detection was also tackled as a separate task, notably in the CoNLL 2001 shared task (Tjong Kim Sang and Dejean, 2001).
CoNLL is mentioned in 3 sentences in this paper.
Tsuruoka, Yoshimasa and Tsujii, Jun'ichi and Ananiadou, Sophia
Log-Linear Models
The first set of experiments used the text chunking data set provided for the CoNLL 2000 shared task. The training data consists of 8,936 sentences in which each token is annotated with “IOB” tags representing text chunks such as noun and verb phrases.
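For reference, IOB tags mark each token as Beginning a chunk, Inside one, or Outside all chunks, so phrase boundaries are encoded as a per-token labeling. A minimal Python sketch recovering chunk spans from a tag sequence (the helper is illustrative, not from the paper):

    def iob_to_chunks(tags):
        # Convert per-token IOB tags (B-NP, I-NP, O, ...) into
        # (chunk_type, start, end) spans with an exclusive end index.
        chunks, start, ctype = [], None, None
        for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last chunk
            if tag.startswith("B-") or tag == "O" or (ctype and tag[2:] != ctype):
                if ctype is not None:
                    chunks.append((ctype, start, i))
                    ctype = None
            if tag.startswith("B-") or (tag.startswith("I-") and ctype is None):
                start, ctype = i, tag[2:]
        return chunks

    # "He reckons the current account deficit" -> NP, VP, NP
    print(iob_to_chunks(["B-NP", "B-VP", "B-NP", "I-NP", "I-NP", "I-NP"]))
    # [('NP', 0, 1), ('VP', 1, 2), ('NP', 2, 6)]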
Log-Linear Models
Figure 3: CoNLL 2000 chunking task: Objective
Log-Linear Models
Figure 4: CoNLL 2000 chunking task: Number of active features.
CoNLL is mentioned in 3 sentences in this paper.