Index of papers in Proc. ACL 2011 that mention
  • CoNLL
Rüd, Stefan and Ciaramita, Massimiliano and Müller, Jens and Schütze, Hinrich
Experimental data
Table 2: Percentages of NEs in CoNLL, IEER, and KDD (columns: CoNLL trn, CoNLL tst, IEER, KDD-D, KDD-T).
Experimental data
As training data for all models evaluated we used the CoNLL 2003 English NER dataset, a corpus of approximately 300,000 tokens of Reuters news from 1992, annotated with person, location, organization, and miscellaneous NE labels (Tjong Kim Sang and De Meulder, 2003).
Experimental setup
We use BIO encoding as in the original CoNLL task (Tjong Kim Sang and De Meulder, 2003).
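For illustration, a minimal Python sketch of BIO encoding with the CoNLL-2003 label set (PER, LOC, ORG, MISC); the sentence and entity spans below are illustrative examples, not drawn from the dataset:

    def bio_encode(tokens, spans):
        """Convert (start, end, type) entity spans into per-token BIO labels."""
        labels = ["O"] * len(tokens)
        for start, end, etype in spans:
            labels[start] = f"B-{etype}"        # first token of the entity
            for i in range(start + 1, end):
                labels[i] = f"I-{etype}"        # continuation tokens
        return labels

    tokens = ["United", "Nations", "official", "Ekeus", "heads", "for", "Baghdad", "."]
    spans = [(0, 2, "ORG"), (3, 4, "PER"), (6, 7, "LOC")]
    print(list(zip(tokens, bio_encode(tokens, spans))))
    # [('United', 'B-ORG'), ('Nations', 'I-ORG'), ('official', 'O'),
    #  ('Ekeus', 'B-PER'), ('heads', 'O'), ('for', 'O'),
    #  ('Baghdad', 'B-LOC'), ('.', 'O')]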
Related work
(2010) show that when adapting from CoNLL to MUC-7 (Chinchor, 1998) data (thus between different newswire sources), the best unsupervised feature (Brown clusters) improves F1 from .68 to .79.
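As context for the feature mentioned above, a hedged sketch of how Brown-cluster prefix features are typically derived for NER; the cluster bitstrings and the prefix lengths (4, 8, 12) are assumptions for illustration, not values from the cited work:

    BROWN_CLUSTERS = {               # token -> hierarchical cluster bitstring
        "Baghdad": "111010100010",
        "London":  "111010100011",
    }

    def brown_features(token, prefix_lengths=(4, 8, 12)):
        """Emit cluster-prefix features; prefixes of the bitstring act as
        coarse-to-fine word classes shared across domains."""
        bits = BROWN_CLUSTERS.get(token)
        if bits is None:
            return []
        return [f"brown_{p}={bits[:p]}" for p in prefix_lengths]

    print(brown_features("Baghdad"))
    # ['brown_4=1110', 'brown_8=11101010', 'brown_12=111010100010']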
CoNLL is mentioned in 23 sentences in this paper.
Lang, Joel and Lapata, Mirella
Abstract
Evaluation on the CoNLL 2008 benchmark dataset demonstrates that our method outperforms competitive unsupervised approaches by a wide margin.
Experimental Setup
Data: For evaluation purposes, the system’s output was compared against the CoNLL 2008 shared task dataset (Surdeanu et al., 2008), which provides …
Experimental Setup
Our implementation allocates up to N = 21 clusters for each verb: one for each of the 20 most frequent functions in the CoNLL dataset, and a default cluster for all other functions.
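A minimal sketch of this allocation scheme, assuming hypothetical function labels (SBJ, OBJ, ...); it shows only the top-k-plus-default mapping, not the authors' implementation:

    from collections import Counter

    def allocate_clusters(function_counts, n_dedicated=20):
        """Map the n_dedicated most frequent functions to their own cluster id;
        everything else falls into a single default cluster (id n_dedicated)."""
        top = [f for f, _ in Counter(function_counts).most_common(n_dedicated)]
        cluster_of = {f: i for i, f in enumerate(top)}
        default = n_dedicated                 # the 21st cluster catches the rest
        return lambda f: cluster_of.get(f, default)

    cluster_id = allocate_clusters({"SBJ": 900, "OBJ": 700, "ADV": 50, "PRD": 10})
    print(cluster_id("SBJ"), cluster_id("UNSEEN-FUNC"))
    # 0 20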
Introduction
We test the effectiveness of our induction method on the CoNLL 2008 benchmark …
Learning Setting
… with the CoNLL 2008 benchmark dataset used for evaluation in our experiments.
Results
(The following numbers are derived from the CoNLL dataset in the auto/auto setting.)
CoNLL is mentioned in 6 sentences in this paper.