Index of papers in Proc. ACL 2008 that mention
  • CoNLL
Zapirain, Beñat and Agirre, Eneko and Màrquez, Lluís
Mapping into VerbNet Thematic Roles
PropBank to VerbNet (hand)   79.17 ±0.9   81.77   72.50
VerbNet (SemEval setting)    78.61 ±0.9   81.28   71.84
PropBank to VerbNet (MF)     77.15 ±0.9   79.09   71.90
VerbNet (CoNLL setting)      76.99 ±0.9   79.44   70.88
Test on Brown:
PropBank to VerbNet (MF)     64.79 ±1.0   68.93   55.94
VerbNet (CoNLL setting)      62.87 ±1.0   67.07   54.69
On the Generalization of Role Sets
Being aware that, in a real scenario, the sense information will not be available, we devised the second setting (‘CoNLL’), where the hand-annotated verb sense information was discarded.
On the Generalization of Role Sets
This is the setting used in the CoNLL 2005 shared task (Carreras and Màrquez, 2005).
On the Generalization of Role Sets
In the second setting (the ‘CoNLL setting’ row in the same table) the PropBank classifier degrades slightly, but the difference is not statistically significant.
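To make the two settings concrete, here is a minimal sketch of how a role classifier's feature set might differ between them; the function and feature names are hypothetical, not taken from the paper.

    # Illustrative only: the SemEval setting keeps the gold verb sense as a
    # feature, while the CoNLL setting discards it. Names are hypothetical.
    def make_features(lemma, gold_sense, arg_span, setting):
        feats = {"lemma": lemma, "span": arg_span}
        if setting == "SemEval":
            feats["sense"] = gold_sense  # hand-annotated sense available
        return feats                      # CoNLL setting: sense discarded

    print(make_features("map", "map.01", (3, 7), "SemEval"))
    print(make_features("map", "map.01", (3, 7), "CoNLL"))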
CoNLL is mentioned in 13 sentences in this paper.
Miyao, Yusuke and Sætre, Rune and Sagae, Kenji and Matsuzaki, Takuya and Tsujii, Jun'ichi
Evaluation Methodology
CoNLL: the dependency tree format used in the 2006 and 2007 CoNLL shared tasks on dependency parsing.
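A minimal sketch of that format (illustrative, not taken from the paper): each token is one tab-separated line with ten columns, ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL.

    # Minimal sketch of reading the 10-column CoNLL-X dependency format.
    sample = (
        "1\tEconomic\teconomic\tADJ\tJJ\t_\t2\tNMOD\t_\t_\n"
        "2\tnews\tnews\tNOUN\tNN\t_\t3\tSBJ\t_\t_\n"
        "3\thad\thave\tVERB\tVBD\t_\t0\tROOT\t_\t_\n"
    )
    for line in sample.splitlines():
        idx, form, _, _, _, _, head, deprel, _, _ = line.split("\t")
        print(f"{form} --{deprel}--> head token {head}")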
Evaluation Methodology
[Table fragment: parsers compared — KSDEP, CONLL, RERANK, NO-RERANK, BERKELEY, STANFORD, ENJU, ENJU-GENIA]
Evaluation Methodology
Although the concept looks similar to CoNLL, this representation [...]
Experiments
[Table fragment: output representations compared — CoNLL, PTB, HD, SD, PAS]
Experiments
Dependency-based representations are competitive, while CoNLL seems superior to HD and SD in spite of the imperfect conversion from PTB to CoNLL.
Experiments
This might be a reason for the high performances of the dependency parsers that directly compute CoNLL dependencies.
Syntactic Parsers and Their Representations
The concept is therefore similar to CoNLL dependencies, though PAS expresses deeper relations, and may include reentrant structures.
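To illustrate what a reentrant structure means here (an invented example, not from the paper): in a control construction one argument is shared by two predicates, so the structure is a graph rather than a tree.

    # Invented example: PAS for "He tried to run". The token "He" is an
    # argument of both predicates, so the structure is reentrant (a DAG).
    pas = {
        "tried": {"arg1": "He", "arg2": "run"},
        "run":   {"arg1": "He"},  # same token reappears as an argument
    }
    shared = set(pas["tried"].values()) & set(pas["run"].values())
    print("reentrant arguments:", shared)  # {'He'}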
CoNLL is mentioned in 9 sentences in this paper.
Vickrey, David and Koller, Daphne
Experiments
We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2-21 of PropBank and used Section 24 as development data.
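For clarity, the split described above in code form (a sketch; both endpoints of the training range are inclusive, per the standard CoNLL-2005 convention):

    # WSJ/PropBank section split used in the CoNLL-2005 SRL shared task.
    TRAIN_SECTIONS = list(range(2, 22))  # sections 2-21 inclusive
    DEV_SECTIONS = [24]                  # development data
    print(TRAIN_SECTIONS, DEV_SECTIONS)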
Experiments
We used the Charniak parses provided by the CoNLL distribution.
Experiments
Our Transforms model takes as input the Charniak parses supplied by the CoNLL release, and labels every node with Core arguments (ARG0–ARG5).
Introduction
Applying our combined simplification/SRL model to the CoNLL 2005 task, we show a significant improvement over a strong baseline model.
Introduction
Our model outperforms all but the best few CoNLL 2005 systems, each of which uses multiple different automatically-generated parses (which would likely improve our model).
CoNLL is mentioned in 5 sentences in this paper.