Dependency Parsing with HPSG | For these rules, we refer to the conversion of the Penn Treebank into dependency structures used in the CoNLL 2008 Shared Task, and mark the heads of these rules in a way that arrives at a compatible dependency backbone. |
Dependency Parsing with HPSG | Although our conversion targets the CoNLL shared task dependency structures, minor systematic differences still exist for some phenomena. |
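The head-marking step described above can be illustrated with a minimal head-percolation sketch: a head table selects one child of each constituent, heads percolate down to a lexical item, and every non-head child's lexical head attaches to it. The head table below is a hypothetical toy fragment for illustration, not the actual rules of the CoNLL 2008 conversion.

```python
# Toy head-finding table: phrase label -> (search direction, candidate child labels).
# These entries are illustrative, not the real CoNLL 2008 conversion rules.
HEAD_RULES = {
    "S":  ("right", ["VP", "S"]),
    "VP": ("left",  ["VBD", "VBZ", "VB", "VP"]),
    "NP": ("right", ["NN", "NNS", "NP"]),
}

def find_head(label, children):
    """Pick the head child of a constituent according to HEAD_RULES."""
    direction, candidates = HEAD_RULES.get(label, ("left", []))
    order = children if direction == "left" else list(reversed(children))
    for cand in candidates:
        for child in order:
            if child[0] == cand:
                return child
    return order[0]  # fallback: first child in search order

# Internal nodes are (label, [children]); leaves are (tag, word, index).
def lexical_head(node):
    """Percolate heads down until a lexical (leaf) head is reached."""
    if len(node) == 3:  # leaf
        return node
    label, children = node
    return lexical_head(find_head(label, children))

def dependencies(node, deps):
    """Attach each non-head child's lexical head to the node's lexical head."""
    if len(node) == 3:  # leaf: nothing to attach
        return
    label, children = node
    head_child = find_head(label, children)
    head_leaf = lexical_head(head_child)
    for child in children:
        if child is not head_child:
            deps[lexical_head(child)[2]] = head_leaf[2]  # dependent -> head index
        dependencies(child, deps)

# "The cat slept": (S (NP (DT The) (NN cat)) (VP (VBD slept)))
tree = ("S", [("NP", [("DT", "The", 0), ("NN", "cat", 1)]),
              ("VP", [("VBD", "slept", 2)])])
deps = {}
dependencies(tree, deps)
print(deps)  # {1: 2, 0: 1}: "cat" depends on "slept", "The" on "cat"
```

Different head tables yield different dependency backbones, which is why the text anchors its rules to the CoNLL 2008 conversion specifically.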
Experiment Results & Error Analyses | To evaluate the performance of our different dependency parsing models, we tested our approaches on several dependency treebanks for English in a similar spirit to the CoNLL 2006-2008 Shared Tasks. |
Experiment Results & Error Analyses | In previous years of the CoNLL Shared Tasks, several datasets were created for the purpose of dependency parser evaluation. |
Experiment Results & Error Analyses | The same dataset has been used for the domain adaptation track of the CoNLL 2007 Shared Task. |
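The CoNLL-style evaluation behind these comparisons reduces to two attachment scores: UAS counts tokens whose predicted head is correct, while LAS additionally requires the dependency label to match. A minimal sketch, with illustrative data:

```python
def attachment_scores(gold, pred):
    """Compute (UAS, LAS) from per-token (head_index, deprel) pairs."""
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred))   # head correct
    las = sum(g == p for g, p in zip(gold, pred))         # head and label correct
    n = len(gold)
    return uas / n, las / n

# Illustrative 3-token sentence; one label error ("nsubj" vs "dobj"), no head errors.
gold = [(2, "det"), (2, "nsubj"), (0, "root")]
pred = [(2, "det"), (2, "dobj"), (0, "root")]
uas, las = attachment_scores(gold, pred)
print(f"UAS={uas:.2f} LAS={las:.2f}")  # UAS=1.00 LAS=0.67
```

The official shared-task scorers add details such as excluding punctuation in some years, which this sketch omits.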
Introduction | In the meantime, the successful continuation of the CoNLL Shared Tasks since 2006 (Buchholz and Marsi, 2006; Nivre et al., 2007a; Surdeanu et al., 2008) has witnessed how easy it has become to train a statistical syntactic dependency parser, provided that an annotated treebank is available. |
Parser Domain Adaptation | In recent years, two statistical dependency parsing systems, MaltParser (Nivre et al., 2007b) and MSTParser (McDonald et al., 2005b), representing different threads of research in data-driven machine learning approaches, have gained wide attention for their state-of-the-art performance in open competitions such as the CoNLL Shared Tasks. |
Related Work | PB is a standard corpus for SRL evaluation and was used in the CoNLL SRL shared tasks of 2004 (Carreras and Marquez, 2004) and 2005 (Carreras and Marquez, 2005). |
Related Work | The CoNLL shared tasks of 2004 and 2005 were devoted to SRL, and studied the influence of different syntactic annotations and domain changes on SRL results. |
Related Work | Supervised clause detection was also tackled as a separate task, notably in the CoNLL 2001 shared task (Tjong Kim Sang and Dejean, 2001). |
Experiments | Following the CoNLL 2000 shared task, we use sections 15-18 of the Penn Treebank as labeled training data for the supervised sequence labeler in all experiments (Tjong Kim Sang and Buchholz, 2000). |
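The CoNLL 2000 chunking data used here comes in a simple column format: one token per line with whitespace-separated word, POS tag, and BIO chunk tag, and a blank line between sentences. A minimal reader sketch (the sample string is illustrative):

```python
def read_conll2000(lines):
    """Yield sentences as lists of (word, pos, chunk) tuples."""
    sentence = []
    for line in lines:
        line = line.strip()
        if not line:            # blank line ends the current sentence
            if sentence:
                yield sentence
                sentence = []
        else:
            word, pos, chunk = line.split()
            sentence.append((word, pos, chunk))
    if sentence:                # flush a sentence with no trailing blank line
        yield sentence

# Illustrative fragment in the CoNLL 2000 three-column format.
sample = """He PRP B-NP
reckons VBZ B-VP
the DT B-NP
deficit NN I-NP
"""
sentences = list(read_conll2000(sample.splitlines()))
print(len(sentences), sentences[0][0])  # 1 ('He', 'PRP', 'B-NP')
```

The B-/I- prefixes mark the beginning and continuation of a chunk, which is what makes chunking trainable as a per-token sequence labeling task.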
Experiments | The chunker’s accuracy is roughly in the middle of the range of results from the original CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000). |
Experiments | For our experiment on domain adaptation, we focus on NP chunking and POS tagging, and we use the labeled training data from the CoNLL 2000 shared task as before. |