Index of papers in Proc. ACL 2012 that mention "manually annotated"
Liu, Xiaohua and Zhou, Ming and Zhou, Xiangyang and Fu, Zhongyang and Wei, Furu
Abstract
We evaluate our method on a manually annotated data set, and show that our method outperforms the baseline that handles these two tasks separately, boosting the F1 from 80.2% to 83.6% for NER, and the Accuracy from 79.4% to 82.6% for NEN, respectively.
Conclusions and Future work
We evaluate our method on a manually annotated data set.
Experiments
We manually annotate a data set to evaluate our method.
Related Work
is trained on a manually annotated data set, which achieves an F1 of 81.48% on the test data set; Chiticariu et al.
"manually annotated" is mentioned in 4 sentences in this paper.
Branavan, S.R.K. and Kushman, Nate and Lei, Tao and Barzilay, Regina
Experimental Setup
We also manually annotated the relations expressed in the text, identifying 94 of the Candidate Relations as valid.
Experimental Setup
Evaluation Metrics We use our manual annotations to evaluate the type-level accuracy of relation extraction.
Experimental Setup
The first, Manual Text, is a variant of our model which directly uses the links derived from manual annotations of preconditions in text.
"manually annotated" is mentioned in 3 sentences in this paper.
Kim, Seokhwan and Lee, Gary Geunbae
Evaluation
The experiments were performed on the manually annotated Korean test dataset.
Introduction
Several datasets that provide manual annotations of semantic relationships are available from MUC (Grishman and Sundheim, 1996) and ACE (Doddington et al., 2004) projects, but these datasets contain labeled training examples in only a few major languages, including English, Chinese, and Arabic.
Introduction
Because manual annotation of semantic relations for such resource-poor languages is very expensive, we instead consider weakly supervised learning techniques (Riloff and Jones, 1999; Agichtein and Gravano, 2000; Zhang, 2004; Chen et al., 2006) to learn the relation extractors without significant annotation efforts.
"manually annotated" is mentioned in 3 sentences in this paper.
Kim, Sungchul and Toutanova, Kristina and Yu, Hwanjo
Data and task
The approach uses a small number of manually annotated article pairs to train a document-level CRF model for parallel sentence extraction.
Data and task
Of these, we manually annotated 91 English-Bulgarian and 79 English-Korean sentence pairs with source and target named entities as well as word-alignment links among named entities in the two languages.
Data and task
At test time we use the local+global Wiki-based tagger to define the English entities and we don’t use the manually annotated alignments.
"manually annotated" is mentioned in 3 sentences in this paper.
Li, Fangtao and Pan, Sinno Jialin and Jin, Ou and Yang, Qiang and Zhu, Xiaoyan
Introduction
However, the performance of these methods relies heavily on manually annotated training data.
Introduction
However, these methods require manually annotating a large amount of training data in each domain.
Introduction
The sentiment and topic words are manually annotated.
"manually annotated" is mentioned in 3 sentences in this paper.