Index of papers in Proc. ACL 2010 that mention
  • manually annotated
Kim, Jungi and Li, Jin-Ji and Lee, Jong-Hyeok
Experiment
Three human annotators who are fluent in the two languages manually annotated N-to-N sentence alignments for each language pair (KR-EN, KR-CH, KR-JP).
Experiment
Manual Annotation and Agreement Study
Experiment
To assess the performance of our subjectivity analysis systems, the Korean sentence chunks were manually annotated by two native speakers of Korean with Subjective and Objective labels (Table 1).
Multilanguage-Comparability: Motivation
Evaluating with intensity is not easy for the latter approach; if test corpora already exist with intensity annotations for both languages, normalizing the intensity scores to a comparable scale is necessary (yet is uncertain unless every pair is checked manually); otherwise, every pair of multilingual texts needs a manual annotation of its relative order of intensity.
Related Work
(2008) and Boiy and Moens (2009) have created manually annotated gold standards in target languages and studied various feature selection and learning techniques in machine learning approaches to analyze sentiments in multilingual web documents.
manually annotated is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Huang, Ruihong and Riloff, Ellen
Related Work
Nearly all semantic class taggers are trained using supervised learning with manually annotated data.
Related Work
Each annotator then labeled an additional 35 documents, which gave us a test set containing 100 manually annotated message board posts.
Related Work
With just a fifth of the training set, the system has about 1,600 message board posts to use for training, which yields an F score (roughly 61%) similar to the supervised baseline that used 100 manually annotated posts via 10-fold cross-validation.
manually annotated is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Li, Linlin and Roth, Benjamin and Sporleder, Caroline
Experimental Setup
This dataset consists of 3964 instances of 17 potential English idioms which were manually annotated as literal or nonliteral.
Experiments
The reason is that although this system is claimed to be unsupervised, and it performs better than all the participating systems (including the supervised systems) in the SemEval-2007 shared task, it still needs to incorporate a lot of prior knowledge, specifically information about co-occurrences between different word senses, which was obtained from a number of resources (SSI+LKB) including: (i) SemCor (manually annotated); (ii) LDC-DSO (partly manually annotated); (iii) collocation dictionaries which are then disambiguated semi-automatically.
Experiments
Even though the system is not "trained", it needs a lot of information which is largely dependent on manually annotated data, so it does not fit neatly into the categories Type II or Type III either.
Introduction
One major factor that makes WSD difficult is a relative lack of manually annotated corpora, which hampers the performance of supervised systems.
manually annotated is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Cheung, Jackie Chi Kit and Penn, Gerald
Abstract
First, we show in a sentence ordering experiment that topological field information improves the entity grid model of Barzilay and Lapata (2008) more than grammatical role and simple clausal order information do, particularly when manual annotations of this information are not available.
Introduction
…representations to automatic extraction in the absence of manual annotations.
Introduction
Note, however, that the models based on automatic topological field annotations outperform even the grammatical role-based models using manual annotation (at marginal significance, p < 0.1).
manually annotated is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Feng, Yansong and Lapata, Mirella
Introduction
Obtaining training data in this setting does not require expensive manual annotation as many articles are published together with captioned images.
Related Work
The image parser is trained on a corpus manually annotated with graphs representing image structure.
Related Work
Instead of relying on manual annotation or background ontological information, we exploit a multimodal database of news articles, images, and their captions.
manually annotated is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kummerfeld, Jonathan K. and Roesner, Jessika and Dawborn, Tim and Haggerty, James and Curran, James R. and Clark, Stephen
Conclusion
The result is an accurate and efficient wide-coverage CCG parser that can be easily adapted for NLP applications in new domains without manually annotating data.
Data
For supertagger evaluation, one thousand sentences were manually annotated with CCG lexical categories and POS tags.
Data
For parser evaluation, three hundred of these sentences were manually annotated with DepBank grammatical relations (King et al., 2003) in the style of Briscoe and Carroll (2006).
manually annotated is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: