Index of papers in Proc. ACL 2013 that mention
  • human annotators
Kozareva, Zornitsa
Conclusion
Of the two tasks, the valence prediction problem was the more challenging for both the human annotators and the automated system.
Metaphors
To conduct our study, we use human annotators to collect metaphor-rich texts (Shutova and Teufel, 2010) and tag each metaphor with its corresponding polarity (Positive/Negative) and valence [-3, +3] scores.
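The excerpt implies a simple annotation record per metaphor: a binary polarity label plus an integer valence score in [-3, +3]. A minimal sketch of such a record, with hypothetical field names (the paper does not specify a schema):

```python
# Hypothetical sketch of one annotation record implied by the excerpt:
# a binary polarity label and a valence score in [-3, +3].
# Field names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class MetaphorAnnotation:
    text: str        # the metaphor-rich sentence
    polarity: str    # "Positive" or "Negative"
    valence: int     # integer in [-3, +3]

    def __post_init__(self):
        assert self.polarity in ("Positive", "Negative")
        assert -3 <= self.valence <= 3

example = MetaphorAnnotation(
    text="Her words cut deeper than a knife.",
    polarity="Negative",
    valence=-3,
)
```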
Task A: Polarity Classification
In our study, the source and target domains are provided by the human annotators, who agree on these definitions; however, the source and target can also be automatically generated by an interpretation system or a concept mapper.
Task B: Valence Prediction
Evaluation Measures: To evaluate the quality of the valence prediction model, we compare the actual valence score of the metaphor given by human annotators, denoted with y, against the valence scores predicted by the regression model, denoted with ŷ.
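As a concrete reading of this comparison, here is a minimal sketch that scores predictions ŷ against the annotators' gold valence y, assuming squared and absolute error as the measures (the excerpt does not name the paper's exact metric):

```python
# Sketch: compare gold annotator valence y with regression predictions y_hat.
# MSE and MAE are assumed here as the error measures; the paper's exact
# evaluation measure may differ.
def mse(y, y_hat):
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

def mae(y, y_hat):
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

gold      = [3, -2, -3, 1]        # annotator valence scores in [-3, +3]
predicted = [2.5, -1.0, -2.8, 0.5]
print(mse(gold, predicted), mae(gold, predicted))
```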
Task B: Valence Prediction
To conduct our valence prediction study, we used the same human annotators from the polarity classification task for each one of the English, Spanish, Russian and Farsi languages.
Task B: Valence Prediction
This means that the LIWC-based valence regression model brings the predicted values closer to those of the human annotators.
human annotators is mentioned in 8 sentences in this paper.
Wang, Chenguang and Duan, Nan and Zhou, Ming and Zhang, Ming
Experiment
Human-annotated data contains 0.3M synonym pairs from the WordNet dictionary.
Paraphrasing for Web Search
Additionally, human-annotated data can also be used as high-quality paraphrases.
Paraphrasing for Web Search
Q_i is the i-th query and D_i^label ⊆ D is a subset of documents, in which the relevance between Q_i and each document is labeled by human annotators.
Paraphrasing for Web Search
The relevance rating labeled by human annotators can be represented by five levels: “Perfect”, “Excellent”, “Good”, “Fair”, and “Bad”.
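Five graded relevance labels like these are commonly scored with NDCG in web-search evaluation. A minimal sketch under that assumption, using the usual 2^grade - 1 gain mapping (the paper's exact gain and cutoff choices are not given in the excerpt):

```python
# Sketch: NDCG over the five-level relevance scale named above.
# The grade mapping and gain function are common defaults, assumed here.
import math

GRADE = {"Bad": 0, "Fair": 1, "Good": 2, "Excellent": 3, "Perfect": 4}

def dcg(labels, k):
    return sum((2 ** GRADE[l] - 1) / math.log2(i + 2)
               for i, l in enumerate(labels[:k]))

def ndcg(ranked_labels, k=10):
    ideal = sorted(ranked_labels, key=lambda l: GRADE[l], reverse=True)
    best = dcg(ideal, k)
    return dcg(ranked_labels, k) / best if best > 0 else 0.0

print(ndcg(["Good", "Perfect", "Bad", "Excellent"], k=4))
```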
human annotators is mentioned in 5 sentences in this paper.
Ferschke, Oliver and Gurevych, Iryna and Rittberger, Marc
Data Selection and Corpus Creation
Table 2: Agreement of human annotator with gold standard
Data Selection and Corpus Creation
In order to test the reliability of these user assigned templates as quality flaw markers, we carried out an annotation study in which a human annotator was asked to perform the binary flaw detection task manually.
Data Selection and Corpus Creation
Table 2 lists the chance-corrected agreement (Cohen's κ) along with the F1 performance of the human annotations against the gold standard corpus.
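A minimal sketch of the chance-corrected agreement statistic cited in Table 2: Cohen's κ over paired binary flaw-detection labels (the label values below are illustrative):

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
# Labels are the binary flaw-detection decisions (annotator vs. gold).
def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n)     # chance agreement
              for l in labels)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

annotator = [1, 0, 1, 1, 0, 1, 0, 0]
gold      = [1, 0, 1, 0, 0, 1, 0, 1]
print(cohens_kappa(annotator, gold))  # 0.5 for this toy example
```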
human annotators is mentioned in 3 sentences in this paper.
Ramteke, Ankit and Malu, Akshat and Bhattacharyya, Pushpak and Nath, J. Saketha
Building domain ontology
Some additional features are added by the human annotator to increase the coverage of the ontology.
Building domain ontology
The abstract concept of storage is contributed by the human annotator through his/her world knowledge.
Building domain ontology
Step 2: The features thus obtained are arranged in the form of a hierarchy by a human annotator.
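An illustrative sketch of this step: extracted features arranged into a hierarchy, with an abstract node such as "storage" contributed by the annotator's world knowledge, as mentioned above (all names here are made up, not from the paper):

```python
# Toy domain-ontology hierarchy: concepts map to sub-concepts or feature
# lists. The "storage" node stands in for an abstract concept a human
# annotator would add; every name below is illustrative.
ontology = {
    "camera": {
        "storage": ["memory card", "internal memory"],  # annotator-added node
        "lens": ["zoom", "aperture"],
        "battery": ["battery life", "charging"],
    }
}

def leaves(node):
    """Collect all leaf features under a node of the hierarchy."""
    if isinstance(node, list):
        return list(node)
    return [f for child in node.values() for f in leaves(child)]

print(leaves(ontology["camera"]))
```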
human annotators is mentioned in 3 sentences in this paper.