Index of papers in Proc. ACL 2014 that mention
  • human annotator
Joshi, Aditya and Mishra, Abhijit and Senthamilselvan, Nivvedan and Bhattacharyya, Pushpak
Abstract
The effort required for a human annotator to detect sentiment is not uniform for all texts, irrespective of his/her expertise.
Abstract
As for training data, since any direct judgment of complexity by a human annotator is fraught with subjectivity, we rely on cognitive evidence from eye-tracking.
Abstract
We also study the correlation between a human annotator’s perception of complexity and a machine’s confidence in polarity determination.
Discussion
Our proposed metric measures complexity of sentiment annotation, as perceived by human annotators.
Introduction
The effort required by a human annotator to detect sentiment is not uniform for all texts.
human annotator is mentioned in 5 sentences in this paper.
Hingmire, Swapnil and Chakraborti, Sutanu
Experimental Evaluation
While labeling a topic, we show its 30 most probable words to the human annotator.
Related Work
Also, a human annotator may discard or mislabel a polysemous word, which may affect the performance of a text classifier.
Related Work
In active learning, particular unlabeled documents or features are selected and queried to an oracle (e.g., a human annotator).
Topic Sprinkling in LDA
We then ask a human annotator to assign one or more class labels to the topics based on their most probable words.
human annotator is mentioned in 4 sentences in this paper.
Zhu, Xiaodan and Guo, Hongyu and Mohammad, Saif and Kiritchenko, Svetlana
Conclusions
This paper provides a comprehensive and quantitative study of the behavior of negators through a unified view of fitting human annotation.
Experimental results
When the depths are within 4, the RNTN performs very well and the (human-annotated) prior sentiment of arguments used in PSTN does not bring additional improvement over RNTN.
Introduction
We then extend the models to be dependent on the negators and demonstrate that such a simple extension can significantly improve the performance of fitting to the human-annotated data.
Semantics-enriched modeling
As we have discussed above, we will use the human-annotated sentiment for the arguments, the same as in the models discussed in Section 3.
human annotator is mentioned in 4 sentences in this paper.
Qian, Longhua and Hui, Haotian and Hu, Ya'nan and Zhou, Guodong and Zhu, Qiaoming
Abstract
Active learning (AL) has been proven effective in reducing human annotation efforts in NLP.
Abstract
… machine translation, which make use of multilingual corpora to decrease human annotation efforts by selecting highly informative sentences for a newly added language in multilingual parallel corpora.
Abstract
For future work, on one hand, we plan to combine uncertainty sampling with diversity and informativeness measures; on the other hand, we intend to combine BAL with semi-supervised learning to further reduce human annotation efforts.
human annotator is mentioned in 3 sentences in this paper.