Index of papers in Proc. ACL 2014 that mention
  • manually annotated
Prabhakaran, Vinodkumar and Rambow, Owen
Data
We excluded a small subset of 419 threads that was used for previous manual annotation efforts, part of which was also used to train the DA and ODP taggers (Section 5) that generate features for our system.
Motivation
Another limitation of (Prabhakaran and Rambow, 2013) is that we used manual annotations for many of our features such as dialog acts and overt displays of power.
Motivation
Relying on manual annotations for features limited our analysis to a small subset of the Enron corpus, which has only 18 instances of hierarchical power.
Motivation
Like (Prabhakaran and Rambow, 2013), we use features to capture the dialog structure, but we use automatic taggers to generate them and assume no manual annotation at all at training or test time.
Structural Analysis
DIAPR: In (Prabhakaran and Rambow, 2013), we used dialog features derived from manual annotations — dialog acts (DA) and overt displays of power (ODP) — to model the structure of interactions within the message content.
Structural Analysis
In this work, we obtain DA and ODP tags on the entire corpus using automatic taggers trained on those manual annotations.
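As a rough illustration of the setup these excerpts describe, the sketch below trains a tagger on a small manually annotated subset and then tags unannotated messages automatically so that the predicted tags, not manual annotations, serve as downstream features. It assumes scikit-learn and uses invented messages and labels, not the authors' actual DA/ODP taggers or feature sets.
    # Minimal sketch: train a dialog-act tagger on a manually annotated subset,
    # then tag unannotated messages automatically; the predicted tags (not
    # manual annotations) become features for a downstream system. Data invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    annotated_msgs = [
        "Can you send me the report?",
        "Here is the report you asked for.",
        "Please review the draft by Friday.",
        "I have attached the final numbers.",
    ]
    annotated_tags = ["request", "inform", "request", "inform"]

    da_tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
    da_tagger.fit(annotated_msgs, annotated_tags)

    unannotated_msgs = ["Could you set up the meeting?", "The meeting is confirmed."]
    print(list(da_tagger.predict(unannotated_msgs)))  # predicted tags used as features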
manually annotated is mentioned in 6 sentences in this paper.
Tang, Duyu and Wei, Furu and Yang, Nan and Zhou, Ming and Liu, Ting and Qin, Bing
Introduction
Most existing approaches follow Pang et al. (2002) and employ machine learning algorithms to build classifiers from tweets with manually annotated sentiment polarity.
Introduction
We learn the sentiment-specific word embedding from tweets, leveraging massive tweets with emoticons as distant-supervised corpora without any manual annotations.
Introduction
We develop three neural networks to learn sentiment-specific word embedding (SSWE) from massive distant-supervised tweets without any manual annotations;
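A minimal sketch of the distant-supervision step these sentences describe, with emoticons acting as noisy sentiment labels so that no manual annotation is needed; the emoticon sets and tweets below are illustrative placeholders, not the authors' resources.
    # Emoticon-based distant supervision: tweets containing positive or negative
    # emoticons receive noisy sentiment labels without any manual annotation.
    # Emoticon sets and tweets are illustrative placeholders.
    POSITIVE_EMOTICONS = {":)", ":-)", ":D"}
    NEGATIVE_EMOTICONS = {":(", ":-(", ":'("}

    def distant_label(tweet):
        tokens = set(tweet.split())
        has_pos = bool(tokens & POSITIVE_EMOTICONS)
        has_neg = bool(tokens & NEGATIVE_EMOTICONS)
        if has_pos and not has_neg:
            return "positive"
        if has_neg and not has_pos:
            return "negative"
        return None  # ambiguous or no emoticon: discard

    tweets = ["great game tonight :)", "stuck in traffic again :(", "new phone arrived"]
    labeled = [(t, distant_label(t)) for t in tweets if distant_label(t)]
    print(labeled)  # noisy training pairs for learning sentiment-specific embeddings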
Related Work
The sentiment classifier is built from tweets with manually annotated sentiment polarity.
Related Work
The reason is that RAE and NBSVM learn the representation of tweets from the small-scale manually annotated training set, which cannot adequately capture the comprehensive linguistic phenomena of words.
manually annotated is mentioned in 5 sentences in this paper.
Daxenberger, Johannes and Gurevych, Iryna
Abstract
We manually annotated a corpus of 636 corresponding and non-corresponding edit-turn-pairs.
Conclusion
To test this system, we manually annotated a corpus of corresponding and non-corresponding edit-turn-pairs.
Conclusion
With regard to future work, an extension of the manually annotated corpus is the most important issue.
Corpus
To assess the reliability of these annotations, one of the coauthors manually annotated a random subset of 100 edit-turn-pairs contained in ETP-gold as corresponding or non-corresponding.
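A reliability check of this kind, where a second coder re-labels a random subset and chance-corrected agreement is computed, might be sketched as follows; the labels are invented, and Cohen's kappa is used only as a typical agreement measure, not necessarily the one reported in the paper.
    # Minimal sketch: chance-corrected agreement (Cohen's kappa) between the
    # original labels and a second coder's labels on a random subset.
    # Labels below are invented; the paper's actual measure may differ.
    from collections import Counter

    def cohen_kappa(labels_a, labels_b):
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a = Counter(labels_a)
        freq_b = Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    coder1 = ["corresponding", "non-corresponding", "corresponding", "corresponding"]
    coder2 = ["corresponding", "non-corresponding", "non-corresponding", "corresponding"]
    print(round(cohen_kappa(coder1, coder2), 3))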
manually annotated is mentioned in 4 sentences in this paper.
Severyn, Aliaksei and Moschitti, Alessandro and Uryupina, Olga and Plank, Barbara and Filippova, Katja
Abstract
An extensive empirical evaluation on our manually annotated YouTube comments corpus shows a high classification accuracy and highlights the benefits of structural models in a cross-domain setting.
Experiments
This is an important advantage of our structural approach, since we cannot realistically expect to obtain manual annotations for 10k+ comments for each of the (many thousands of) product domains present on YouTube.
Introduction
The second contribution of the paper is the creation and annotation (by an expert coder) of a comment corpus containing 35k manually labeled comments for two product YouTube domains: tablets and automobiles. It is the first manually annotated corpus that enables researchers to use supervised methods on YouTube for comment classification and opinion analysis.
YouTube comments corpus
For each video, we extracted all available comments (limited to a maximum of 1k comments per video) and manually annotated each comment with its type and polarity.
manually annotated is mentioned in 4 sentences in this paper.
Bansal, Mohit and Burkett, David and de Melo, Gerard and Klein, Dan
Introduction
Our model is also the first to directly learn relational patterns as part of the process of training an end-to-end taxonomic induction system, rather than using patterns that were hand-selected or learned via pairwise classifiers on manually annotated co-occurrence patterns.
Related Work
Both of these systems use a process that starts by finding basic level terms (leaves of the final taxonomy tree, typically) and then using relational patterns (hand-selected ones in the case of Kozareva and Hovy (2010), and ones learned separately by a pairwise classifier on manually annotated co-occurrence patterns for Navigli and Velardi (2010), Navigli et al.
Related Work
Our model also automatically learns relational patterns as a part of the taxonomic training phase, instead of relying on handpicked rules or pairwise classifiers on manually annotated co-occurrence patterns, and it is the first end-to-end (i.e., non-incremental) system to include heterogeneous relational information via sibling (e.g., coordination) patterns.
manually annotated is mentioned in 3 sentences in this paper.
Chen, Yanping and Zheng, Qinghua and Zhang, Wei
Feature Construction
Head Noun: The head noun (or head mention) of an entity mention is manually annotated.
Feature Construction
Third, the entity mentions are manually annotated.
Related Work
A disadvantage of TRE systems is that a manually annotated corpus is required, which is time-consuming and costly in human labor.
manually annotated is mentioned in 3 sentences in this paper.
Dong, Li and Wei, Furu and Tan, Chuanqi and Tang, Duyu and Zhou, Ming and Xu, Ke
Abstract
Furthermore, we introduce a manually annotated dataset for target-dependent Twitter sentiment analysis.
Experiments
After obtaining the tweets, we manually annotate the sentiment labels (negative, neutral, positive) for these targets.
Introduction
In addition, we introduce a manually annotated dataset, and conduct extensive experiments on it.
manually annotated is mentioned in 3 sentences in this paper.
Hashimoto, Chikara and Torisawa, Kentaro and Kloetzer, Julien and Sano, Motoki and Varga, István and Oh, Jong-Hoon and Kidawara, Yutaka
Event Causality Extraction Method
We acquired 43,697 excitation templates by Hashimoto et al.’s method and the manual annotation of excitation template candidates. We applied the excitation filter to all 272,025,401 event causality candidates from the web and 132,528,706 remained.
Experiments
Note that some event causality candidates were not given excitation values for their templates, since some templates were acquired by manual annotation without Hashimoto et al.’s method.
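The excitation filter described in these two sentences can be pictured, very roughly, as keeping only those candidates whose template has an excitation value; the templates, values, and candidates below are invented stand-ins, not the authors' resources.
    # Toy sketch of a template-based excitation filter: keep only event causality
    # candidates whose template appears among the acquired excitation templates.
    # Templates, values, and candidates are invented placeholders.
    excitation_values = {"cause X": 0.9, "prevent X": -0.8, "destroy X": -0.7}

    candidates = [
        {"cause": "heavy rain", "effect": "flooding", "template": "cause X"},
        {"cause": "vaccination", "effect": "infection", "template": "prevent X"},
        {"cause": "meeting", "effect": "agenda", "template": "discuss X"},  # no excitation value
    ]

    filtered = [c for c in candidates if c["template"] in excitation_values]
    print(len(candidates), "->", len(filtered))  # at full scale: 272,025,401 -> 132,528,706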
Introduction
To make event causality self-contained, we wrote guidelines for manually annotating training/development/test data.
manually annotated is mentioned in 3 sentences in this paper.
Kim, Seokhwan and Banchs, Rafael E. and Li, Haizhou
Evaluation
All the recorded dialogs, with a total length of 21 hours, were manually transcribed; these transcribed dialogs, comprising 19,651 utterances, were then manually annotated with the following nine topic categories: Opening, Closing, Itinerary, Accommodation, Attraction, Food, Transportation, Shopping, and Other.
Evaluation
For the linear kernel baseline, we used the following features: n-gram words, previous system actions, and current user acts which were manually annotated.
Evaluation
All the evaluations were done with fivefold cross-validation against the manual annotations, using two different metrics: one is the accuracy of the predicted topic label for every turn, and the other is precision/recall/F-measure for each topic transition event that occurred either in the answer or in the predicted result.
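The two evaluation views mentioned here, per-turn label accuracy and precision/recall/F-measure over topic-transition events, can be sketched on invented label sequences as below; the exact criterion for matching a transition event is an assumption for illustration.
    # Minimal sketch of the two evaluation views described above, on invented data:
    # (1) accuracy of the predicted topic label for every turn, and
    # (2) precision/recall/F-measure over topic-transition events, where a
    #     transition is counted at any turn whose topic differs from the previous one.
    reference = ["Opening", "Food", "Food", "Transportation", "Closing"]
    predicted = ["Opening", "Food", "Attraction", "Transportation", "Closing"]

    accuracy = sum(r == p for r, p in zip(reference, predicted)) / len(reference)

    def transitions(labels):
        # Set of (turn index, new topic) pairs where the topic changes.
        return {(i, labels[i]) for i in range(1, len(labels)) if labels[i] != labels[i - 1]}

    ref_events, pred_events = transitions(reference), transitions(predicted)
    tp = len(ref_events & pred_events)
    precision = tp / len(pred_events) if pred_events else 0.0
    recall = tp / len(ref_events) if ref_events else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    print(accuracy, precision, recall, f1)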
manually annotated is mentioned in 3 sentences in this paper.
Zhu, Xiaodan and Guo, Hongyu and Mohammad, Saif and Kiritchenko, Svetlana
Experiment setup
Data: As described earlier, the Stanford Sentiment Treebank (Socher et al., 2013) has manually annotated, real-valued sentiment values for all phrases in parse trees.
Experiment setup
The phrases at all tree nodes were manually annotated with one of 25 sentiment values that uniformly span between the positive and negative poles.
Introduction
The recently available Stanford Sentiment Treebank (Socher et al., 2013) renders manually annotated, real-valued sentiment scores for all phrases in parse trees.
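One way to read the 25-value annotation scheme mentioned above is as a uniform quantization of a real-valued sentiment score; the sketch below assumes a [0, 1] scale and nearest-point rounding, which are illustrative choices rather than details taken from the treebank.
    # Minimal sketch: map a real-valued sentiment score in [0, 1] onto one of 25
    # values that uniformly span the negative-to-positive range. The [0, 1] scale
    # and the rounding rule are assumptions for illustration.
    NUM_VALUES = 25

    def quantize(score, num_values=NUM_VALUES):
        assert 0.0 <= score <= 1.0
        step = 1.0 / (num_values - 1)          # 24 equal intervals between the poles
        index = round(score / step)            # nearest of the 25 grid points
        return index, index * step             # discrete label and its real value

    for s in (0.0, 0.13, 0.5, 0.98):
        print(s, "->", quantize(s))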
manually annotated is mentioned in 3 sentences in this paper.