Index of papers in Proc. ACL 2014 that mention
  • classification task
Hermann, Karl Moritz and Blunsom, Phil
Abstract
We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art.
Corpora
The Europarl corpus v7 (Koehn, 2005) was used during initial development and testing of our approach, as well as to learn the representations used for the Cross-Lingual Document Classification task described in §5.2.
Experiments
First, we replicate the cross-lingual document classification task of Klementiev et al.
Experiments
multi-label classification task using the TED corpus, both for training and evaluating.
Experiments
Each document in the classification task is represented by the average of the d-dimensional representations of all its sentences.
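A minimal sketch (not the authors' code) of the averaging step described above, assuming each sentence is already available as a d-dimensional numpy vector:

import numpy as np

def document_vector(sentence_vectors):
    # Average the d-dimensional sentence representations to obtain one
    # d-dimensional document representation, as described in the excerpt.
    return np.mean(np.stack(list(sentence_vectors)), axis=0)

# Example with three 4-dimensional sentence vectors.
doc = document_vector([np.random.randn(4) for _ in range(3)])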
classification task is mentioned in 7 sentences in this paper.
Yang, Bishan and Cardie, Claire
Approach
We formulate the sentence-level sentiment classification task as a sequence labeling problem.
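The excerpt casts sentence-level sentiment as sequence labeling over the sentences of a document. The sketch below shows only the generic Viterbi decoding step that such a formulation relies on; the score matrices are placeholders, not the authors' model.

import numpy as np

def viterbi(emission_scores, transition_scores):
    # emission_scores: (n_sentences, n_labels) per-sentence label scores.
    # transition_scores: (n_labels, n_labels) score for moving from label i to label j.
    n, k = emission_scores.shape
    dp = np.empty((n, k))
    back = np.zeros((n, k), dtype=int)
    dp[0] = emission_scores[0]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + transition_scores + emission_scores[t][None, :]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]  # best label index per sentence

# Example: 4 sentences, labels {0: negative, 1: neutral, 2: positive}.
labels = viterbi(np.random.randn(4, 3), np.random.randn(3, 3))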
Conclusion
Our experiments show that our model achieves better accuracy than existing supervised and semi-supervised models for the sentence-level sentiment classification task.
Experiments
For the three-way classification task on the MD dataset, we also implemented the following baselines: (4) VOTEFLIP: a rule-based algorithm that leverages the positive, negative and neutral cues along with the effect of negation to determine the sentence sentiment (Choi and Cardie, 2009).
Experiments
We first report results on a binary (positive or negative) sentence-level sentiment classification task.
Experiments
We also analyzed the model’s performance on a three-way sentiment classification task.
Introduction
We evaluate our approach on the sentence-level sentiment classification task using two standard product review datasets.
classification task is mentioned in 7 sentences in this paper.
Severyn, Aliaksei and Moschitti, Alessandro and Uryupina, Olga and Plank, Barbara and Filippova, Katja
Experiments
Similar to the opinion classification task, comment type classification is a multi-class classification with three classes: video, product and uninform.
Experiments
Table 1: Summary of YouTube comments data used in the sentiment, type and full classification tasks.
Experiments
We cast this problem as a single multi-class classification task with seven classes: the Cartesian product between {product, video} type labels and {positive, neutral, negative} sentiment labels plus the uninformative category (spam and off-topic).
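The seven-class label space described above can be built mechanically; a small illustrative sketch (the label strings are assumptions, not the paper's exact identifiers):

from itertools import product

TYPE_LABELS = ["product", "video"]
SENTIMENT_LABELS = ["positive", "neutral", "negative"]

# Cartesian product of type and sentiment labels (6 classes), plus the
# uninformative bucket for spam and off-topic comments, gives 7 classes.
JOINT_CLASSES = ["{}/{}".format(t, s) for t, s in product(TYPE_LABELS, SENTIMENT_LABELS)]
JOINT_CLASSES.append("uninformative")
assert len(JOINT_CLASSES) == 7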
Representations and models
Our structures are specifically adapted to the noisy user-generated texts and encode important aspects of the comments, e.g., words from the sentiment lexicons, product concepts and negation words, which specifically targets the sentiment and comment type classification tasks.
YouTube comments corpus
The resulting annotator agreement α value (Krippendorff, 2004; Artstein and Poesio, 2008) scores are 60.6 (AUTO), 72.1 (TABLETS) for the sentiment task and 64.1 (AUTO), 79.3 (TABLETS) for the type classification task.
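One way to compute Krippendorff's alpha for annotations like these is NLTK's agreement module; the toy labels below are invented for illustration and are not the YouTube data.

from nltk.metrics.agreement import AnnotationTask

# Each tuple is (annotator, item, label).
annotations = [
    ("coder_1", "c1", "positive"), ("coder_2", "c1", "positive"),
    ("coder_1", "c2", "negative"), ("coder_2", "c2", "neutral"),
    ("coder_1", "c3", "neutral"),  ("coder_2", "c3", "neutral"),
    ("coder_1", "c4", "positive"), ("coder_2", "c4", "positive"),
]

task = AnnotationTask(data=annotations)
print("Krippendorff's alpha:", round(task.alpha(), 3))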
classification task is mentioned in 6 sentences in this paper.
Gkatzia, Dimitra and Hastie, Helen and Lemon, Oliver
Introduction
We frame content selection as a simple classification task: given a set of time-series data, decide for each template whether it should be included in a summary or not.
Methodology
The LP method transforms the ML task into one single-label multi-class classification task, where the possible set of predicted variables for the transformed class is the powerset of labels present in the original dataset.
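A bare-bones sketch of the label powerset (LP) transformation described above; in practice only the label combinations observed in the training data become classes. The helper name and example labels are assumptions for illustration.

def label_powerset_transform(y_multilabel):
    # Map each set of labels to a single class id, turning a multi-label
    # problem into one single-label multi-class problem.
    class_index = {}
    y_single = []
    for labels in y_multilabel:
        key = frozenset(labels)
        if key not in class_index:
            class_index[key] = len(class_index)
        y_single.append(class_index[key])
    return y_single, class_index

# Example: each instance selects a subset of templates.
y = [{"trend", "weather"}, {"weather"}, {"trend", "weather"}, set()]
y_lp, classes = label_powerset_transform(y)  # y_lp == [0, 1, 0, 2]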
Related Work
Collective content selection (Barzilay and Lapata, 2004) is similar to our proposed method in that it is a classification task that predicts the templates from the same instance simultaneously.
Related Work
Problem transformation approaches (Tsoumakas and Katakis, 2007) transform the ML classification task into one or more simple classification tasks.
classification task is mentioned in 4 sentences in this paper.
Yang, Min and Zhu, Dingju and Chow, Kam-Pui
Conclusions and Future Work
Experimental results showed that the emotional lexicons generated by our algorithm are of high quality, and can assist the emotion classification task.
Experiments
Since there is no metric explicitly measuring the quality of an emotion lexicon, we demonstrate the performance of our algorithm in two ways: (1) we perform a case study for the lexicon generated by our algorithm, and (2) we compare the results of solving the emotion classification task using our lexicon against different methods, and demonstrate the advantage of our lexicon over other lexicons and other emotion classification systems.
Experiments
We compare the performance between a popular emotion lexicon, WordNet-Affect (Strapparava and Valitutti, 2004), and our approach for the emotion classification task.
Experiments
In particular, we are able to obtain an overall F1-score of 10.52% for the disgust classification task, which is difficult to work out using pre-
classification task is mentioned in 4 sentences in this paper.
Hingmire, Swapnil and Chakraborti, Sutanu
Experimental Evaluation
Following are the three classification tasks associated with this dataset.
Experimental Evaluation
For the SRAA dataset we infer 8 topics on the training dataset and label these 8 topics for all three classification tasks.
Experimental Evaluation
However, ClassifyLDA performs better than TS-LDA for the three classification tasks of SRAA dataset.
classification task is mentioned in 3 sentences in this paper.
Kang, Jun Seok and Feng, Song and Akoglu, Leman and Choi, Yejin
Pairwise Markov Random Fields and Loopy Belief Propagation
We formulate the task of learning sense- and word-level connotation lexicon as a graph-based classification task (Sen et al., 2008).
Pairwise Markov Random Fields and Loopy Belief Propagation
In this classification task, we denote by 3?
Pairwise Markov Random Fields and Loopy Belief Propagation
Problem Definition Having introduced our graph-based classification task and objective formulation, we define our problem more formally.
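A generic sketch of sum-product loopy belief propagation on a pairwise MRF with binary labels, in the spirit of the graph-based formulation above; the potentials and graph are placeholders, not the authors' connotation graph.

import numpy as np

def loopy_bp(num_nodes, edges, node_potentials, edge_potential, n_iters=20):
    # Sum-product loopy BP for binary labels.
    # node_potentials: (num_nodes, 2); edge_potential: (2, 2) shared by all edges.
    neighbors = {v: [] for v in range(num_nodes)}
    messages = {}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
        messages[(i, j)] = np.ones(2)
        messages[(j, i)] = np.ones(2)
    for _ in range(n_iters):
        new_messages = {}
        for (i, j) in messages:
            incoming = [messages[(k, i)] for k in neighbors[i] if k != j]
            prod_in = np.prod(incoming, axis=0) if incoming else np.ones(2)
            msg = edge_potential.T @ (node_potentials[i] * prod_in)
            new_messages[(i, j)] = msg / msg.sum()  # normalize for stability
        messages = new_messages
    beliefs = np.array([
        node_potentials[v] * np.prod([messages[(k, v)] for k in neighbors[v]], axis=0)
        for v in range(num_nodes)
    ])
    return beliefs / beliefs.sum(axis=1, keepdims=True)  # approximate marginals

# Example: a small triangle graph with a homophily edge potential.
beliefs = loopy_bp(
    num_nodes=3,
    edges=[(0, 1), (1, 2), (0, 2)],
    node_potentials=np.array([[0.9, 0.1], [0.5, 0.5], [0.3, 0.7]]),
    edge_potential=np.array([[0.8, 0.2], [0.2, 0.8]]),
)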
classification task is mentioned in 3 sentences in this paper.