Abstract | We evaluate our method on two tasks: cross-domain part-of-speech tagging and cross-domain sentiment classification.
Distribution Prediction | Bigram features capture negations more accurately than unigrams, and have been found to be useful for sentiment classification tasks. |
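The bigram-vs-unigram point can be made concrete with a minimal sketch (the review text below is invented for illustration): a unigram extractor records only `good`, with no trace of the negation, while a bigram extractor keeps the negated phrase `not good` as a single feature that a classifier can weight negatively.

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

review = "the movie was not good".split()

unigrams = ngrams(review, 1)
bigrams = ngrams(review, 2)

# The unigram view contains "good" with no record of the negation...
assert ("good",) in unigrams
# ...while the bigram view keeps "not good" as one feature.
assert ("not", "good") in bigrams
```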
Distribution Prediction | As we go on to show in Section 6, this enables us to use the same distribution prediction method for both POS tagging and sentiment classification.
Domain Adaptation | We consider two DA tasks: (a) cross-domain POS tagging (Section 4.1), and (b) cross-domain sentiment classification (Section 4.2). |
Domain Adaptation | 4.2 Cross-Domain Sentiment Classification |
Domain Adaptation | Unlike in POS tagging, where we must individually tag each word in a target domain test sentence, in sentiment classification we must classify the sentiment for the entire review. |
Introduction | For example, unsupervised cross-domain sentiment classification (Blitzer et al., 2007; Aue and Gamon, 2005) involves using sentiment-labeled user reviews from the source domain, and unlabeled reviews from both the source and the target domains to learn a sentiment classifier for the target domain. |
Introduction | Domain adaptation (DA) of sentiment classification becomes extremely challenging when the distributions of words in the source and the target domains are very different, because the features learnt from the source domain labeled reviews might not appear in the target domain reviews that must be classified. |
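The feature-mismatch problem can be sketched with hypothetical miniature corpora (the domains and words are invented for illustration, not data from the paper): the words that signal sentiment in one domain rarely occur in another, so a source-trained classifier finds almost none of its learnt features in target reviews.

```python
# Toy source domain (books) and target domain (appliances):
source_reviews = ["a riveting plot", "a dull predictable story"]
target_reviews = ["sturdy and well built", "flimsy cheap plastic"]

source_vocab = {w for r in source_reviews for w in r.split()}
target_vocab = {w for r in target_reviews for w in r.split()}

# Features learnt on the source that also appear in the target:
shared = source_vocab & target_vocab
coverage = len(shared) / len(target_vocab)
print(f"shared features: {sorted(shared)}, target coverage: {coverage:.0%}")
```

Here the overlap is empty, so every target-review feature is unseen at training time, which is exactly the regime where distribution prediction across domains is needed.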
Introduction | Using the learnt distribution prediction model, we propose a method to learn a cross-domain sentiment classifier.
Related Work | Prior knowledge of the sentiment of words, such as sentiment lexicons, has been incorporated into cross-domain sentiment classification.
Related Work | Although incorporating prior sentiment knowledge is a promising technique for improving accuracy in cross-domain sentiment classification, it is complementary to our task of distribution prediction across domains.
Abstract | In this paper, we present a method that learns word embeddings for Twitter sentiment classification.
Abstract | Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset from SemEval 2013 show that (1) the SSWE feature performs comparably with the handcrafted features in the top-performing system; (2) performance is further improved by concatenating SSWE with the existing feature set.
Introduction | Twitter sentiment classification has attracted increasing research interest in recent years (Jiang et al., 2011; Hu et al., 2013).
Introduction | (2013) build the top-performing system in the Twitter sentiment classification track of SemEval 2013 (Nakov et al., 2013), using diverse sentiment lexicons and a variety of handcrafted features.
Introduction | For the task of sentiment classification, an effective feature-learning method is to compose the representation of a sentence (or document) from the representations of the words or phrases it contains (Socher et al., 2013b; Yessenalina and Cardie, 2011).
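The compositional idea can be sketched minimally as averaging word vectors. The 2-d embedding values below are invented for illustration; real systems use learned, higher-dimensional vectors and richer composition functions such as the recursive networks of Socher et al.

```python
# Toy 2-d "embeddings" (illustrative values, not learned):
embeddings = {
    "great":    [0.9, 0.1],
    "terrible": [-0.8, 0.2],
    "plot":     [0.0, 0.5],
}

def compose(sentence):
    """Average the word vectors of a sentence -- the simplest way to
    build a sentence representation from word representations.
    (Assumes at least one in-vocabulary word.)"""
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    dim = len(next(iter(embeddings.values())))
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

print(compose("great plot"))  # the mean of the two word vectors
```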
Related Work | In this section, we present a brief review of related work from two perspectives: Twitter sentiment classification and learning continuous representations for sentiment classification.
Related Work | 2.1 Twitter Sentiment Classification |
Approach | We formulate the sentence-level sentiment classification task as a sequence labeling problem. |
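As a rough sketch of the sequence-labeling formulation (toy scores, not the paper's learned CRF weights): each sentence in a review gets a per-label score, a transition score rewards adjacent sentences agreeing (discourse coherence), and Viterbi decoding picks the best label sequence for the whole review.

```python
LABELS = ["pos", "neg"]

def viterbi(emission_scores, transition):
    """Best label sequence under per-sentence scores plus pairwise transition scores."""
    best = [dict(emission_scores[0])]  # best path score ending in each label
    back = [{}]
    for t in range(1, len(emission_scores)):
        best.append({})
        back.append({})
        for y in LABELS:
            prev = max(LABELS, key=lambda p: best[t - 1][p] + transition[(p, y)])
            best[t][y] = best[t - 1][prev] + transition[(prev, y)] + emission_scores[t][y]
            back[t][y] = prev
    y = max(LABELS, key=lambda l: best[-1][l])
    path = [y]
    for t in range(len(emission_scores) - 1, 0, -1):
        y = back[t][y]
        path.append(y)
    return path[::-1]

# Per-sentence scores for a 3-sentence review; the middle sentence is
# ambiguous on its own, and the agreement prior pulls it toward its neighbours.
emissions = [{"pos": 2.0, "neg": 0.0},
             {"pos": 0.4, "neg": 0.5},   # weakly negative in isolation
             {"pos": 2.0, "neg": 0.0}]
transitions = {(a, b): (1.0 if a == b else 0.0) for a in LABELS for b in LABELS}

print(viterbi(emissions, transitions))  # prints ['pos', 'pos', 'pos']
```

This is the sense in which neighbouring sentences constrain each other once the task is posed as sequence labeling rather than independent per-sentence classification.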
Approach | In this work, we apply PR in the context of CRFs for sentence-level sentiment classification.
Experiments | We experimented with two product review datasets for sentence-level sentiment classification: the Customer Review (CR) data (Hu and Liu, 2004), which contains 638 reviews of 14 products such as cameras and cell phones, and the Multi-domain Amazon (MD) data from the test set of Täckström and McDonald (2011a), which contains 294 reviews from 5 different domains.
Introduction | In this paper, we focus on the task of sentence-level sentiment classification in online reviews. |
Introduction | Semi-supervised techniques have been proposed for sentence-level sentiment classification (Täckström and McDonald, 2011a; Qu et al., 2012).
Introduction | In this paper, we propose a sentence-level sentiment classification method that can (1) incorporate rich discourse information at both local and global levels; (2) encode discourse knowledge as soft constraints during learning; (3) make use of unlabeled data to enhance learning. |
Related Work | In this paper, we focus on the study of sentence-level sentiment classification.
Related Work | Compared to the existing work on semi-supervised learning for sentence-level sentiment classification (Täckström and McDonald, 2011a; Täckström and McDonald, 2011b; Qu et al., 2012), our work does not rely on a large amount of coarse-grained (document-level) labeled data; instead, distant supervision comes mainly from linguistically motivated constraints.
Abstract | Our sentiment classification model achieves approximately 1% greater accuracy than a state-of-the-art approach based on elementary discourse units.
Experiments | We train 15 sentiment classification models using all basic features and their combinations. |
Experiments | To this end, we train and compare sentiment classification models using three configurations. |
Experiments | This is important because, to compare only the lexicons' impact on sentiment classification, we need to avoid the effect of other factors, such as syntax, transition cues, and so on.
Framework | Knowledge from this initial training set is not sufficient to build an accurate sentiment classification model or to generate a domain-specific sentiment lexicon. |
Framework | for training a CRF-based sentiment classification model. |
Introduction | With respect to sentiment classification, Pang et al.
Introduction | (1) Instead of using sentences, ReNew uses segments as the basic units for sentiment classification.
Introduction | Additionally, our sentiment classification model achieves approximately 1% greater accuracy than a state-of-the-art approach based on elementary discourse units (Lazaridou et al., 2013). |
Abstract | We propose Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification . |
Conclusion | We propose the Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification.
Experiments | To the best of our knowledge, this is the largest manually annotated dataset for target-dependent Twitter sentiment classification.
Experiments | Table 1: Evaluation results on the target-dependent Twitter sentiment classification dataset.
Introduction | For target-dependent sentiment classification , the manual evaluation of Jiang et al. |
Introduction | The neural models use distributed representations (Hinton, 1986; Rumelhart et al., 1986; Bengio et al., 2003) to automatically learn features for target-dependent sentiment classification.
Conclusion | Experiments also demonstrate that the inclusion of new sentiment words clearly benefits sentiment classification.
Experiment | In this section, we conduct the following experiments: first, we compare our method to several baselines and perform parameter tuning with extensive experiments; second, we classify the polarity of new sentiment words using two methods; third, we demonstrate how new sentiment words benefit sentiment classification.
Experiment | 4.6 Application of New Sentiment Words to Sentiment Classification |
Experiment | In this section, we examine whether the inclusion of new sentiment words benefits sentiment classification.
Introduction | • We investigate the problem of polarity prediction for new sentiment words and demonstrate that including new sentiment words benefits sentiment classification tasks.
Experiments | Sentiment classification.
Experiments | (a) Sentiment classification |
Representations and models | This would strongly bias the FVEC sentiment classifier to assign a positive label to the comment. |