Abstract | This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. |
Conclusions | In this paper, we presented a multimodal approach for utterance-level sentiment classification. |
Conclusions | Table 4: Video-level sentiment classification with linguistic, acoustic, and visual features. |
Discussion | The experimental results show that sentiment classification can be effectively performed on multimodal datastreams. |
Discussion | Other informative features for sentiment classification are the voice probability, representing the energy in speech, the combined visual features that represent an angry face, and two of the cepstral coefficients. |
Discussion | To understand the role played by the size of the video-segments considered in the sentiment classification experiments, as well as the potential effect of a speaker-independence assumption, we also run a set of experiments where we use full videos for the classification. |
Experiments and Results | We run our sentiment classification experiments on the MOUD dataset introduced earlier. |
Experiments and Results | Table 2: Utterance-level sentiment classification with linguistic, acoustic, and visual features. |
Experiments and Results | Table 2 shows the results of the utterance-level sentiment classification experiments. |
Introduction | Our experiments and results on multimodal sentiment classification are presented in Section 5, with a detailed discussion and analysis in Section 6. |
Multimodal Sentiment Analysis | These simple weighted unigram features have been successfully used in the past to build sentiment classifiers on text, and in conjunction with Support Vector Machines (SVM) have been shown to lead to state-of-the-art performance (Maas et al., 2011). |
Abstract | Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. |
Clustering for Cross Lingual Sentiment Analysis | The language whose annotated data is used for training is called the source language (S), while the language whose documents are to be sentiment classified is referred to as the target language (T). |
Clustering for Cross Lingual Sentiment Analysis | Algorithm 1: Projection based on sense.
Input: Polarity-labeled data in source language (S) and data in target language (T) to be labeled.
Output: Classified documents.
1: Sense-mark the polarity-labeled data from S
2: Project the sense-marked corpora from S to T using a Multidict
3: Model the sentiment classifier using the data obtained in step 2
4: Sense-mark the unlabelled data from T
5: Test the sentiment classifier on the data obtained in step 4 using the model obtained in step 3 |
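The projection pipeline of Algorithm 1 can be sketched end to end as follows; the toy sense inventories, the Multidict entries, and the count-based classifier are all invented stand-ins for illustration, not the paper's actual resources.

```python
# Hypothetical sketch of the sense-projection pipeline (Algorithm 1).
# The toy sense inventories, the Multidict, and the count-based
# classifier are invented stand-ins, not the paper's actual resources.
from collections import Counter

# Steps 1 and 4: per-language sense inventories (word -> synset identifier).
SENSE_S = {"bueno": "syn_good", "malo": "syn_bad", "pelicula": "syn_movie"}
SENSE_T = {"good": "syn_positive", "bad": "syn_negative", "film": "syn_film"}
# Step 2: Multidict linking source synsets to target synsets.
MULTIDICT = {"syn_good": "syn_positive", "syn_bad": "syn_negative",
             "syn_movie": "syn_film"}

def sense_mark(tokens, inventory):
    return [inventory[t] for t in tokens if t in inventory]

def project(synsets):
    return [MULTIDICT[s] for s in synsets if s in MULTIDICT]

def train(labeled_source):
    # Step 3: count projected synsets per polarity class.
    model = {"pos": Counter(), "neg": Counter()}
    for tokens, label in labeled_source:
        model[label].update(project(sense_mark(tokens, SENSE_S)))
    return model

def classify(model, target_tokens):
    # Step 5: sense-mark target text and score it against the model.
    synsets = sense_mark(target_tokens, SENSE_T)
    pos = sum(model["pos"][s] for s in synsets)
    neg = sum(model["neg"][s] for s in synsets)
    return "pos" if pos >= neg else "neg"

model = train([(["bueno", "pelicula"], "pos"),
               (["malo", "pelicula"], "neg")])
print(classify(model, ["good", "film"]))  # -> pos
print(classify(model, ["bad", "film"]))   # -> neg
```

The key idea the sketch preserves is that synset identifiers, once linked across languages, let a classifier trained on projected source data be applied directly to sense-marked target data.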
Clustering for Sentiment Analysis | (2011) showed that WordNet synsets can act as good features for document level sentiment classification. |
Clustering for Sentiment Analysis | In this study, synset identifiers are extracted from manually/automatically sense annotated corpora and used as features for creating sentiment classifiers. |
Clustering for Sentiment Analysis | of sentiment classification, cluster identifiers |
Discussions | In contrast, the sentiment classifiers using sense (PS) or direct cluster linking (DCL) are not very effective. |
Experimental Setup | SVM was used since it is known to perform well for sentiment classification (Pang et al., 2002). |
Introduction | Word clustering is a powerful mechanism to “transfer” a sentiment classifier from one language to another. |
Abstract | We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small. |
Future Work | While “semantic smoothing” obtained from introducing an external embedding helps to improve performance in the sentiment classification task, the method does not help to re-embed words that do not appear in the training set to begin with. |
Related Work | The most relevant to our contribution is the work by Maas et al. (2011), where word vectors are learned specifically for sentiment classification. |
Results and Discussion | Source embeddings: We find C&W embeddings to perform best for the task of sentiment classification . |
Problem Definition | 4.1.3 Sentiment Classification Data |
Problem Definition | In the domain of Sentiment Classification, we tested on the Amazon dataset from (Blitzer et al., 2007). |
Problem Definition | In the Amazon Sentiment Classification data set, the task is to determine whether a review is positive or negative based solely on the reviewer’s submitted text. |