Introduction | It is well known that sentiment classification is highly domain-specific (Blitzer et al., 2007), so eliminating its dependence on large-scale labeled data is critical for its wide application.
Related Work | Supervised methods consider sentiment classification as a standard classification problem in which labeled data in a domain are used to train a domain-specific classifier. |
Unsupervised Mining of Personal and Impersonal Views | The co-training algorithm is a semi-supervised learning approach that starts from a small set of labeled data and bootstraps additional labeled examples from the unlabeled data (Blum and Mitchell, 1998).
Unsupervised Mining of Personal and Impersonal Views | Input: the labeled data L containing personal sentence set S_L-personal and impersonal sentence set S_L-impersonal, and the unlabeled data U containing personal sentence set S_U-personal and impersonal sentence set S_U-impersonal. Output: new labeled data L. Procedure:
Cross-Language Structural Correspondence Learning | The confidence, however, can only be determined for the source-language word w_S since the setting gives us access to labeled data from the source language S only.
Related Work | In the basic domain adaptation setting we are given labeled data from the source domain and unlabeled data from the target domain, and the goal is to train a classifier for the target domain. |
Related Work | Beyond this setting one can further distinguish whether a small amount of labeled data from the target domain is available (Daumé, 2007; Finkel and Manning, 2009) or not (Blitzer et al., 2006; Jiang and Zhai, 2007).
Related Work | (2007) apply structural learning to image classification in settings where little labeled data is given. |