Index of papers in Proc. ACL 2010 that mention
  • semi-supervised
Li, Shoushan and Huang, Chu-Ren and Zhou, Guodong and Lee, Sophia Yat Mei
Abstract
In this paper, we adopt two views, personal and impersonal, and systematically employ them in both supervised and semi-supervised sentiment classification.
Abstract
On this basis, an ensemble method and a co-training algorithm are explored to employ the two views in supervised and semi-supervised sentiment classification, respectively.
Introduction
Since unlabeled data is ample and easy to collect, a successful semi-supervised sentiment classification system would greatly reduce the labor and time required for annotation.
Introduction
Therefore, given the two different views mentioned above, one promising application is to adopt them in co-training algorithms, which have been proven to be an effective semi-supervised learning strategy for incorporating unlabeled data to further improve classification performance (Zhu, 2005).
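For orientation, here is a minimal sketch of the co-training loop this excerpt alludes to (in the Blum and Mitchell style). The personal/impersonal split into two feature views is the paper's idea, but the logistic-regression learners, function names, and growth schedule below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xp, Xi, y, Xp_u, Xi_u, rounds=10, grow=5):
    """Co-training over two views (here: personal / impersonal features).

    Each round, one classifier per view is trained on the labeled set;
    each then moves its `grow` most confident unlabeled examples, with
    predicted labels, into the shared labeled set.
    """
    Xp, Xi, y = np.asarray(Xp), np.asarray(Xi), np.asarray(y)
    Xp_u, Xi_u = np.asarray(Xp_u), np.asarray(Xi_u)
    pool = list(range(len(Xp_u)))           # indices still unlabeled
    cp = ci = None
    for _ in range(rounds):
        if not pool:
            break
        cp = LogisticRegression(max_iter=1000).fit(Xp, y)
        ci = LogisticRegression(max_iter=1000).fit(Xi, y)
        moved = set()
        for clf, view in ((cp, Xp_u), (ci, Xi_u)):
            probs = clf.predict_proba(view[pool])
            for j in np.argsort(-probs.max(axis=1))[:grow]:
                idx = pool[j]
                if idx in moved:
                    continue
                moved.add(idx)
                # both views of the newly labeled doc join the labeled set
                Xp = np.vstack([Xp, Xp_u[idx:idx + 1]])
                Xi = np.vstack([Xi, Xi_u[idx:idx + 1]])
                y = np.append(y, clf.classes_[probs[j].argmax()])
        pool = [i for i in pool if i not in moved]
    return cp, ci          # combine at test time, e.g., by averaging probas
```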
Introduction
In this paper, we systematically employ personal/impersonal views in supervised and semi-supervised sentiment classification.
Related Work
Generally, document-level sentiment classification methods can be categorized into three types: unsupervised, supervised, and semi-supervised.
Related Work
Semi-supervised methods combine unlabeled data with labeled training data (often small in scale) to improve the models.
Related Work
Compared to supervised and unsupervised methods, semi-supervised methods for sentiment classification are relatively new and have received much less study.
semi-supervised is mentioned in 30 sentences in this paper.
Titov, Ivan and Kozhevnikov, Mikhail
Empirical Evaluation
In this section, we consider the semi-supervised setup and present an evaluation of our approach on the problem of aligning weather forecast reports to the formal representation of weather.
Empirical Evaluation
Only then, in the semi-supervised learning scenarios, did we add unlabeled data and run 5 additional iterations of EM.
Empirical Evaluation
We compare our approach (Semi-superv, non-contr) with two baselines: the basic supervised training on 100 labeled forecasts (Supervised BL) and with the semi-supervised training which disregards the non-contradiction relations (Semi-superv BL).
Inference with NonContradictory Documents
However, in a semi-supervised or unsupervised case variational techniques, such as the EM algorithm (Dempster et al., 1977), are often used to estimate the model.
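As a concrete instance of the semi-supervised EM alternation this excerpt mentions, here is a sketch on a much simpler model than the paper's alignment model: multinomial Naive Bayes over word counts, in the style of Nigam et al. (2000). All names, the smoothing constant, and the iteration count are assumptions.

```python
import numpy as np

def semisup_nb_em(X_l, y_l, X_u, n_classes, iters=5, alpha=0.01):
    """Semi-supervised EM for a multinomial Naive Bayes text model.

    E-step: soft-label the unlabeled documents with the current model.
    M-step: re-estimate priors and word distributions from labeled
    counts plus the soft counts.  Labeled responsibilities stay fixed.
    """
    X = np.vstack([X_l, X_u]).astype(float)     # rows = word-count vectors
    n_l = len(X_l)
    R = np.full((len(X), n_classes), 1.0 / n_classes)
    R[:n_l] = 0.0
    R[np.arange(n_l), np.asarray(y_l)] = 1.0    # labeled rows are one-hot
    for _ in range(iters):
        prior = R.sum(axis=0) / R.sum()                          # M-step
        word = R.T @ X + alpha                                   # (classes, vocab)
        word /= word.sum(axis=1, keepdims=True)
        logp = np.log(prior + 1e-12) + X[n_l:] @ np.log(word).T  # E-step
        logp -= logp.max(axis=1, keepdims=True)
        post = np.exp(logp)
        R[n_l:] = post / post.sum(axis=1, keepdims=True)
    return prior, word
```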
Introduction
Such annotated resources are scarce and expensive to create, motivating the need for unsupervised or semi-supervised techniques (Poon and Domingos, 2009).
Introduction
This compares favorably with the 69.1% shown by a semi-supervised learning approach, though, as expected, it does not reach the score of the model which, in training, observed semantic states for all 750 documents (77.7% F1).
Summary and Future Work
Our approach resulted in an improvement over the scores of both the supervised baseline and the traditional semi-supervised learning.
semi-supervised is mentioned in 11 sentences in this paper.
Turian, Joseph and Ratinov, Lev-Arie and Bengio, Yoshua
Introduction
By using unlabeled data to reduce data sparsity in the labeled training data, semi-supervised approaches improve generalization accuracy.
Introduction
Semi-supervised models such as Ando and Zhang (2005), Suzuki and Isozaki (2008), and Suzuki et al. (2009).
Introduction
It can be tricky and time-consuming to adapt an existing supervised NLP system to use these semi-supervised techniques.
Supervised evaluation tasks
This technique for turning a supervised approach into a semi-supervised one is general and task-agnostic.
Supervised evaluation tasks
We apply clustering and distributed representations to NER and chunking, which allows us to compare our semi-supervised models to those of Ando and Zhang (2005) and Suzuki and Isozaki (2008).
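The "clustering ... as features" idea in this excerpt is conventionally realized by adding prefixes of hierarchical-cluster bit strings as extra word features in the supervised tagger. A minimal sketch follows, assuming a precomputed word-to-path mapping; the function name and prefix lengths are illustrative.

```python
def cluster_features(word, cluster_path, prefixes=(4, 6, 10, 20)):
    """Turn a word's hierarchical-cluster bit string into discrete
    features for a supervised tagger (NER, chunking, ...).

    `cluster_path` maps word -> bit string from, e.g., Brown clustering
    of unlabeled text; unseen words simply contribute no features.
    """
    path = cluster_path.get(word)
    if path is None:
        return []
    return [f"cpath{p}={path[:p]}" for p in prefixes]

# e.g. cluster_features("Paris", {"Paris": "110100101"})
# -> ['cpath4=1101', 'cpath6=110100', 'cpath10=110100101', 'cpath20=110100101']
```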
Unlabeled Data
Ando and Zhang (2005) present a semi-supervised learning algorithm called alternating structure optimization (ASO).
Unlabeled Data
Suzuki and Isozaki (2008) present a semi-supervised extension of CRFs.
Unlabeled Data
(In Suzuki et al. (2009), they extend their semi-supervised approach to more general conditional models.)
semi-supervised is mentioned in 11 sentences in this paper.
Dhillon, Paramveer S. and Talukdar, Partha Pratim and Crammer, Koby
3.3 Semi-Supervised Classification
In this section, we trained the GRF classifier (see Equation 3), a graph-based semi-supervised learning (SSL) algorithm (Zhu et al., 2003), using a Gaussian kernel parameterized by A = P^T P to set edge weights.
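A compact sketch of the GRF/harmonic-function classifier cited here (Zhu et al., 2003): the labels of the unlabeled nodes solve a linear system in the graph Laplacian. The paper parameterizes the Gaussian kernel with the learned metric A = P^T P; this sketch uses plain Euclidean distance (A = I) for brevity.

```python
import numpy as np

def grf_harmonic(X_l, y_l, X_u, n_classes, sigma=1.0):
    """Harmonic-function solution of the Gaussian random field.

    Builds a fully connected graph with Gaussian-kernel edge weights,
    then solves  f_u = (L_uu)^(-1) W_ul Y_l  for the unlabeled nodes.
    """
    X = np.vstack([X_l, X_u]).astype(float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # edge weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                        # graph Laplacian
    n_l = len(X_l)
    Y_l = np.eye(n_classes)[np.asarray(y_l)]              # one-hot labels
    f_u = np.linalg.solve(L[n_l:, n_l:], W[n_l:, :n_l] @ Y_l)
    return f_u.argmax(axis=1)                             # hard labels
```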
Abstract
We initiate a study comparing the effectiveness of the transformed spaces learned by recently proposed supervised and semi-supervised metric learning algorithms to those generated by previously proposed unsupervised dimensionality reduction methods (e.g., PCA).
Abstract
Through a variety of experiments on different real-world datasets, we find IDML-IT, a semi-supervised metric learning algorithm, to be the most effective.
Conclusion
In this paper, we compared the effectiveness of the transformed spaces learned by recently proposed supervised and semi-supervised metric learning algorithms to those generated by previously proposed unsupervised dimensionality reduction methods (e.g., PCA).
Conclusion
Through a variety of experiments on different real-world NLP datasets, we demonstrated that supervised as well as semi-supervised classifiers trained on the space learned by IDML-IT consistently result in the lowest classification errors.
Introduction
Even though different supervised and semi-supervised metric learning algorithms have recently been proposed, the effectiveness of the transformed spaces learned by them in NLP
Introduction
We find IDML-IT, a semi-supervised metric learning algorithm, to be the most effective.
Metric Learning
2.3 Inference-Driven Metric Learning (IDML): Semi-Supervised
Metric Learning
Since we are focusing on the semi-supervised learning (SSL) setting with n_l labeled and n_u unlabeled instances, the idea is to automatically label the unlabeled instances using a graph-based SSL algorithm, and then include instances with low assigned label entropy (i.e., high-confidence label assignments) in the next round of metric learning.
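The alternation this excerpt describes, reduced to its control flow. Here `metric_learner` and `ssl_label` are stand-ins for the paper's METRICLEARNER and GRF components, and the entropy threshold `tau` is an assumed hyperparameter.

```python
import numpy as np

def idml_loop(X_l, y_l, X_u, metric_learner, ssl_label, rounds=3, tau=0.2):
    """Alternate metric learning with graph-based label propagation.

    `metric_learner(X, y)` returns a PSD matrix A (e.g., via ITML);
    `ssl_label(X_l, y_l, X_u, A)` returns soft labels for X_u (rows
    sum to 1), e.g., a GRF classifier with A-parameterized edge weights.
    """
    X_l, y_l, X_u = np.asarray(X_l), np.asarray(y_l), np.asarray(X_u)
    A = None
    for _ in range(rounds):
        A = metric_learner(X_l, y_l)
        if len(X_u) == 0:
            break
        P = ssl_label(X_l, y_l, X_u, A)
        ent = -(P * np.log(P + 1e-12)).sum(axis=1)   # label entropy per point
        keep = ent < tau                             # confident assignments
        if not keep.any():
            break
        X_l = np.vstack([X_l, X_u[keep]])            # absorb confident points
        y_l = np.append(y_l, P[keep].argmax(axis=1))
        X_u = X_u[~keep]
    return A
```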
semi-supervised is mentioned in 10 sentences in this paper.
Hassan, Ahmed and Radev, Dragomir R.
Abstract
The method could be used both in a semi-supervised setting where a training set of labeled words is used, and in an unsupervised setting where a handful of seeds is used to define the two polarity classes.
Abstract
It outperforms the state of the art methods in the semi-supervised setting.
Conclusions
The proposed method can be used in a semi-supervised setting where a training set of labeled words is used, and in an unsupervised setting where only a handful of seeds is used to define the two polarity classes.
Experiments
This method could be used in a semi-supervised setting where a set of labeled words is used and the system learns from these labeled nodes and from other unlabeled nodes.
Introduction
Previous work on identifying the semantic orientation of words has addressed the problem as both a semi-supervised (Takamura et al., 2005) and an unsupervised (Turney and Littman, 2003) learning problem.
Introduction
In the semi-supervised setting, a training set of labeled words is used.
Introduction
The proposed method could be used both in a semi-supervised and in an unsupervised setting.
Word Polarity
This view is closely related to the partially labeled classification with random walks approach in (Szummer and Jaakkola, 2002) and the semi-supervised learning using harmonic functions approach in (Zhu et al., 2003).
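A Monte Carlo rendering of the random-walk view in these excerpts: a word leans positive if walks started from it tend to hit the positive seed set sooner than the negative one. The graph structure and all parameters are assumptions, and the paper's exact estimation procedure may differ.

```python
import random

def word_polarity(word, graph, pos_seeds, neg_seeds, walks=200, max_len=20):
    """Classify a word as positive/negative by comparing sampled mean
    hitting times from it to the two seed sets on a word-relatedness
    graph (`graph`: word -> list of neighboring words)."""
    def mean_hit(targets):
        total = 0
        for _ in range(walks):
            node, steps = word, 0
            while node not in targets and steps < max_len:
                node = random.choice(graph[node])    # uniform random step
                steps += 1
            total += steps
        return total / walks
    if mean_hit(set(pos_seeds)) < mean_hit(set(neg_seeds)):
        return "positive"
    return "negative"
```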
semi-supervised is mentioned in 9 sentences in this paper.
Croce, Danilo and Giannone, Cristina and Annesi, Paolo and Basili, Roberto
Conclusions
In this paper, a distributional approach for acquiring a semi-supervised model of argument classification (AC) preferences has been proposed.
Conclusions
Moreover, dimensionality reduction methods alternative to LSA, as currently studied in semi-supervised spectral learning (Johnson and Zhang, 2008), will be investigated experimentally.
Introduction
Finally, the application of semi-supervised learning is attempted to increase the lexical expressiveness of the model, e.g.
Introduction
A semi-supervised statistical model exploiting useful lexical information from unlabeled corpora is proposed.
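The core LSA-style acquisition step these excerpts refer to, as a sketch: a truncated SVD of a word-context co-occurrence matrix built from unlabeled text yields low-dimensional lexical vectors usable as classification features. Matrix construction, weighting, and the dimensionality k are assumptions.

```python
import numpy as np

def lsa_vectors(cooc, k=100):
    """Truncated SVD of a word-context co-occurrence matrix (rows =
    words); the k-dimensional rows of U * s serve as lexical vectors
    for, e.g., argument classification features."""
    U, s, _ = np.linalg.svd(np.asarray(cooc, dtype=float), full_matrices=False)
    return U[:, :k] * s[:k]
```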
Related Work
Accordingly, a semi-supervised approach for reducing the costs of the manual annotation effort is proposed.
Related Work
It embodies the idea that a multitask learning architecture coupled with semi-supervised learning can be effectively applied even to complex linguistic tasks such as SRL.
semi-supervised is mentioned in 6 sentences in this paper.
Hoffmann, Raphael and Zhang, Congle and Weld, Daniel S.
Extraction with Lexicons
Then Section 4.2 presents our semi-supervised algorithm for learning semantic lexicons from these lists.
Extraction with Lexicons
4.2 Semi-Supervised Learning of Lexicons
Introduction
When learning an extractor for relation R, LUCHS extracts seed phrases from R’s training data and uses a semi-supervised learning algorithm to create several relation-specific lexicons at different points on a precision-recall spectrum.
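A deliberately rough sketch of the seed-driven lexicon growth this excerpt describes. The overlap-count acceptance rule is an assumption; LUCHS's actual learner is richer and produces several lexicons at different precision/recall trade-offs.

```python
def grow_lexicon(seeds, lists, min_overlap=2):
    """Absorb every extracted list (e.g., an HTML list) that shares at
    least `min_overlap` phrases with the current lexicon; single-pass
    variant for illustration."""
    lexicon = set(seeds)
    for items in lists:
        if len(lexicon & set(items)) >= min_overlap:
            lexicon |= set(items)
    return lexicon
```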
semi-supervised is mentioned in 3 sentences in this paper.
Huang, Fei and Yates, Alexander
Introduction
Fürstenau and Lapata (2009b; 2009a) use semi-supervised techniques to automatically annotate data for previously unseen predicates with semantic role information.
Introduction
(2008) use deep learning techniques based on semi-supervised embeddings to improve an SRL system, though their tests are on in-domain data.
Introduction
Unsupervised SRL systems (Swier and Stevenson, 2004; Grenager and Manning, 2006; Abend et al., 2009) can naturally be ported to new domains with little trouble, but their accuracy thus far falls short of state-of-the-art supervised and semi-supervised systems.
semi-supervised is mentioned in 3 sentences in this paper.
Ravi, Sujith and Baldridge, Jason and Knight, Kevin
Data
We use the standard splits of the data used in semi-supervised tagging experiments (e.g., Banko and Moore (2004)): sections 0-18 for training, 19-21 for development, and 22-24 for test.
Experiments
The HMM when using full supervision obtains 87.6% accuracy (Baldridge, 2008), so the accuracy of 63.8% achieved by EMGI+IPGI nearly halves the gap between the supervised model and the 45.6% obtained by the basic EM semi-supervised model.
Introduction
This provides a much more challenging starting point for the semi-supervised methods typically applied to the task.
semi-supervised is mentioned in 3 sentences in this paper.