Bootstrapping | Abney (2004) defines useful notation for semi-supervised learning, shown in table 1. |
Existing algorithms 3.1 Yarowsky | Haffari and Sarkar (2007) suggest a bipartite graph framework for semi-supervised learning based on their analysis of Y-1/DL-1-VS and objective (2).
Existing algorithms 3.1 Yarowsky | 3.7 Semi-supervised learning algorithm of Subramanya et al. |
Existing algorithms 3.1 Yarowsky | Subramanya et al. (2010) give a semi-supervised algorithm for part-of-speech tagging.
Graph propagation | Note that (3) is independent of their specific graph structure, distributions, and semi-supervised learning algorithm. |
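Objectives of this kind are typically optimized by iterative label propagation over the graph. The following is a minimal sketch of that general idea, not the authors' specific objective (3): the graph encoding, uniform-normalized update rule, and fixed iteration count are all illustrative assumptions.

```python
# Minimal label-propagation sketch: each unlabelled node's label
# distribution is repeatedly replaced by the weighted average of its
# neighbours' distributions, while labelled (seed) nodes stay fixed.
# Graph encoding and update rule are illustrative assumptions.

def propagate(weights, labels, seeds, iterations=30):
    """weights: {node: {neighbour: weight}};
    labels: {node: {label: prob}};
    seeds: set of nodes whose distributions are held fixed."""
    for _ in range(iterations):
        new = {}
        for v, nbrs in weights.items():
            if v in seeds:
                new[v] = labels[v]  # seed distributions never change
                continue
            total = sum(nbrs.values())
            dist = {}
            for u, w in nbrs.items():
                for lab, p in labels[u].items():
                    dist[lab] = dist.get(lab, 0.0) + w * p / total
            new[v] = dist
        labels = new
    return labels
```

Under this update, label mass flows outward from the seeds until the unlabelled nodes' distributions stabilize; real implementations usually iterate to a convergence tolerance rather than a fixed count.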
Introduction | In this paper, we are concerned with a case of semi-supervised learning that is close to unsupervised learning, in that the labelled and unlabelled data points are from the same domain and only a small set of seed rules is used to derive the labelled points. |
Introduction | In contrast, typical semi-supervised learning deals with a large number of labelled points, while a domain adaptation task uses unlabelled points drawn from a new domain.
Conclusion and Future Work | The features and weights are tuned with an iterative semi-supervised method. |
Experiments and Results | To perform consensus-based re-ranking, we first use the baseline decoder to generate the n-best list for each sentence of the development and test data, then build a graph from the n-best lists and training data as described in section 5.1, and finally perform semi-supervised training as described in section 4.3.
Features and Training | Algorithm 1 Semi-Supervised Learning |
Features and Training | Algorithm 1 outlines our semi-supervised method for such alternative training. |
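Iterative semi-supervised training of this general shape alternates between fitting a model on labelled data and labelling new data with it. The sketch below is a generic self-training loop, not the paper's Algorithm 1; the classifier interface, confidence threshold, and round limit are all assumptions for illustration.

```python
# Generic self-training loop: train on the labelled set, label the
# unlabelled pool, promote confident predictions into the labelled set,
# and repeat until nothing new is confident enough.
# train_fn/predict_fn interfaces and the threshold are illustrative.

def self_train(train_fn, predict_fn, labelled, unlabelled,
               threshold=0.9, max_rounds=10):
    """labelled: list of (x, y); unlabelled: list of x.
    train_fn(labelled) -> model; predict_fn(model, x) -> (y, confidence)."""
    for _ in range(max_rounds):
        model = train_fn(labelled)
        confident, rest = [], []
        for x in unlabelled:
            y, conf = predict_fn(model, x)
            (confident if conf >= threshold else rest).append((x, y))
        if not confident:
            break  # no new confident labels; stop early
        labelled = labelled + confident
        unlabelled = [x for x, _ in rest]
    return train_fn(labelled)
```

The key design choice is the promotion criterion: a threshold that is too low lets early mistakes reinforce themselves in later rounds, which is why cautious variants promote only a few items per round.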
Graph-based Translation Consensus | Before elaborating on how the graph model of consensus is constructed for both a decoder and N-best output re-ranking in section 5, we will describe how the consensus features and their feature weights can be trained in a semi-supervised way, in section 4.
Introduction | Alexandrescu and Kirchhoff (2009) proposed a graph-based semi-supervised model to re-rank n-best translation output. |
Experiments | Suzuki2009 (Suzuki et al., 2009) reported the best published result by combining a Semi-supervised Structured Conditional Model (Suzuki and Isozaki, 2008) with the method of Koo et al. (2008).
Experiments | G denotes the supervised graph-based parsers, S denotes the graph-based parsers with semi-supervised methods, and D denotes our new parsers.
Related work | Suzuki et al. (2009) presented a semi-supervised learning approach.
Related work | They extended the Semi-supervised Structured Conditional Model (SS-SCM) (Suzuki and Isozaki, 2008) to the dependency parsing problem and combined their method with the approach of Koo et al. (2008).
Introduction | (2009) proposed a rule-based semi-supervised learning method for lexicon extraction.
Introduction | Semi-Supervised Method (Semi): we implement the double propagation model proposed by Qiu et al. (2009).
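Double propagation grows a seed opinion lexicon by alternately extracting targets linked to known opinion words and opinion words linked to known targets through syntactic relations. The sketch below is a simplified illustration, not Qiu et al.'s exact rule set: the (word, relation, word) triples standing in for dependency parses and the single "amod" rule are assumptions.

```python
# Simplified double-propagation sketch: starting from seed opinion words,
# alternately add (a) targets syntactically linked to known opinion words
# and (b) opinion words linked to known targets, until nothing new appears.
# The triple format and the single modifier rule are illustrative.

def double_propagation(relations, seed_opinions):
    """relations: iterable of (opinion_candidate, relation, target_candidate)."""
    opinions, targets = set(seed_opinions), set()
    changed = True
    while changed:
        changed = False
        for w, rel, t in relations:
            if rel != "amod":  # only use adjectival-modifier links here
                continue
            if w in opinions and t not in targets:
                targets.add(t)
                changed = True
            if t in targets and w not in opinions:
                opinions.add(w)
                changed = True
    return opinions, targets
```

Starting from the seed {"great"} and a link great→battery, the loop first adds "battery" as a target, and a link poor→battery then adds "poor" as a new opinion word, showing how the two lexicons bootstrap each other.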
Introduction | The relational bootstrapping method performs better than the unsupervised method, TrAdaBoost, and the cross-domain CRF algorithm, and achieves results comparable to the semi-supervised method.
Introduction | Semi-Supervised Modeling |
Introduction | With seeds, our models are thus semi-supervised and need a different formulation. |
Related Work | Lu and Zhai (2008) proposed a semi-supervised model.