Introduction | After new topic words are extracted in the movie domain, we can apply the same syntactic pattern or other syntactic patterns to extract new sentiment and topic words iteratively. |
Introduction | Bootstrapping is the process of improving the performance of a weak classifier by iteratively adding training data and retraining the classifier. |
Introduction | More specifically, bootstrapping starts with a small set of labeled “seeds”, and iteratively adds unlabeled examples that the current classifier labels with high confidence. |
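A minimal self-training sketch of the bootstrapping loop described above. The `train`, `predict`, `threshold`, and `rounds` names are illustrative placeholders, not any particular paper's implementation:

```python
def bootstrap(seeds, unlabeled, train, predict, threshold=0.9, rounds=5):
    """Self-training bootstrap: start from labeled seeds, then repeatedly
    retrain and absorb unlabeled examples the model labels confidently."""
    labeled = list(seeds)                      # (example, label) pairs
    pool = list(unlabeled)
    for _ in range(rounds):
        model = train(labeled)                 # retrain on current labeled set
        confident, rest = [], []
        for x in pool:
            label, score = predict(model, x)
            (confident if score >= threshold else rest).append((x, label))
        if not confident:                      # nothing confident: stop early
            break
        labeled.extend(confident)              # add newly labeled data
        pool = [x for x, _ in rest]
    return train(labeled)
```

In practice the confidence threshold and stopping criterion matter a great deal, since low-precision additions compound over iterations.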
Mining Dependency Trees | Next, the algorithm iteratively enumerates the subtrees occurring in the input data in increasing size order, associating each subtree t with two occurrence lists: the list of input trees in which t occurs and for which generation was successful (PASS(t)), and the list of input trees in which t occurs and for which generation failed (FAIL(t)). |
Mining Trees | The join and extension operations used to iteratively enumerate subtrees are depicted in Figure 2 and can be defined as follows. |
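A toy sketch of this enumeration, assuming input trees share a common node labeling so a subtree is just a set of edges (real tree mining matches subtrees up to isomorphism, and the join operation is omitted here; only one-edge extension is shown):

```python
def connected_subtrees(edges, max_size=3):
    """Enumerate connected subtrees (as frozensets of edges) in
    increasing size order, growing each frontier by one-edge extension."""
    frontier = {frozenset([e]) for e in edges}
    size = 1
    while frontier and size <= max_size:
        yield from sorted(frontier, key=sorted)   # deterministic order
        nxt = set()
        for sub in frontier:
            nodes = {n for e in sub for n in e}
            for e in edges:                       # extend by one adjacent edge
                if e not in sub and (e[0] in nodes or e[1] in nodes):
                    nxt.add(sub | {e})
        frontier, size = nxt, size + 1

def mine(inputs, max_size=3):
    """inputs: list of (edges, passed) pairs. Returns, per subtree t, the
    pair of occurrence lists mirroring PASS(t) and FAIL(t) in the text."""
    tables = {}
    for i, (edges, passed) in enumerate(inputs):
        for sub in connected_subtrees(edges, max_size):
            p, f = tables.setdefault(sub, ([], []))
            (p if passed else f).append(i)
    return tables
```

Subtrees whose FAIL list is large relative to their PASS list are then candidates for explaining generation failures.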
Related Work | The approach was later extended and refined in (Sagot and de la Clergerie, 2006) and (de Kok et al., 2009). (Sagot and de la Clergerie, 2006) defines a suspicion rate for n-grams that takes into account the number of occurrences of a given word form, and iteratively defines the suspicion rate of each word form in a sentence based on the suspicion rate of this word form in the corpus. (de Kok et al., 2009) combines the iterative error mining proposed by (Sagot and de la Clergerie, 2006) with expansion of forms to n-grams of words and POS tags of arbitrary length. |
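A simplified sketch of iterative error mining in the spirit of the suspicion-rate idea above: within each failed sentence, blame is shared among word forms in proportion to their current corpus-level suspicion, and a form's suspicion is the mean of its per-occurrence blame. The exact update rules of the cited papers differ; this is an illustrative variant:

```python
def error_mining(failed, passed, iters=10):
    """failed/passed: lists of sentences (lists of word forms).
    Returns a suspicion rate per form, refined iteratively."""
    forms = {f for s in failed + passed for f in s}
    # initialize with the fraction of a form's occurrences in failed input
    occ_fail = {f: sum(s.count(f) for s in failed) for f in forms}
    occ_all = {f: occ_fail[f] + sum(s.count(f) for s in passed) for f in forms}
    susp = {f: occ_fail[f] / occ_all[f] for f in forms}
    for _ in range(iters):
        local = {f: [] for f in forms}
        for s in failed:
            z = sum(susp[f] for f in s) or 1.0
            for f in s:
                local[f].append(susp[f] / z)   # share of blame in this sentence
        for s in passed:
            for f in s:
                local[f].append(0.0)           # sentence succeeded: no blame
        susp = {f: sum(v) / len(v) for f, v in local.items()}
    return susp
```

The fixed point concentrates suspicion on forms that co-occur with failures regardless of context, which is the intuition behind suspicion-rate error mining.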
Abstract | For training, we derive growth transformations for phrase and lexicon translation probabilities to iteratively improve the objective. |
Abstract | In this section, we derive GT formulas for iteratively updating the parameters so as to optimize objective (9). |
Abstract | The Baum-Eagon inequality (Baum and Eagon, 1967) gives the GT formula to iteratively maximize positive-coefficient polynomials of random variables. |
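For reference, the generic growth transformation guaranteed by the Baum-Eagon inequality (the paper's specific objective (9) is not shown in this excerpt): for a polynomial $P(x)$ with nonnegative coefficients in variables $x_{ij} \ge 0$ subject to $\sum_j x_{ij} = 1$, the update

```latex
T(x)_{ij} \;=\; \frac{x_{ij}\,\dfrac{\partial P}{\partial x_{ij}}(x)}
                     {\sum_k x_{ik}\,\dfrac{\partial P}{\partial x_{ik}}(x)}
```

satisfies $P(T(x)) \ge P(x)$, so repeatedly applying $T$ monotonically improves the objective while keeping each row of parameters a valid probability distribution.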