Conclusions | In this paper we have presented WiBi, an automatic 3-phase approach to the construction of a bitaxonomy for the English Wikipedia, i.e., a full-fledged, integrated page and category taxonomy: first, using a set of high-precision linkers, the page taxonomy is populated; next, a fixed-point algorithm populates the category taxonomy while iteratively enriching the page taxonomy; finally, the category taxonomy undergoes structural refinements. |
Phase 1: Inducing the Page Taxonomy | Finally, to capture multiple hypernyms, we iteratively follow the conj_and and conj_or relations starting from the initially extracted hypernym. |
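The conjunction-following step above can be sketched as a small graph traversal. The adjacency-map representation of the dependency graph and the function name below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of multi-hypernym extraction: starting from the first extracted
# hypernym, transitively follow conj_and / conj_or dependency edges.
# Assumed input format: {head: [(relation, dependent), ...]}.

def collect_hypernyms(dep_graph, first_hypernym):
    """Collect every noun conjoined with the initially extracted hypernym."""
    hypernyms = [first_hypernym]
    frontier = [first_hypernym]
    seen = {first_hypernym}
    while frontier:
        head = frontier.pop()
        for rel, dep in dep_graph.get(head, []):
            if rel in ("conj_and", "conj_or") and dep not in seen:
                seen.add(dep)
                hypernyms.append(dep)
                frontier.append(dep)
    return hypernyms

# "... is a singer, songwriter and actress"
graph = {"singer": [("conj_and", "songwriter")],
         "songwriter": [("conj_and", "actress")]}
print(collect_hypernyms(graph, "singer"))  # ['singer', 'songwriter', 'actress']
```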
Phase 2: Inducing the Bitaxonomy | In the following we describe the core algorithm of our approach, which iteratively and mutually populates and refines the edge sets E(Tp) and E(Tc). |
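The mutual-population idea can be illustrated with a minimal fixed-point loop. The data structures and the simple projection rules below are illustrative assumptions, not the paper's actual algorithm: edges are repeatedly transferred from the page side to the category side and back until neither edge set changes.

```python
# Minimal fixed-point sketch: alternate between projecting page-level
# hypernym edges onto categories and propagating category edges back to
# still-uncovered pages, until neither edge set grows.

def fixed_point(page_edges, cat_edges, page_to_cat, cat_to_pages):
    changed = True
    while changed:
        changed = False
        # Category side: a member page's hypernym that maps to a category
        # yields a category-level hypernym edge.
        for cat, pages in cat_to_pages.items():
            for p in pages:
                for hyper in page_edges.get(p, set()):
                    hc = page_to_cat.get(hyper)
                    if hc and hc != cat and hc not in cat_edges.setdefault(cat, set()):
                        cat_edges[cat].add(hc)
                        changed = True
        # Page side: give uncovered pages hypernym candidates drawn from
        # their category's super-categories.
        for cat, supers in cat_edges.items():
            for p in cat_to_pages.get(cat, []):
                if p not in page_edges:
                    for sc in supers:
                        for hp in cat_to_pages.get(sc, []):
                            page_edges.setdefault(p, set()).add(hp)
                            changed = True
    return page_edges, cat_edges
```

Because edges are only ever added and the universe of pages and categories is finite, the loop is guaranteed to reach a fixed point.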
Phase 3: Category taxonomy refinement | Figure 4b shows the performance trend as the algorithm iteratively covers more and more categories. |
Related Work | Our work differs from the others in at least three respects: first, in marked contrast to most other resources, but similarly to WikiNet and WikiTaxonomy, our resource is self-contained and does not depend on other resources such as WordNet; second, we address the taxonomization task on both sides, i.e., pages and categories, by providing an algorithm which mutually and iteratively transfers knowledge from one side of the bitaxonomy to the other; third, we provide a wide-coverage bitaxonomy closer in structure and granularity to a manual WordNet-like taxonomy, in contrast, for example, to DBpedia’s flat entity-focused hierarchy. |
Abstract | We describe computationally cheap feature weighting techniques and a novel nonlinear distribution spreading algorithm that can be used to iteratively and interactively correct mislabeled instances to significantly improve annotation quality at low cost. |
Introduction | The process of selecting and relabeling data points can be conducted with multiple rounds to iteratively improve the data quality. |
Introduction | An active learner uses a small set of labeled data to iteratively select the most informative instances from a large pool of unlabeled data for human annotators to label (Settles, 2010). |
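The active-learning loop described above can be sketched with uncertainty sampling on toy 1-D data. The threshold "classifier", the sigmoid scorer, and the `oracle` callable standing in for the human annotator are all illustrative assumptions:

```python
import math

# Illustrative pool-based active learning with uncertainty sampling.
# train() fits a trivial 1-D threshold classifier; predict_proba() scores
# instances; the instance closest to P = 0.5 is queried each round.

def train(labeled):
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2  # decision threshold

def predict_proba(threshold, x):
    return 1.0 / (1.0 + math.exp(-(x - threshold)))  # P(label = 1)

def active_learn(labeled, pool, oracle, rounds=3):
    for _ in range(rounds):
        t = train(labeled)
        # Query the most informative instance: prediction closest to 0.5.
        x = min(pool, key=lambda x: abs(predict_proba(t, x) - 0.5))
        pool.remove(x)
        labeled.append((x, oracle(x)))  # human annotator labels it
    return labeled

labeled = active_learn([(0.0, 0), (10.0, 1)], [2.0, 4.0, 9.0],
                       oracle=lambda x: int(x > 5))
```

Each round retrains on the growing labeled set, so the queried instances track the moving decision boundary rather than being chosen once up front.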
Introduction | In this work, we borrow the idea of active learning to interactively and iteratively correct labeling errors. |
Related Work | (2012) propose a solution called Active Label Correction (ALC) which iteratively presents the experts with small sets of suspected mislabeled instances at each round. |
Intervention Prediction Models | The model uses the pseudocode shown in Algorithm 1 to iteratively refine the weight vectors. |
Intervention Prediction Models | Exploiting the semi-convexity property (Felzenszwalb et al., 2010), the algorithm works in two steps, each executed iteratively. |
Intervention Prediction Models | The algorithm then performs two steps iteratively: first it determines the structural assignments for the negative examples, and then it optimizes the fixed objective function using a cutting plane algorithm. |
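The alternating scheme can be illustrated on a toy problem. In the sketch below, step 1 fixes the weights and picks each example's best structural (latent) assignment, and step 2 fixes those assignments and re-optimizes the weights; the 1-D hinge-loss objective and the subgradient update are illustrative stand-ins for the cutting-plane solver:

```python
# Toy alternating optimization for a model with latent structural choices.
# Each example is (candidate_features, label); the latent variable is
# which candidate feature value represents the example.

def best_assignment(w, candidates, y):
    # Step 1: pick the candidate that maximizes the (signed) score.
    return max(candidates, key=lambda f: y * w * f)

def optimize_w(w, data, assign, lr=0.1, lam=0.01, steps=50):
    # Step 2: subgradient descent on regularized hinge loss with the
    # structural assignments held fixed.
    for _ in range(steps):
        grad = lam * w
        for (candidates, y), f in zip(data, assign):
            if y * w * f < 1.0:      # margin violated
                grad -= y * f
        w -= lr * grad
    return w

def alternate(data, w=0.1, rounds=5):
    for _ in range(rounds):
        assign = [best_assignment(w, c, y) for c, y in data]
        w = optimize_w(w, data, assign)
    return w, assign

data = [([1.0, 3.0], +1), ([-2.0, -0.5], -1)]
w, assign = alternate(data)
```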
Model | Attributes are initialized using only text features, maximizing Ψtext(e, Xi), and ignoring network information. |
Model | Then for each user we iteratively reestimate their profile given both their text features and network features (computed based on the current predictions made for their friends) which provide additional evidence. |
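The re-estimation loop can be sketched as follows. The function name, the linear mixing weight `alpha`, and the use of a simple neighbor mean as the network evidence are illustrative assumptions, not the paper's model:

```python
# Sketch of iterative profile re-estimation: each user's attribute score
# starts from text evidence alone, then repeatedly folds in the mean of
# the current predictions for their friends as network evidence.

def infer_attributes(text_score, friends, alpha=0.5, iters=10):
    pred = dict(text_score)                          # init: text features only
    for _ in range(iters):
        new = {}
        for u, s in text_score.items():
            nbrs = friends.get(u, [])
            net = sum(pred[v] for v in nbrs) / len(nbrs) if nbrs else s
            new[u] = alpha * s + (1 - alpha) * net   # mix text + network
        pred = new
    return pred

pred = infer_attributes({"a": 1.0, "b": 0.0, "c": 1.0},
                        {"a": ["b"], "b": ["a", "c"], "c": ["b"]})
```

In this toy run, user b's text evidence says 0 but both friends score high, so the network evidence pulls b's prediction up across iterations toward a fixed point.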
Model | Then we iteratively update .2," given |
Problem Description | One solution to this problem is to do the alignment greedily pairwise, starting from the most recent medical event sequence, finding the best path, iteratively moving on to the next sequence, and proceeding until the oldest medical event sequence. |
Problem Description | Thus, for MSA using dynamic programming, we use a heuristic method where we combine pairwise alignments iteratively starting with the latest narrative and progressing towards the oldest narrative. |
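The heuristic can be sketched as a progressive merge of pairwise alignments. The token-sequence representation and the use of `difflib.SequenceMatcher` as the pairwise aligner are illustrative assumptions standing in for the dynamic-programming aligner:

```python
import difflib

def merge_pair(a, b):
    """Greedy pairwise alignment: keep matched tokens once, interleave the rest."""
    merged, ia, ib = [], 0, 0
    for blk in difflib.SequenceMatcher(a=a, b=b).get_matching_blocks():
        merged.extend(a[ia:blk.a])                 # tokens only in a
        merged.extend(b[ib:blk.b])                 # tokens only in b
        merged.extend(a[blk.a:blk.a + blk.size])   # shared tokens, kept once
        ia, ib = blk.a + blk.size, blk.b + blk.size
    return merged

def progressive_align(sequences):
    # Combine pairwise alignments iteratively, newest narrative first.
    profile = sequences[0]
    for seq in sequences[1:]:
        profile = merge_pair(profile, seq)
    return profile

events = [["fever", "cough", "xray"],   # latest narrative
          ["fever", "xray", "ct"],
          ["cough", "ct"]]              # oldest narrative
print(progressive_align(events))  # ['fever', 'cough', 'xray', 'ct']
```

Each merge folds one more sequence into the running profile, which is what makes the heuristic linear in the number of sequences rather than exponential like full multi-sequence dynamic programming.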
Problem Description | Aligning pairwise iteratively gives us an overall average accuracy of 68.2%, similar to dynamic programming. |
Experiments | ReNew starts with LIWC and a labeled dataset and generates ten lexicons and sentiment classification models by iteratively learning from 4,017 unlabeled reviews without any human guidance. |
Related Work | Hu and Liu (2004) manually collect a small set of sentiment words and expand it iteratively by searching synonyms and antonyms in WordNet (Miller, 1995). |
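The expansion loop can be sketched as follows. The tiny synonym and antonym maps below are hand-made stand-ins for WordNet lookups so the sketch stays self-contained; synonyms keep a word's orientation, antonyms flip it:

```python
# Sketch of iterative sentiment-lexicon expansion in the style of
# Hu and Liu (2004): grow positive and negative sets from small seeds
# until no lookup adds a new word.

def expand(pos_seed, neg_seed, syn, ant):
    pos, neg = set(pos_seed), set(neg_seed)
    changed = True
    while changed:                       # iterate until a fixed point
        changed = False
        for word in list(pos):
            for s in syn.get(word, []):  # synonym keeps polarity
                if s not in pos:
                    pos.add(s); changed = True
            for a in ant.get(word, []):  # antonym flips polarity
                if a not in neg:
                    neg.add(a); changed = True
        for word in list(neg):
            for s in syn.get(word, []):
                if s not in neg:
                    neg.add(s); changed = True
            for a in ant.get(word, []):
                if a not in pos:
                    pos.add(a); changed = True
    return pos, neg

SYN = {"good": ["great"], "great": ["superb"], "bad": ["poor"]}
ANT = {"good": ["bad"]}
pos, neg = expand({"good"}, set(), SYN, ANT)
```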
Related Work | Esuli and Sebas-tiani (2006) use a set of classifiers in a semi-supervised fashion to iteratively expand a manu- |