Index of papers in Proc. ACL that mention
  • iteratively
Lang, Joel and Lapata, Mirella
Abstract
We present an algorithm that iteratively splits and merges clusters representing semantic roles, thereby leading from an initial clustering to a final clustering of better quality.
Conclusions
We proposed a split-merge algorithm that iteratively manipulates clusters representing semantic roles whilst trading off cluster purity with collocation.
Conclusions
itive and requires no manual effort for training.
Related Work
Swier and Stevenson (2004) induce role labels with a bootstrapping scheme where the set of labeled instances is iteratively expanded using a classifier trained on previously labeled instances.
Related Work
We formulate the induction of semantic roles as a clustering problem and propose a split-merge algorithm which iteratively manipulates clusters representing semantic roles.
Split-Merge Role Induction
Our algorithm works by iteratively splitting and merging clusters of argument instances in order to arrive at increasingly accurate representations of semantic roles.
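A minimal sketch of a split-merge loop of this kind is given below; the naive halving split, the pairwise merge, and the 0.5 thresholds are illustrative placeholders, not the purity/collocation criteria used in the paper.

    # Illustrative split-merge clustering loop (placeholder scoring, not the paper's criteria).
    def split_merge(clusters, purity, similarity, max_rounds=10):
        """clusters: list of lists of argument instances; purity/similarity: user-supplied scorers."""
        for _ in range(max_rounds):
            changed = False
            # Split phase: break up low-purity clusters.
            new_clusters = []
            for c in clusters:
                if len(c) > 1 and purity(c) < 0.5:            # assumed threshold
                    mid = len(c) // 2
                    new_clusters.extend([c[:mid], c[mid:]])   # naive halving split
                    changed = True
                else:
                    new_clusters.append(c)
            clusters = new_clusters
            # Merge phase: join the most similar pair, if similar enough.
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    s = similarity(clusters[i], clusters[j])
                    if best is None or s > best[0]:
                        best = (s, i, j)
            if best is not None and best[0] > 0.5:            # assumed threshold
                _, i, j = best
                merged = clusters[i] + clusters[j]
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append(merged)
                changed = True
            if not changed:
                break
        return clusters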
Split-Merge Role Induction
Then β is iteratively decreased again until it becomes zero, after which γ is decreased by another 0.05.
iteratively is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Li, Fangtao and Pan, Sinno Jialin and Jin, Ou and Yang, Qiang and Zhu, Xiaoyan
Introduction
After new topic words are extracted in the movie domain, we can apply the same syntactic pattern or other syntactic patterns to extract new sentiment and topic words iteratively.
Introduction
Bootstrapping is the process of improving the performance of a weak classifier by iteratively adding training data and retraining the classifier.
Introduction
More specifically, bootstrapping starts with a small set of labeled “seeds”, and iteratively adds unlabeled
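The seed-and-expand loop sketched in these two sentences is a standard bootstrapping pattern; a rough Python sketch, assuming numpy arrays, a scikit-learn-style classifier, and a confidence threshold that are not taken from the paper:

    # Rough bootstrapping sketch (assumed arrays, classifier, and threshold).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def bootstrap(X_seed, y_seed, X_unlabeled, rounds=5, threshold=0.9):
        X_train, y_train, pool = X_seed, y_seed, X_unlabeled
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X_train, y_train)
            if len(pool) == 0:
                break
            preds = clf.predict(pool)
            conf = clf.predict_proba(pool).max(axis=1)
            keep = conf >= threshold                      # confidently labeled instances
            if not keep.any():
                break
            X_train = np.vstack([X_train, pool[keep]])    # add them to the training data
            y_train = np.concatenate([y_train, preds[keep]])
            pool = pool[~keep]
        clf.fit(X_train, y_train)                         # retrain on the expanded set
        return clf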
iteratively is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Martineau, Justin and Chen, Lu and Cheng, Doreen and Sheth, Amit
Abstract
We describe computationally cheap feature weighting techniques and a novel nonlinear distribution spreading algorithm that can be used to iteratively and interactively correct mislabeled instances to significantly improve annotation quality at low cost.
Introduction
The process of selecting and relabeling data points can be conducted with multiple rounds to iteratively improve the data quality.
Introduction
An active learner uses a small set of labeled data to iteratively select the most informative instances from a large pool of unlabeled data for human annotators to label (Settles, 2010).
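Pool-based uncertainty sampling of the kind the excerpt alludes to (Settles, 2010) can be sketched as follows; the least-confident selection rule, the scikit-learn-style classifier, and the oracle callback are assumptions for illustration.

    # Uncertainty-sampling sketch (assumed classifier interface and labeling oracle).
    import numpy as np

    def uncertainty_sampling(clf, X_labeled, y_labeled, X_pool, oracle, budget=100):
        X_l, y_l = X_labeled, y_labeled
        pool = list(range(len(X_pool)))                  # indices of unlabeled instances
        for _ in range(budget):
            if not pool:
                break
            clf.fit(X_l, y_l)
            proba = clf.predict_proba(X_pool[pool])
            pick = int(np.argmin(proba.max(axis=1)))     # least-confident instance
            chosen = pool.pop(pick)
            X_l = np.vstack([X_l, X_pool[chosen:chosen + 1]])
            y_l = np.append(y_l, oracle(chosen))         # human annotator supplies the label
        return clf, X_l, y_l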
Introduction
In this work, we borrow the idea of active learning to interactively and iteratively correct labeling errors.
Related Work
(2012) propose a solution called Active Label Correction (ALC) which iteratively presents the experts with small sets of suspected mislabeled instances at each round.
iteratively is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Flati, Tiziano and Vannella, Daniele and Pasini, Tommaso and Navigli, Roberto
Conclusions
In this paper we have presented WiBi, an automatic 3-phase approach to the construction of a bitaxonomy for the English Wikipedia, i.e., a full-fledged, integrated page and category taxonomy: first, using a set of high-precision linkers, the page taxonomy is populated; next, a fixed point algorithm populates the category taxonomy while enriching the page taxonomy iteratively; finally, the category taxonomy undergoes structural refinements.
Phase 1: Inducing the Page Taxonomy
Finally, to capture multiple hypernyms, we iteratively follow the conj_and and conj_or relations starting from the initially extracted hypernym.
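The conjunct-following step can be pictured as a small graph walk: starting from the first extracted hypernym, keep following conj_and / conj_or edges and collect every noun reached. The dictionary-based dependency representation below is an assumed format, not the paper's.

    # Sketch: collect coordinated hypernyms by following conj_and / conj_or edges.
    # `deps` maps a token to (relation, dependent) pairs -- an assumed representation.
    def collect_hypernyms(first_hypernym, deps):
        hypernyms, frontier, seen = [first_hypernym], [first_hypernym], {first_hypernym}
        while frontier:
            token = frontier.pop()
            for rel, dep in deps.get(token, []):
                if rel in ("conj_and", "conj_or") and dep not in seen:
                    seen.add(dep)
                    hypernyms.append(dep)
                    frontier.append(dep)          # keep following chains of conjuncts
        return hypernyms

    # e.g. "... is a city and municipality ..." -> ['city', 'municipality']
    print(collect_hypernyms("city", {"city": [("conj_and", "municipality")]}))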
Phase 2: Inducing the Bitaxonomy
In the following we describe the core algorithm of our approach, which iteratively and mutually populates and refines the edge sets E(Tp) and E (To).
Phase 3: Category taxonomy refinement
Figure 4b shows the performance trend as the algorithm iteratively covers more and more categories.
Related Work
Our work differs from the others in at least three respects: first, in marked contrast to most other resources, but similarly to WikiNet and WikiTaxonomy, our resource is self-contained and does not depend on other resources such as WordNet; second, we address the taxonomization task on both sides, i.e., pages and categories, by providing an algorithm which mutually and iteratively transfers knowledge from one side of the bitaxonomy to the other; third, we provide a wide coverage bitaxonomy closer in structure and granularity to a manual WordNet-like taxonomy, in contrast, for example, to DBpedia’s flat entity-focused hierarchy.
iteratively is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Reichart, Roi and Korhonen, Anna
The Unified Framework
Cluster set construction In its while loop, the algorithm iteratively generates fixed-size cluster sets such that each data point belongs to exactly one cluster in one set.
The Unified Framework
Then, it gradually extends the clusters by iteratively mapping the samples, in decreasing order of probability, to the existing clusters (the mlMapping function).
The Unified Framework
By iteratively extending the clusters with high probability subsets, we thus expect each cluster set to consist of clusters that demonstrate these properties.
iteratively is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Eidelman, Vladimir and Marton, Yuval and Resnik, Philip
The Relative Margin Machine in SMT
17: for n ← 1 ... MaxIter do
The Relative Margin Machine in SMT
27: for n ← 1 ... MaxIter do
The Relative Margin Machine in SMT
36: for n ← 1 ... MaxIter do
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Zhang, Zhe and Singh, Munindar P.
Experiments
ReNew starts with LIWC and a labeled dataset and generates ten lexicons and sentiment classification models by iteratively learning from 4,017 unlabeled reviews without any human guidance.
Related Work
Hu and Liu (2004) manually collect a small set of sentiment words and expand it iteratively by searching synonyms and antonyms in WordNet (Miller, 1995).
Related Work
Esuli and Sebastiani (2006) use a set of classifiers in a semi-supervised fashion to iteratively expand a manu-
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Raghavan, Preethi and Fosler-Lussier, Eric and Elhadad, Noémie and Lai, Albert M.
Problem Description
One solution to this problem is to do the alignment greedily pairwise, starting from the most recent medical event sequences, finding the best path, and iteratively moving on to the next sequence, proceeding until the oldest medical event sequence.
Problem Description
Thus, for MSA using dynamic programming, we use a heuristic method where we combine pairwise alignments iteratively starting with the latest narrative and progressing towards the oldest narrative.
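The heuristic described here, combining pairwise alignments from the most recent narrative back to the oldest, is a form of progressive alignment; a minimal sketch with the pairwise aligner left abstract (its dynamic-programming details are not shown in the excerpt):

    # Progressive-alignment sketch: fold sequences pairwise from newest to oldest.
    # `align_pair` is any pairwise aligner that returns a merged sequence (assumed interface).
    def progressive_align(sequences_newest_first, align_pair):
        merged = sequences_newest_first[0]
        for seq in sequences_newest_first[1:]:
            merged = align_pair(merged, seq)     # combine with the next-older sequence
        return merged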
Problem Description
Aligning pairwise iteratively gives us an overall average accuracy of 68.2%, similar to dynamic programming.
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Li, Jiwei and Ritter, Alan and Hovy, Eduard
Model
Attributes are initialized using only text features, maximizing Ψ_text(e, x_i), and ignoring network information.
Model
Then for each user we iteratively reestimate their profile given both their text features and network features (computed based on the current predictions made for their friends) which provide additional evidence.
Model
Then we iteratively update .2," given
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chaturvedi, Snigdha and Goldwasser, Dan and Daumé III, Hal
Intervention Prediction Models
The model uses the pseudocode shown in Algorithm 1 to iteratively refine the weight vectors.
Intervention Prediction Models
Exploiting the semi-convexity property (Felzenszwalb et al., 2010), the algorithm works in two steps, each executed iteratively.
Intervention Prediction Models
The algorithm then performs two steps iteratively: first it determines the structural assignments for the negative examples, and then optimizes the fixed objective function using a cutting plane algorithm.
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Schoenemann, Thomas
See e.g. the author’s course notes (in German), currently
for some (differentiable) function one iteratively starts at the current point {p_k(j)}, computes the gradient ∇E_i({p_k(j)}) and goes to the point
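Read as a standard gradient-descent iteration, the step the sentence describes goes from the current point to (with an assumed step size γ_t; the excerpt does not show the paper's own step-size choice):

    p^{(t+1)} \;=\; p^{(t)} \;-\; \gamma_t \, \nabla E_i\bigl(p^{(t)}\bigr)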
Training the New Variants
For computing alignments, we use the common procedure of hillclimbing where we start with an alignment, then iteratively compute the probabilities of all alignments differing by a move or a swap (Brown et al., 1993) and move to the best of these if it beats the current alignment.
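The move/swap hillclimbing loop could be sketched as follows; the list-based alignment representation (target position mapped to source position, 0 for NULL) and the probability callback are assumptions, not the authors' implementation.

    # Hillclimbing sketch over word alignments (move / swap neighborhood).
    # alignment[j] = source position aligned to target position j (0 = NULL);
    # `prob` scores a full alignment. Both are assumed interfaces.
    def hillclimb(alignment, n_source, prob):
        current, current_p = list(alignment), prob(alignment)
        while True:
            best, best_p = None, current_p
            # Moves: re-align one target word to a different source word.
            for j in range(len(current)):
                for i in range(n_source + 1):
                    if i != current[j]:
                        cand = current[:j] + [i] + current[j + 1:]
                        p = prob(cand)
                        if p > best_p:
                            best, best_p = cand, p
            # Swaps: exchange the source positions of two target words.
            for j in range(len(current)):
                for k in range(j + 1, len(current)):
                    cand = list(current)
                    cand[j], cand[k] = cand[k], cand[j]
                    p = prob(cand)
                    if p > best_p:
                        best, best_p = cand, p
            if best is None:          # no neighbor beats the current alignment
                return current, current_p
            current, current_p = best, best_p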
Training the New Variants
Proper EM The expectation maximization (EM) framework (Dempster et al., 1977; Neal and Hinton, 1998) is a class of template procedures (rather than a proper algorithm) that iteratively requires solving the task
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Liu, Kang and Xu, Liheng and Zhao, Jun
Opinion Target Extraction Methodology
The alignment is updated iteratively until no additional inconsistent links can be removed.
Opinion Target Extraction Methodology
To estimate the confidence of each opinion target candidate, we employ a random walk algorithm on our graph, which iteratively computes the weighted average of opinion target confidences from neighboring vertices.
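The random-walk step described, repeatedly replacing each candidate's confidence with a weighted average of its neighbors' confidences, could look roughly like this; the damping factor, tolerance, and graph encoding are assumptions.

    # Sketch of iterative confidence propagation on a graph (assumed damping and tolerance).
    def propagate_confidence(neighbors, weights, prior, alpha=0.85, tol=1e-6, max_iter=100):
        """neighbors[v] -> adjacent vertices; weights[(u, v)] -> edge weight; prior[v] -> initial confidence."""
        conf = dict(prior)
        for _ in range(max_iter):
            new_conf, delta = {}, 0.0
            for v in conf:
                nbrs = neighbors.get(v, [])
                total = sum(weights[(u, v)] for u in nbrs)
                avg = (sum(weights[(u, v)] * conf[u] for u in nbrs) / total) if total else 0.0
                new_conf[v] = alpha * avg + (1 - alpha) * prior[v]   # weighted-average update
                delta = max(delta, abs(new_conf[v] - conf[v]))
            conf = new_conf
            if delta < tol:          # stop once the confidences have converged
                break
        return conf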
Related Work
Moreover, (Qiu et al., 2011) proposed a Double Propagation method to expand sentiment words and opinion targets iteratively, where they also exploited syntactic relations between words.
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Dasgupta, Sajib and Ng, Vincent
Evaluation
Specifically, we begin by training an inductive SVM on one labeled example from each class, iteratively labeling the most uncertain unlabeled point on each side of the hyperplane and retraining the SVM until 100 points are labeled.
Our Approach
In self-training, we iteratively train a classifier on the data labeled so far, use it to classify the unlabeled instances, and augment the labeled data with the most confidently labeled instances.
Our Approach
In our algorithm, we start with an initial clustering of all of the data points, and then iteratively remove the α most ambiguous points from the dataset and cluster the remaining points.
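One way to picture this remove-and-recluster loop is sketched below, using 2-way k-means and a centroid-distance margin as the ambiguity score; both choices are illustrative assumptions, not necessarily the measures used in the paper.

    # Sketch: iteratively drop the alpha most ambiguous points and recluster the rest
    # (assumed: 2-way k-means and a margin-based ambiguity score).
    import numpy as np
    from sklearn.cluster import KMeans

    def remove_ambiguous(X, alpha=10, rounds=5):
        idx = np.arange(len(X))
        km = None
        for _ in range(rounds):
            if len(idx) <= alpha + 2:
                break
            km = KMeans(n_clusters=2, n_init=10).fit(X[idx])
            dist = km.transform(X[idx])                 # distance to each centroid
            margin = np.abs(dist[:, 0] - dist[:, 1])    # small margin = ambiguous point
            idx = idx[np.argsort(margin)[alpha:]]       # drop the alpha most ambiguous
        return idx, km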
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
He, Xiaodong and Deng, Li
Abstract
For training, we derive growth transformations for phrase and lexicon translation probabilities to iteratively improve the objective.
Abstract
In this section, we derived GT formulas for iteratively updating the parameters so as to optimize objective (9).
Abstract
Baum-Eagon inequality (Baum and Eagon, 1967) gives the GT formula to iteratively maximize positive-coefficient polynomials of random
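For a polynomial P with nonnegative coefficients whose variables are constrained to probability simplices, the Baum-Eagon growth transformation referenced here takes the general form below (the generic inequality, not the paper's specific phrase- and lexicon-model formulas):

    \theta_i \;\leftarrow\;
      \frac{\theta_i \, \partial P / \partial \theta_i}
           {\sum_{j} \theta_j \, \partial P / \partial \theta_j},
    \qquad P(\theta^{\text{new}}) \ \ge\ P(\theta^{\text{old}})

Here the sum over j runs over the parameters that share a normalization constraint with θ_i.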
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Gardent, Claire and Narayan, Shashi
Mining Dependency Trees
Next, the algorithm iteratively enumerates the subtrees occurring in the input data in increasing size order, associating each subtree t with two occurrence lists, namely the list of input trees in which t occurs and for which generation was successful (PASS(t)), and the list of input trees in which t occurs and for which generation failed (FAIL(t)).
Mining Trees
The join and extension operations used to iteratively enumerate subtrees are depicted in Figure 2 and can be defined as follows.
Related Work
The approach was later extended and refined in (Sagot and de la Clergerie, 2006) and (de Kok et al., 2009) whereby (Sagot and de la Clergerie, 2006) defines a suspicion rate for n-grams which takes into account the number of occurrences of a given word form and iteratively defines the suspicion rate of each word form in a sentence based on the suspicion rate of this word form in the corpus; (de Kok et al., 2009) combined the iterative error mining proposed by (Sagot and de la Clergerie, 2006) with expansion of forms to n-grams of words and POS tags of arbitrary length.
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Ravi, Sujith and Knight, Kevin
Machine Translation as a Decipherment Task
For Iterative EM, we start with a channel of size 101x101 (K = 100) and in every pass we iteratively increase the vocabulary sizes by 50, repeating the training procedure until the channel size becomes 351x351.
Word Substitution Decipherment
Instead of instantiating the entire channel model (with all its parameters), we iteratively train the model in small steps.
Word Substitution Decipherment
Goto Step 2 and repeat the procedure, extending the channel size iteratively in each stage.
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
DeNero, John and Macherey, Klaus
Introduction
In this approach, we iteratively apply the same efficient sequence algorithms for the underlying directional models, and thereby optimize a dual bound on the model objective.
Model Inference
In particular, we can iteratively apply exact inference to the subgraph problems, adjusting their potentials to reflect the constraints of the full problem.
Model Inference
We can iteratively search for such a u via subgradient descent.
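Searching for such u by subgradient descent typically alternates exact decoding of each subproblem with an additive update on u; the sketch below is a generic two-model agreement loop with an assumed 1/t step-size schedule, not the authors' implementation.

    # Generic dual-decomposition sketch: two sub-models must agree on shared variables.
    # decode_a(u) and decode_b(u) return exact solutions of the adjusted subproblems as
    # dicts mapping shared variables to 0/1; the step-size schedule is an assumption.
    def subgradient_agree(decode_a, decode_b, variables, iterations=50):
        u = {v: 0.0 for v in variables}
        for t in range(1, iterations + 1):
            ya = decode_a(u)                # subproblem A sees potentials shifted by +u
            yb = decode_b(u)                # subproblem B sees potentials shifted by -u
            if all(ya[v] == yb[v] for v in variables):
                return ya, u                # agreement reached: the dual bound is tight
            step = 1.0 / t
            for v in variables:
                u[v] -= step * (ya[v] - yb[v])   # subgradient step on the dual variables
        return ya, u                        # may not have converged within the budget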
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Lin, Shih-Hsiang and Chen, Berlin
A risk minimization framework for extractive summarization
sentences of a given document can be iteratively chosen (i.e., one at each iteration) from the document until the aggregated summary reaches a predefined target summarization ratio.
Proposed Methods
Once the sentence generative model P(D | S_j), the sentence prior model P(S_j) and the loss function L(S_i, S_j) have been properly estimated, the summary sentences can be selected iteratively by (8) according to a predefined target summarization ratio.
Proposed Methods
To alleviate this problem, the concept of maximum marginal relevance (MMR) (Carbonell and Goldstein, 1998), which performs sentence selection iteratively by striking the balance between topic relevance and coverage, can be incorporated into the loss function:
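The MMR-style selection the excerpt refers to picks, at each iteration, the sentence that best balances relevance against redundancy with what has already been selected; a minimal sketch, with the relevance and similarity functions and the λ weight treated as assumptions:

    # MMR-style iterative sentence selection (assumed relevance/similarity functions and lambda).
    def mmr_select(sentences, relevance, similarity, k, lam=0.7):
        selected, remaining = [], list(sentences)
        while remaining and len(selected) < k:
            def mmr_score(s):
                redundancy = max((similarity(s, t) for t in selected), default=0.0)
                return lam * relevance(s) - (1 - lam) * redundancy
            best = max(remaining, key=mmr_score)
            selected.append(best)          # pick the sentence balancing relevance and novelty
            remaining.remove(best)
        return selected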
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Abstract
First we propose to employ an iteratively trained target grammar parser to perform grammar formalism conversion, eliminating predefined heuristic rules as required in previous methods.
Introduction
The procedure of tree conversion and parser retraining will be run iteratively until a stopping condition is satisfied.
Our Two-Step Solution
Previous DS to PS conversion methods built a converted tree by iteratively attaching nodes and edges to the tree with the help of conversion rules and heuristic rules, based on current head-dependent pair from a source dependency tree and the structure of the built tree (Collins et al., 1999; Covington, 1994; Xia and Palmer, 2001; Xia et al., 2008).
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Erk, Katrin and McCarthy, Diana and Gaylord, Nicholas
Related Work
Reported inter-annotator agreement (ITA) for fine-grained word sense assignment tasks has ranged between 69% (Kilgarriff and Rosenzweig, 2000) for a lexical sample using the HECTOR dictionary and 78.6% using WordNet (Landes et al., 1998) in all-words annotation.
Related Work
The use of more coarse-grained senses alleviates the problem: In OntoNotes (Hovy et al., 2006), an ITA of 90% is used as the criterion for the construction of coarse-grained sense distinctions.
Related Work
However, intriguingly, for some high-frequency lemmas such as leave, this ITA threshold is not reached even after multiple re-partitionings of the semantic space (Chen and Palmer, 2009).
iteratively is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: