Index of papers in Proc. ACL 2011 that mention
  • learning algorithm
Hoffmann, Raphael and Zhang, Congle and Ling, Xiao and Zettlemoyer, Luke and Weld, Daniel S.
Abstract
Recently, researchers have developed multi-instance learning algorithms to combat the noisy training data that can come from heuristic labeling, but their models assume relations are disjoint; for example, they cannot extract the pair Founded(Jobs, Apple) and CEO-of(Jobs, Apple).
Conclusion
Since the process of matching database tuples to sentences is inherently heuristic, researchers have proposed multi-instance learning algorithms as a means for coping with the resulting noisy data.
Learning
We now present a multi-instance learning algorithm for our weak-supervision model that treats the sentence-level extraction random variables Z_i as latent, and uses facts from a database (e.g., Freebase) as supervision for the aggregate-level variables Y^r.
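The excerpt above describes bag-level supervision with latent sentence-level variables. A minimal sketch of that multi-instance idea, assuming a simple "at-least-one" heuristic (credit the highest-scoring sentence in each bag) and invented toy features; this is an illustration of the general technique, not the paper's MULTIR algorithm:

```python
# Toy multi-instance setup: each "bag" holds feature vectors for sentences
# that share an entity pair; the bag label comes from the database (weak
# supervision). Sentence labels Z_i are latent: we only assume a positive
# bag contains at least one positive sentence. Features are invented.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_mil(bags, labels, epochs=20, lr=0.5):
    """Perceptron-style bag-level training: update on the highest-scoring
    sentence in each bag (a common 'at-least-one' heuristic)."""
    dim = len(bags[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for bag, y in zip(bags, labels):
            # pick the latent sentence the current model trusts most
            best = max(bag, key=lambda x: score(w, x))
            pred = 1 if score(w, best) > 0 else 0
            if pred != y:
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, best)]
    return w

# Two positive bags (each containing one 'relational' sentence) and one
# negative bag whose sentences never express the relation.
bags = [
    [[1.0, 1.0], [0.0, 1.0]],   # positive
    [[1.0, 0.5], [0.0, 1.0]],   # positive
    [[0.0, 1.0], [0.0, 0.8]],   # negative
]
labels = [1, 1, 0]
w = train_mil(bags, labels)
```

After training, the learned weights separate relational from non-relational sentence features even though only bag-level labels were given.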
Modeling Overlapping Relations
Figure 2: The MULTIR Learning Algorithm
learning algorithm is mentioned in 4 sentences in this paper.
Titov, Ivan
Introduction
However, most learning algorithms operate under the assumption that the learning data originates from the same distribution as the test data, though in practice this assumption is often violated.
Introduction
We explain how the introduced regularizer can be integrated into the stochastic gradient descent learning algorithm for our model.
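The excerpt refers to folding a regularizer into stochastic gradient descent. A minimal sketch of that SGD skeleton, assuming a plain L2 regularizer on logistic regression with invented toy data (the paper's regularizer is model-specific; only the update structure is shown here):

```python
import math
import random

def sgd_logreg(data, lr=0.1, lam=0.01, epochs=200, seed=0):
    """SGD for logistic regression with an L2 regularizer lam * ||w||^2 / 2;
    the regularizer's gradient (lam * w) is simply added to each step."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            # gradient of the log-loss term plus the regularizer term
            w = [wi - lr * ((p - y) * xi + lam * wi) for wi, xi in zip(w, x)]
    return w

# Toy separable data: label 1 iff the first feature dominates the second.
data = [([2.0, 1.0], 1), ([1.5, 0.5], 1), ([0.5, 2.0], 0), ([1.0, 2.5], 0)]
w = sgd_logreg(list(data))
```

Any differentiable regularizer can be swapped in by replacing the `lam * wi` term with its own gradient.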
Learning and Inference
In this section we describe an approximate learning algorithm based on the mean-field approximation.
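To illustrate the mean-field approximation the excerpt mentions, here is a hedged sketch on a tiny binary pairwise model (not the paper's model): the joint posterior is replaced by independent Bernoulli factors q_i, and the coordinate updates q_i <- sigmoid(b_i + sum_j J_ij q_j) are iterated to a fixed point. Couplings and biases are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean_field(J, b, iters=100):
    """Coordinate-ascent mean-field updates for a binary pairwise model
    with couplings J and unary biases b; returns approximate marginals."""
    n = len(b)
    q = [0.5] * n          # uninformative initial marginals
    for _ in range(iters):
        for i in range(n):
            s = b[i] + sum(J[i][j] * q[j] for j in range(n) if j != i)
            q[i] = sigmoid(s)
    return q

# Two variables with an attractive coupling and opposite unary biases:
# the coupling pulls the second variable's marginal above 0.5.
J = [[0.0, 1.0], [1.0, 0.0]]
b = [1.0, -0.5]
q = mean_field(J, b)
```

Each update is exact given the other factors, so the iteration monotonically improves the variational objective for this model family.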
Learning and Inference
Though we believe that our approach is independent of the specific learning algorithm, we provide the description for completeness.
learning algorithm is mentioned in 4 sentences in this paper.
Bansal, Mohit and Klein, Dan
Analysis
We next investigate the features that were given high weight by our learning algorithm (in the constituent parsing case).
Web-count Features
A learning algorithm can then weight features so that they compare appropriately.
Web-count Features
As discussed in Section 5, the top features learned by our learning algorithm duplicate the handcrafted configurations used in previous work (Nakov and Hearst, 2005b) but also add numerous others, and, of course, apply to many more attachment types.
learning algorithm is mentioned in 3 sentences in this paper.
Berant, Jonathan and Dagan, Ido and Goldberger, Jacob
Experimental Evaluation
(b) DIRT (Lin and Pantel, 2001): a widely used rule learning algorithm.
Experimental Evaluation
(c) BInc (Szpektor and Dagan, 2008): a directional rule learning algorithm.
Learning Typed Entailment Graphs
Our learning algorithm is composed of two steps: (1) Given a set of typed predicates and their instances extracted from a corpus, we train a (local) entailment classifier that estimates for every pair of predicates whether one entails the other.
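The excerpt describes step (1), a local entailment classifier over predicate pairs. A much-simplified sketch of how such a pipeline might look end to end, assuming the local scores are given and using a greedy transitive closure as an illustrative stand-in for a global graph-construction step (entailment is transitive); all predicates and scores below are invented:

```python
def transitive_closure(edges, nodes):
    """Repeatedly add (a, c) whenever (a, b) and (b, c) are present,
    since entailment is a transitive relation."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for a in nodes:
            for b in nodes:
                for c in nodes:
                    if (a, b) in closed and (b, c) in closed \
                            and a != c and (a, c) not in closed:
                        closed.add((a, c))
                        changed = True
    return closed

# Step 1 output (illustrative): local entailment probabilities for
# ordered predicate pairs, as a classifier might produce.
local_scores = {
    ("X acquire Y", "X buy Y"): 0.9,
    ("X buy Y", "X own Y"): 0.8,
    ("X own Y", "X buy Y"): 0.2,
}
nodes = {"X acquire Y", "X buy Y", "X own Y"}

# Step 2 (stand-in): keep confident edges, then enforce transitivity.
edges = {pair for pair, p in local_scores.items() if p >= 0.5}
graph = transitive_closure(edges, nodes)
```

The closure adds the implied edge from "X acquire Y" to "X own Y" while the low-scoring reverse direction is dropped, giving a directed entailment graph.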
learning algorithm is mentioned in 3 sentences in this paper.