Index of papers in Proc. ACL 2012 that mention
  • generative model
Takamatsu, Shingo and Sato, Issei and Nakagawa, Hiroshi
Abstract
We present a novel generative model that directly models the heuristic labeling process of distant supervision.
Conclusion
Our generative model directly models the labeling process of DS and predicts patterns that are wrongly labeled with a relation.
Experiments
Experiment 1 aimed to evaluate the performance of our generative model itself, which predicts whether a pattern expresses a relation, given a labeled corpus created with the DS assumption.
Experiments
In our method, we trained a classifier with a labeled corpus cleaned by Algorithm 1 using the negative pattern list predicted by the generative model.
Experiments
While our generative model does not use unlabeled examples as negative ones in detecting wrong labels, classifier-based approaches including MultiR do, suffering from false negatives.
Generative Model
We now describe our generative model, which predicts whether a pattern expresses relation r or not via hidden variables.
Introduction
• To make the pattern prediction, we propose a generative model that directly models the process of automatic labeling in DS.
Introduction
• Our variational inference for our generative model lets us automatically calibrate parameters for each relation, which are sensitive to the performance (see Section 6).
Related Work
In our approach, parameters are calibrated for each relation by maximizing the likelihood of our generative model.
Wrong Label Reduction
In the first step, we introduce the novel generative model that directly models DS’s labeling process and make the prediction (see Section 5).
generative model is mentioned in 10 sentences in this paper.
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
Conclusion
Experimental results show our approach discovers precise relation clusters and outperforms a generative model approach and a clustering method which does not address sense disambiguation.
Evaluations
The generative model approach with 300 topics achieves similar precision to the hierarchical clustering approach.
Evaluations
With more topics, the precision increases; however, the recall of the generative model is much lower than that of the other approaches.
Evaluations
The generative model approach produces more coherent clusters when the number of relation topics increases.
Experiments
We compare our approach against several baseline systems, including a generative model approach and variations of our own approach.
Experiments
Rel-LDA: Generative models have been successfully applied to unsupervised relation extraction (Rink and Harabagiu, 2011; Yao et al., 2011).
Introduction
We compare our approach with several baseline systems, including a generative model approach, a clustering method that does not disambiguate between senses, and our approach with different features.
Our Approach
The two theme features are extracted from generative models, and each is a topic number.
Related Work
There has been considerable interest in unsupervised relation discovery, including clustering approaches, generative models, and many other methods.
Related Work
Our approach employs generative models for path sense disambiguation, which achieves better performance than directly applying generative models to unsupervised relation discovery.
generative model is mentioned in 10 sentences in this paper.
Druck, Gregory and Pang, Bo
Abstract
In this paper, we propose a generative model that jointly identifies user-proposed refinements in instruction reviews at multiple granularities, and aligns them to the appropriate steps in the original instructions.
Conclusion and Future Work
In this paper, we developed unsupervised methods based on generative models for mining refinements to online instructions from reviews.
Introduction
Motivated by this, we propose a generative model for solving these tasks jointly without labeled data.
Models
To identify refinements without labeled data, we propose a generative model of reviews (or more generally documents) with latent variables.
Models
Foulds and Smyth (2011) propose a generative model for MIL in which the generation of the bag label y is conditioned on the instance labels z.
Related Work
We propose a generative model that makes predictions at both the review and review segment level.
generative model is mentioned in 6 sentences in this paper.
Pauls, Adam and Klein, Dan
Experiments
Table 2: Perplexity of several generative models on Section 0 of the WSJ.
Experiments
Our model outperforms all other generative models, though the improvement over the n-gram model is not statistically significant.
Experiments
We would like to use our model to make grammaticality judgements, but as a generative model it can only provide us with probabilities.
generative model is mentioned in 5 sentences in this paper.
Tang, Hao and Keshet, Joseph and Livescu, Karen
Abstract
Most previous approaches have involved generative modeling of the distribution of pronunciations, usually trained to maximize likelihood.
Introduction
In other words, these approaches optimize generative models using discriminative criteria.
Introduction
We propose a general, flexible discriminative approach to pronunciation modeling, rather than discriminatively optimizing a generative model.
Introduction
For generative models, phonetic error rate of generated pronunciations (Venkataramani and Byrne, 2001) and
generative model is mentioned in 4 sentences in this paper.
Wang, William Yang and Mayfield, Elijah and Naidu, Suresh and Dittmar, Jeremiah
Introduction
SAGE (Eisenstein et al., 2011a), a recently proposed sparse additive generative model of language, addresses many of the drawbacks of LDA.
Prediction Experiments
In the second experiment, in addition to the linear kernel SVM, we also compare our SME model to a state-of-the-art sparse generative model of text (Eisenstein et al., 2011a), and vary the size of the input vocabulary W exponentially from 2^9 to the full size of our training vocabulary.
Prediction Experiments
In this experiment, we compare SME with a state-of-the-art sparse generative model : SAGE (Eisenstein et al., 2011a).
Prediction Experiments
Unlike hierarchical Dirichlet processes (Teh et al., 2006), in parametric Bayesian generative models , the number of topics K is often set manually, and can influence the model’s accuracy significantly.
generative model is mentioned in 4 sentences in this paper.
Alfonseca, Enrique and Filippova, Katja and Delort, Jean-Yves and Garrido, Guillermo
Introduction
Some techniques that have been used are Markov Random Fields (Poon and Domingos, 2009) and Bayesian generative models (Titov and Klementiev, 2011).
Unsupervised relational pattern learning
Figure 2: Plate diagram of the generative model used.
Unsupervised relational pattern learning
Generative model: Once these collections are built, we use the generative model from Figure 2 to learn the probability that a dependency path is conveying some relation between the entities it connects.
generative model is mentioned in 3 sentences in this paper.
Chambers, Nathanael
Abstract
This model alone improves on previous generative models by 77%.
Introduction
Our model outperforms the generative models of previous work by 77%.
Timestamp Classifiers
We thus begin with a bag-of-words approach, reproducing the generative model used by both de Jong (2005) and Kanhabua and Norvag (2008; 2009).
generative model is mentioned in 3 sentences in this paper.
Nguyen, Viet-An and Boyd-Graber, Jordan and Resnik, Philip
Modeling Multiparty Discussions
These topics are part of a generative model posited to have produced a corpus.
Modeling Multiparty Discussions
In this section, we develop SITS, a generative model of multiparty discourse that jointly discovers topics and speaker-specific topic shifts from an unannotated corpus (Figure 1a).
Topic Segmentation as a Social Process
Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics.
generative model is mentioned in 3 sentences in this paper.