Index of papers in Proc. ACL 2010 that mention
  • LDA
Johnson, Mark
Abstract
Latent Dirichlet Allocation (LDA) models are used as “topic models” to produce a low-dimensional representation of documents, while Probabilistic Context-Free Grammars (PCFGs) define distributions over trees.
Abstract
The paper begins by showing that LDA topic models can be viewed as a special kind of PCFG, so Bayesian inference for PCFGs can be used to infer topic models as well.
Abstract
Exploiting the close relationship between LDA and PCFGs just described, we propose two novel probabilistic models that combine insights from LDA and Adaptor Grammar (AG) models.
Introduction
Specifically, we show that an LDA model can be expressed as a certain kind of PCFG,
Introduction
so Bayesian inference for PCFGs can be used to learn LDA topic models as well.
Introduction
The importance of this observation is primarily theoretical, as current Bayesian inference algorithms for PCFGs are less efficient than those for LDA inference.
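To make the LDA-as-PCFG correspondence concrete, here is a minimal Python sketch that builds the rule skeleton of one such encoding. The function name and rule schema (Doc_d → Topic_t, Topic_t → w) are illustrative assumptions, not the exact grammar from the paper.

```python
from collections import defaultdict

def lda_as_pcfg(num_docs, num_topics, vocab):
    """Rule skeleton of an LDA-as-PCFG encoding (illustrative):
       Doc_d   -> Topic_t   rule probabilities play the role of theta_d
       Topic_t -> w         rule probabilities play the role of phi_t
    The exact rule schema in the paper may differ."""
    rules = defaultdict(list)
    for d in range(num_docs):
        for t in range(num_topics):
            rules[f"Doc_{d}"].append((f"Topic_{t}",))   # per-document topic choice
    for t in range(num_topics):
        for w in vocab:
            rules[f"Topic_{t}"].append((w,))            # per-topic word choice
    return rules

# A Bayesian PCFG learner that estimates these rule probabilities
# (e.g. by Gibbs sampling over parse trees) is then doing LDA inference.
```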
LDA is mentioned in 51 sentences in this paper.
Topics mentioned in this paper:
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Integrating Semantic Constraint into Surprisal
The factor A(wn, h) is essentially based on a comparison between the vector representing the current word wn and the vector representing the prior history h. Varying the method for constructing word vectors (e.g., using LDA or a simpler semantic space model) and for combining them into a representation of the prior context h (e.g., using additive or multiplicative functions) produces distinct models of semantic composition.
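A minimal sketch of that composition step, assuming cosine similarity as the vector comparison; the helper names are hypothetical and the paper's actual A(wn, h) may be defined differently.

```python
import numpy as np

def compose_history(word_vectors, mode="additive"):
    """Combine the vectors of the preceding words into one representation
    of the prior context h (hypothetical helper)."""
    vecs = np.array(word_vectors, dtype=float)
    return vecs.sum(axis=0) if mode == "additive" else vecs.prod(axis=0)

def semantic_factor(w_vec, h_vec):
    """One plausible A(wn, h): cosine similarity between the current
    word's vector and the composed history vector."""
    return float(np.dot(w_vec, h_vec) /
                 (np.linalg.norm(w_vec) * np.linalg.norm(h_vec) + 1e-12))

# Example: compare additive vs. multiplicative context composition.
history = [np.array([0.2, 0.7, 0.1]), np.array([0.5, 0.4, 0.1])]
word = np.array([0.3, 0.6, 0.1])
print(semantic_factor(word, compose_history(history, "additive")))
print(semantic_factor(word, compose_history(history, "multiplicative")))
```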
Method
We also trained the LDA model on BLLIP, using the Gibbs sampling procedure discussed in Griffiths et al.
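For reference, a compact collapsed Gibbs sampler in the style of Griffiths and Steyvers (2004); this is an illustrative sketch, not the authors' implementation or settings.

```python
import numpy as np

def lda_gibbs(docs, V, T, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampler for LDA (minimal sketch).
    docs: list of documents, each a list of word ids in [0, V)."""
    rng = np.random.default_rng(seed)
    n_dt = np.zeros((len(docs), T))                 # doc-topic counts
    n_tw = np.zeros((T, V))                         # topic-word counts
    n_t = np.zeros(T)                               # tokens per topic
    z = [rng.integers(T, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):                  # initialise counts
        for i, w in enumerate(doc):
            t = z[d][i]
            n_dt[d, t] += 1; n_tw[t, w] += 1; n_t[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                         # remove this token's assignment
                n_dt[d, t] -= 1; n_tw[t, w] -= 1; n_t[t] -= 1
                p = (n_dt[d] + alpha) * (n_tw[:, w] + beta) / (n_t + V * beta)
                t = rng.choice(T, p=p / p.sum())    # resample its topic
                z[d][i] = t
                n_dt[d, t] += 1; n_tw[t, w] += 1; n_t[t] += 1
    return n_dt, n_tw                               # smooth with alpha/beta for estimates
```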
Models of Processing Difficulty
LDA is a probabilistic topic model offering an alternative to spatial semantic representations.
Models of Processing Difficulty
Whereas in LSA words are represented as points in a multidimensional space, LDA represents words using topics.
Results
SSS Additive: .03820***; SSS Multiplicative: .00895***; LDA Additive: .02500
Results
Table 2: Coefficients of LME models including simple semantic space (SSS) or Latent Dirichlet Allocation (LDA) as factors; ***p < .001
Results
Besides replicating Pynte et al.’s (2008) finding, we were also interested in assessing whether the underlying semantic representation (simple semantic space or LDA) and composition function (additive versus multiplicative) modulate reading times differentially.
LDA is mentioned in 17 sentences in this paper.
Topics mentioned in this paper:
Ó Séaghdha, Diarmuid
Related work
These include the Latent Dirichlet Allocation (LDA) model of Blei et al.
Related work
(2007) integrate a model of random walks on the WordNet graph into an LDA topic model to build an unsupervised word sense disambiguation system.
Related work
and Lapata (2009) adapt the basic LDA model for application to unsupervised word sense induction; in this context, the topics learned by the model are assumed to correspond to distinct senses of a particular lemma.
Three selectional preference models
As noted above, LDA was originally introduced to model sets of documents in terms of topics, or clusters of terms, that they share in varying proportions.
Three selectional preference models
The high-level “generative story” for the LDA selectional preference model is as follows:
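The excerpt above ends before the enumeration it introduces; the sketch below writes out the standard LDA-style story for predicate-argument data, with the predicate playing the role of a document and its arguments the role of words. Hyperparameters, names, and details are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def generate_arguments(predicates, vocab, T=50, alpha=0.1, beta=0.01,
                       n_args=100, seed=0):
    """Standard LDA-style generative story for selectional preferences
    (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet([beta] * len(vocab), size=T)      # topic -> argument-word dist.
    corpus = {}
    for pred in predicates:
        theta = rng.dirichlet([alpha] * T)                # predicate -> topic dist.
        args = []
        for _ in range(n_args):
            z = rng.choice(T, p=theta)                    # draw a topic for this slot
            args.append(vocab[rng.choice(len(vocab), p=phi[z])])  # draw an argument
        corpus[pred] = args
    return corpus
```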
Three selectional preference models
(2009) for LDA.
LDA is mentioned in 29 sentences in this paper.
Topics mentioned in this paper:
Celikyilmaz, Asli and Hakkani-Tur, Dilek
Background and Motivation
A hierarchical model is more appealing for summarization than a “flat” model, e.g., LDA (Blei et al., 2003b), in that one can discover “abstract” and “specific” topics.
Experiments and Discussions
* HbeSum (Hybrid Flat Summarizer): To investigate the performance of the hierarchical topic model, we build another hybrid model using flat LDA (Blei et al., 2003b).
Experiments and Discussions
In LDA each sentence is a superposition of all K topics with sentence-specific weights; there is no hierarchical relation between topics.
Experiments and Discussions
Instead of the new tree-based sentence scoring (§ 4), we present a similar method using topics from LDA at the sentence level.
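One simple way such a flat-LDA sentence scorer could look, assuming sentences are ranked by the similarity of their inferred topic mixtures to the document set's overall mixture; this is a hedged illustration, not the paper's actual scoring formula.

```python
import numpy as np

def flat_lda_sentence_scores(sentence_thetas, doc_theta):
    """Rank sentences by cosine similarity between each sentence's topic
    mixture and the document set's topic mixture (illustrative only)."""
    doc = np.asarray(doc_theta, dtype=float)
    doc = doc / np.linalg.norm(doc)
    scores = []
    for theta in sentence_thetas:
        s = np.asarray(theta, dtype=float)
        scores.append(float(s @ doc / (np.linalg.norm(s) + 1e-12)))
    return scores
```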
Introduction
We present a probabilistic topic model on sentence level building on hierarchical Latent Dirichlet Allocation (hLDA) (Blei et al., 2003a), which is a generalization of LDA (Blei et al., 2003b).
LDA is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Ritter, Alan and Mausam and Etzioni, Oren
Introduction
Unsupervised topic models, such as latent Dirichlet allocation (LDA) (Blei et al., 2003) and its variants, are characterized by a set of hidden topics, which represent the underlying semantic structure of a document collection.
Introduction
In particular, our system, called LDA-SP, uses LinkLDA (Erosheva et al., 2004), an extension of LDA that simultaneously models two sets of distributions for each topic.
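A sketch of the LinkLDA-style generative process this describes: one shared topic mixture per relation, but two per-topic word distributions, one for each argument position. Function and variable names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def generate_relation_tuples(relations, vocab1, vocab2, T=100,
                             alpha=0.1, beta=0.01, n_tuples=50, seed=0):
    """LinkLDA-style sketch: a shared topic mixture theta per relation,
    with separate topic-word distributions for arg1 and arg2."""
    rng = np.random.default_rng(seed)
    phi1 = rng.dirichlet([beta] * len(vocab1), size=T)    # topic -> arg1 words
    phi2 = rng.dirichlet([beta] * len(vocab2), size=T)    # topic -> arg2 words
    corpus = {}
    for rel in relations:
        theta = rng.dirichlet([alpha] * T)                # shared topic mixture
        tuples = []
        for _ in range(n_tuples):
            z1, z2 = rng.choice(T, p=theta), rng.choice(T, p=theta)
            a1 = vocab1[rng.choice(len(vocab1), p=phi1[z1])]
            a2 = vocab2[rng.choice(len(vocab2), p=phi2[z2])]
            tuples.append((a1, a2))
        corpus[rel] = tuples
    return corpus
```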
Previous Work
Topic models such as LDA (Blei et al., 2003) and its variants have recently begun to see use in many NLP applications such as summarization (Daumé III and Marcu, 2006), document alignment and segmentation (Chen et al., 2009), and inferring class-attribute hierarchies (Reisinger and Pasca, 2009).
Previous Work
Van Durme and Gildea (2009) proposed applying LDA to general knowledge templates extracted using the KNEXT system (Schubert and Tong, 2003).
Topic Models for Selectional Prefs.
We first describe the straightforward application of LDA to modeling our corpus of extracted relations.
Topic Models for Selectional Prefs.
In this case two separate LDA models are used to model a1 and a2 independently.
Topic Models for Selectional Prefs.
Formally, LDA generates each argument in the corpus of relations as follows:
LDA is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Feng, Yansong and Lapata, Mirella
Image Annotation
Latent Dirichlet Allocation (LDA, Blei et al., 2003)
Image Annotation
The basic idea underlying LDA, and topic models in general, is that each document is composed of a probability distribution over topics, where each topic represents a probability distribution over words.
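A tiny numeric illustration of that mixture view, with made-up theta and phi values: the probability of a word in a document marginalizes over topics.

```python
import numpy as np

theta_d = np.array([0.7, 0.2, 0.1])        # document's distribution over 3 topics
phi = np.array([[0.5, 0.3, 0.2],           # each row: a topic's distribution over 3 words
                [0.1, 0.6, 0.3],
                [0.2, 0.2, 0.6]])
p_w_given_d = theta_d @ phi                # p(w|d) = sum_t theta_d[t] * phi[t, w]
print(p_w_given_d)                         # [0.39, 0.35, 0.26]
```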
Image Annotation
Examples include PLSA-based approaches to image annotation (e.g., Monay and Gatica-Perez 2007) and correspondence LDA (Blei and Jordan, 2003).
LDA is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Li, Linlin and Roth, Benjamin and Sporleder, Caroline
Related Work
(2007), for example, use LDA to capture global context.
Related Work
(2007) enhance the basic LDA algorithm by incorporating WordNet senses as an additional latent variable.
The Sense Disambiguation Model
LDA is a Bayesian version of this framework with Dirichlet hyper-parameters (Blei et al., 2003).
LDA is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: