Index of papers in Proc. ACL 2009 that mention
  • LDA
Reisinger, Joseph and Pasca, Marius
Hierarchical Topic Models 3.1 Latent Dirichlet Allocation
The underlying mechanism for our annotation procedure is LDA (Blei et al., 2003b), a fully Bayesian extension of probabilistic Latent Semantic Analysis (Hofmann, 1999).
Hierarchical Topic Models 3.1 Latent Dirichlet Allocation
Given D labeled attribute sets w_d, d ∈ D, LDA infers an unstructured set of T latent annotated concepts over which attribute sets decompose as mixtures. The latent annotated concepts represent semantically coherent groups of attributes expressed in the data, as shown in Example 1.
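As a concrete illustration of this decomposition, here is a minimal sketch using gensim rather than the authors' implementation; the class labels, attribute sets, and T below are invented for illustration:

```python
# Minimal sketch: decomposing labeled attribute sets into T latent concepts
# with LDA. Uses gensim (not the authors' implementation); the attribute
# sets below are invented for illustration.
from gensim import corpora, models

# Each "document" w_d is the attribute set extracted for one class label.
attribute_sets = {
    "painter":   ["style", "influences", "paintings", "birthplace"],
    "car model": ["style", "engine", "fuel economy", "price"],
    "poet":      ["style", "poems", "influences", "birthplace"],
}

dictionary = corpora.Dictionary(attribute_sets.values())
corpus = [dictionary.doc2bow(attrs) for attrs in attribute_sets.values()]

# Infer T latent annotated concepts; each attribute set then decomposes
# as a mixture over them.
T = 2
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=T,
                      alpha="auto", random_state=0)

for label, bow in zip(attribute_sets, corpus):
    print(label, lda.get_document_topics(bow))
```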
Hierarchical Topic Models 3.1 Latent Dirichlet Allocation
The generative model for LDA is given by
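The standard generative process from Blei et al. (2003) is the following; this is the textbook formulation in the usual notation, not necessarily the paper's exact presentation, with d indexing attribute sets and w_{d,n} their attributes:

```latex
% Standard LDA generative process (Blei et al., 2003); textbook notation.
\begin{align*}
\theta_d  &\sim \mathrm{Dirichlet}(\alpha)            && \text{topic mixture for attribute set } d\\
z_{d,n}   &\sim \mathrm{Multinomial}(\theta_d)        && \text{latent concept for the $n$-th attribute}\\
w_{d,n}   &\sim \mathrm{Multinomial}(\beta_{z_{d,n}}) && \text{attribute drawn from concept } z_{d,n}
\end{align*}
```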
Introduction
In this paper, we show that both of these goals can be realized jointly using a probabilistic topic model, namely hierarchical Latent Dirichlet Allocation (LDA) (Blei et al., 2003b).
Introduction
There are three main advantages to using a topic model as the annotation procedure:
(1) Unlike hierarchical clustering (Duda et al., 2000), the attribute distribution at a concept node is not composed of the distributions of its children; attributes found specific to the concept Painter need not appear in the distribution of attributes for Person, making the internal distribution at each concept more meaningful, since it contains only attributes specific to that concept.
(2) Since LDA is fully Bayesian, its model semantics allow additional prior information to be included, unlike standard models such as probabilistic Latent Semantic Analysis (Hofmann, 1999), improving annotation precision.
(3) Attributes with multiple related meanings (i.e., polysemous attributes) are modeled implicitly: if an attribute (e.g., “style”) occurs in two separate input classes (e.g., poets and car models), it can attach at two different concepts in the ontology, which is better than attaching it at their most specific common ancestor (Whole) if that ancestor is too general to be useful.
Introduction
We evaluate three variants: (1) a fixed-structure approach where each flat class is attached to WN using a simple string-matching heuristic and concept nodes are annotated using LDA, (2) an extension of LDA allowing for sense selection in addition to annotation, and (3) an approach employing a nonparametric prior over tree structures capable of inferring arbitrary ontologies.
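Variant (3) relies on the nested Chinese restaurant process of Blei et al. (2003b) as its prior over trees. Below is a minimal sketch of drawing one root-to-leaf path from an nCRP prior; gamma and the depth are illustrative parameters, not values from the paper:

```python
# Minimal sketch of drawing a path from a nested CRP prior (Blei et al., 2003b),
# the nonparametric prior over tree structures used by variant (3).
import random
from collections import defaultdict

def ncrp_path(tree, depth, gamma=1.0):
    """Seat one customer at each level of the tree; return its root-to-leaf path."""
    path, node = [], ()
    for _ in range(depth):
        counts = tree[node]               # child index -> customers seated so far
        total = sum(counts.values())
        r = random.uniform(0, total + gamma)
        child = len(counts)               # default: open a new table (new subtree)
        for existing, c in counts.items():
            r -= c
            if r <= 0:                    # join existing table w.p. c / (total + gamma)
                child = existing
                break
        counts[child] += 1
        node = node + (child,)
        path.append(node)
    return path

tree = defaultdict(lambda: defaultdict(int))
for _ in range(5):
    print(ncrp_path(tree, depth=3))
```

Paths through popular subtrees are reused with probability proportional to their counts, while gamma controls how often a new branch (and hence a new concept node) is created.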
Ontology Annotation
Figure 2: Graphical models for the LDA variants (LDA, Fixed Structure LDA, nCRP); shaded nodes indicate observed quantities.
Ontology Annotation
We propose a set of Bayesian generative models based on LDA that take as input labeled attribute sets generated using an extraction procedure such as the above and organize the attributes in WN according to their level of generality.
LDA is mentioned in 31 sentences in this paper.
Zhang, Huibin and Zhu, Mingjie and Shi, Shuming and Wen, Ji-Rong
Experiments
LDA: Our approach with LDA as the topic model.
Experiments
The implementation of LDA is based on Blei’s variational EM code for LDA.
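Blei's lda-c fits LDA by variational EM; the loop below is a simplified, self-contained reimplementation of that procedure for illustration (fixed symmetric alpha, toy word-id documents), not the code the authors ran:

```python
# Simplified variational EM for LDA (Blei et al., 2003), for illustration only.
# E-step: per-document variational updates for phi and gamma.
# M-step: re-estimate the topic-word distributions beta from expected counts.
import numpy as np
from scipy.special import digamma

def variational_em(docs, V, K, alpha=0.1, em_iters=20, e_iters=20):
    rng = np.random.default_rng(0)
    beta = rng.dirichlet(np.ones(V), size=K)          # K x V topic-word probs
    for _ in range(em_iters):
        beta_acc = np.zeros((K, V))
        for doc in docs:                              # doc: list of word ids
            w = np.asarray(doc)
            gamma = np.full(K, alpha + len(w) / K)    # variational Dirichlet params
            for _ in range(e_iters):                  # E-step
                phi = beta[:, w].T * np.exp(digamma(gamma))   # N x K
                phi /= phi.sum(axis=1, keepdims=True)
                gamma = alpha + phi.sum(axis=0)
            np.add.at(beta_acc.T, w, phi)             # accumulate expected counts
        beta = beta_acc + 1e-9                        # M-step (smoothed to avoid 0s)
        beta /= beta.sum(axis=1, keepdims=True)
    return beta

docs = [[0, 1, 2, 1], [3, 4, 3, 4], [0, 2, 4]]        # toy word-id documents
print(variational_em(docs, V=5, K=2).round(2))
```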
Experiments
Table 4 shows the average query processing time and result quality of the LDA approach as the frequency threshold h varies. Similar results are observed for the pLSI approach.
Our Approach
At the same time, one document can be related to multiple topics in some topic models (e.g., pLSI and LDA).
Our Approach
Here we use LDA as an example to …
Our Approach
According to the assumption of LDA and our concept mapping in Table 3, a RASC (“document”) is viewed as a mixture of hidden semantic classes (“topics”).
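Under that mapping, the probability of an item w appearing in a RASC R decomposes as the standard topic-model mixture over the T hidden semantic classes z; the notation here is ours, not the paper's:

```latex
% Standard mixture decomposition for a topic model; notation is ours.
p(w \mid R) = \sum_{z=1}^{T} p(w \mid z)\, p(z \mid R)
```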
Topic Models
LDA (Blei et al., 2003): In LDA, the topic mixture is drawn from a conjugate Dirichlet prior that remains the same for all documents (Figure 1).
Topic Models
Figure 1. Graphical model representation of LDA, from Blei et al.
LDA is mentioned in 14 sentences in this paper.