Index of papers in Proc. ACL 2010 that mention
  • topic distributions
Li, Linlin and Roth, Benjamin and Sporleder, Caroline
Conclusion
The basic idea of these models is to compare the topic distribution of a target instance with the candidate sense paraphrases and choose the most probable one.
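To make the comparison step concrete, here is a minimal Python sketch that chooses the sense paraphrase whose topic distribution best matches the target instance's. The cosine measure and the toy numbers are assumptions for illustration; the paper's actual model selects the most probable sense under a topic model, not via this heuristic.

```python
import numpy as np

def closest_sense(instance_topics, sense_topics):
    """Return the sense whose topic distribution is most similar
    to the target instance's distribution (cosine similarity)."""
    def cos(p, q):
        return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))
    return max(sense_topics, key=lambda s: cos(instance_topics, sense_topics[s]))

# Toy 4-topic distributions (hypothetical numbers, for illustration only).
instance = np.array([0.70, 0.10, 0.15, 0.05])
senses = {
    "literal":    np.array([0.10, 0.60, 0.20, 0.10]),
    "nonliteral": np.array([0.65, 0.05, 0.20, 0.10]),
}
print(closest_sense(instance, senses))  # -> nonliteral
```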
Experiments
We then compare the topic distributions of literal and nonliteral senses.
Experiments
As the topic distributions of nouns and verbs exhibit different properties, topic comparisons across parts-of-speech do not make sense.
Experiments
We make the topic distributions comparable by making sure each type of paraphrase contains the same set of parts-of-speech.
Introduction
In this paper, we propose a novel framework which is fairly resource-poor in that it requires only 1) a large unlabelled corpus from which to estimate the topic distributions, and 2) paraphrases for the possible target senses.
Related Work
In addition to generating a topic from the document’s topic distribution and sampling a word from that topic, the enhanced model also generates a distributional neighbour for the chosen word and then assigns a sense based on the word, its neighbour and the topic.
The Sense Disambiguation Model
A similar topic distribution to that of the individual words ‘norm’ or ‘trouble’ would be strong supporting evidence of the corresponding idiomatic reading.
“topic distributions” is mentioned in 7 sentences in this paper.
Celikyilmaz, Asli and Hakkani-Tur, Dilek
Background and Motivation
An alternative yet feasible solution, presented in this work, is building a model that can summarize new document clusters using characteristics of topic distributions of training documents.
Introduction
Such models can yield comparable or better performance on DUC and other evaluations, since representing documents as topic distributions rather than bags of words diminishes the effect of lexical variability.
Introduction
Our focus is on identifying similarities of candidate sentences to summary sentences using a novel tree-based sentence scoring algorithm, concerning topic distributions at different levels of the discovered hierarchy as described in § 3 and § 4.
Summary-Focused Hierarchical Model
We discover hidden topic distributions of sentences in a given document cluster along with provided summary sentences based on hLDA described in (Blei et al., 2003a).
Summary-Focused Hierarchical Model
We build a summary-focused hierarchical probabilistic topic model, sumHLDA, for each document cluster at sentence level, because it enables capturing expected topic distributions in given sentences directly from the model.
Summary-Focused Hierarchical Model
Each node is associated with a topic distribution over words.
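As an illustration of the node structure described in that excerpt, here is a minimal sketch in which each tree node carries a topic distribution over words; the class and field names are hypothetical, not taken from sumHLDA.

```python
import random
from dataclasses import dataclass, field

@dataclass
class TopicNode:
    """One node in an hLDA-style topic tree: a distribution over
    words plus children for more specific subtopics (hypothetical)."""
    word_probs: dict                      # word -> P(word | this node's topic)
    children: list = field(default_factory=list)

    def sample_word(self):
        words, probs = zip(*self.word_probs.items())
        return random.choices(words, weights=probs, k=1)[0]

# Toy two-level tree: a general root topic with one more specific child.
root = TopicNode({"document": 0.5, "summary": 0.5},
                 children=[TopicNode({"cluster": 0.6, "sentence": 0.4})])
print(root.sample_word(), root.children[0].sample_word())
```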
“topic distributions” is mentioned in 6 sentences in this paper.
Feng, Yansong and Lapata, Mirella
Extractive Caption Generation
The similarity between an image and a sentence can be broadly measured by the extent to which they share the same topic distributions (Steyvers and Griffiths, 2007).
Extractive Caption Generation
$KL(p, q) = \sum_{j=1}^{K} p_j \log_2 \frac{p_j}{q_j}$ (4), where $p$ and $q$ are shorthand for the image topic distribution $P_{dMix}$ and the sentence topic distribution $P_{Sd}$, respectively.
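Equation (4) is the standard Kullback-Leibler divergence; a direct Python transcription follows. The epsilon smoothing is my own guard against zero probabilities, not part of the paper.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p, q) = sum_j p_j * log2(p_j / q_j), as in equation (4).
    eps avoids division by zero; it is my addition, not the paper's."""
    return sum(pj * math.log2((pj + eps) / (qj + eps))
               for pj, qj in zip(p, q) if pj > 0)

# Toy image (P_dMix) and sentence (P_Sd) topic distributions.
p_image    = [0.5, 0.3, 0.2]
q_sentence = [0.4, 0.4, 0.2]
print(kl_divergence(p_image, q_sentence))  # small value: distributions are close
```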
Image Annotation
The image annotation model takes the topic distributions into account when finding the most likely keywords for an image and its associated document.
“topic distributions” is mentioned in 3 sentences in this paper.
Ritter, Alan and Mausam and Etzioni, Oren
Abstract
By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional class-based approaches, it produces human-interpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power.
Topic Models for Selectional Prefs.
…ing related topic pairs between arguments, we employ a sparse prior over the per-relation topic distributions.
Topic Models for Selectional Prefs.
Finally we note that, once a topic distribution has been learned over a set of training relations, one can efficiently apply inference to unseen relations (Yao et al., 2009).
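As a hedged sketch of why inference on unseen relations is cheap once topics are learned: holding the per-topic word distributions fixed, a relation's preference for an argument reduces to a sum over topics. The function and parameter names below are hypothetical; LDA-SP's actual inference is sampling-based, not this closed form.

```python
def argument_score(word, topic_given_relation, word_given_topic):
    """P(word | relation) = sum_t P(word | t) * P(t | relation).
    A simplified selectional-preference score; LDA-SP's actual
    inference is sampling-based, not this closed form."""
    return sum(p_t * word_given_topic[t].get(word, 0.0)
               for t, p_t in topic_given_relation.items())

# Hypothetical learned parameters for the arg2 slot of "X eats Y".
theta = {"food": 0.8, "abstract": 0.2}               # P(topic | relation)
phi   = {"food":     {"pizza": 0.30, "idea": 0.00},
         "abstract": {"pizza": 0.00, "idea": 0.25}}  # P(word | topic)
print(argument_score("pizza", theta, phi))  # 0.24
print(argument_score("idea",  theta, phi))  # 0.05
```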
“topic distributions” is mentioned in 3 sentences in this paper.