Index of papers in Proc. ACL 2010 that mention
  • generative model
Feng, Yansong and Lapata, Mirella
Abstract
Inspired by recent work in summarization, we propose extractive and abstractive caption generation models.
Abstractive Caption Generation
Despite its simplicity, the caption generation model in (7) has a major drawback.
Abstractive Caption Generation
After integrating the attachment probabilities into equation (12), the caption generation model becomes:
Conclusions
Rather than adopting a two-stage approach, where the image processing and caption generation are carried out sequentially, a more general model should integrate the two steps in a unified framework.
Experimental Setup
In this section we discuss our experimental design for assessing the performance of the caption generation models presented above.
Image Annotation
It is important to note that the caption generation models we propose are not especially tied
generative model is mentioned in 6 sentences in this paper.
Lin, Shih-Hsiang and Chen, Berlin
Proposed Methods
There are many ways to construct the above-mentioned three component models, i.e., the sentence generative model P(D | S_j), the sentence prior model P(S_j), and the loss function L(S_i, S_j).
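Read as a risk-minimization setup, these components combine in the standard Bayes decision rule; the following is a hedged reconstruction from the quoted component names, not the paper's own equation:

\[
R(S_i \mid D) \;=\; \sum_j L(S_i, S_j)\, P(S_j \mid D), \qquad P(S_j \mid D) \;\propto\; P(D \mid S_j)\, P(S_j),
\]

i.e., each candidate sentence is scored by its expected loss against the other candidates, with the posterior assembled from the generative and prior models.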
Proposed Methods
4.1 Sentence generative model
Proposed Methods
In the LM approach, each sentence in a document can be simply regarded as a probabilistic generative model consisting of a unigram distribution (the so-called "bag-of-words" assumption) for generating the document (Chen et al., 2009): P(D | S_j) = ∏_{w∈D} P(w | S_j)^{c(w,D)}, where c(w,D) is the count of word w in D.
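As a concrete illustration of this unigram likelihood, here is a minimal sketch; the add-alpha smoothing and all names are assumptions for the example, not details from the paper:

    # Sketch of the unigram ("bag-of-words") sentence generative model:
    # log P(D | S_j) = sum over w in D of c(w, D) * log P(w | S_j).
    import math
    from collections import Counter

    def log_p_doc_given_sentence(doc_words, sent_words, vocab_size, alpha=1.0):
        # Add-alpha smoothing keeps unseen words from zeroing the product.
        sent_counts = Counter(sent_words)
        total = len(sent_words)
        logp = 0.0
        for w, c in Counter(doc_words).items():
            p_w = (sent_counts[w] + alpha) / (total + alpha * vocab_size)
            logp += c * math.log(p_w)
        return logp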
generative model is mentioned in 5 sentences in this paper.
Hall, David and Klein, Dan
Conclusion
We presented a new generative model of word lists that automatically finds cognate groups from scrambled vocabulary lists.
Introduction
In this paper, we present a new generative model for the automatic induction of cognate groups given only (1) a known family tree of languages and (2) word lists from those languages.
Model
In this section, we describe a new generative model for vocabulary lists in multiple related languages given the phylogenetic relationship between the languages (their family tree).
Model
Figure 1(a) graphically describes our generative model for three Romance languages: Italian, Portuguese, and Spanish. In each cognate group, each word W_ℓ is generated from its parent according to a conditional distribution with parameter φ_ℓ, which is specific to that edge in the tree, but shared between all cognate groups.
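Written out, the per-edge generation step reads roughly as follows; the notation (g for a cognate group, π(ℓ) for the parent of language ℓ in the family tree) is introduced here for illustration, not taken from the paper:

\[
W_\ell^{(g)} \;\sim\; P\big(\cdot \mid W_{\pi(\ell)}^{(g)};\, \phi_\ell\big),
\]

with the edge parameter φ_ℓ shared across all cognate groups g.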
generative model is mentioned in 4 sentences in this paper.
Celikyilmaz, Asli and Hakkani-Tur, Dilek
Abstract
In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference.
Background and Motivation
Our approach differs from the early work in that we combine a generative hierarchical model and a regression model to score sentences in new documents, eliminating the need to build a generative model for new document clusters.
Introduction
In this paper, we present a novel approach that formulates MDS as a prediction problem based on a two-step hybrid model: a generative model for hierarchical topic discovery and a regression model for inference.
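A minimal sketch of this two-step shape, with off-the-shelf stand-ins: flat LDA replaces the paper's hierarchical topic model and ridge regression its regression model, so none of the components below are the authors' own:

    # Step 1: fit a generative topic model once; step 2: train a regressor
    # on its topic features, so new clusters need no refitted topic model.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import Ridge

    def train_two_step(sentences, salience_scores, n_topics=10):
        vec = CountVectorizer()
        counts = vec.fit_transform(sentences)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        topic_feats = lda.fit_transform(counts)          # pattern discovery
        reg = Ridge().fit(topic_feats, salience_scores)  # inference model
        return vec, lda, reg

    def score_new_sentences(new_sentences, vec, lda, reg):
        # Scoring new documents reuses the trained models directly.
        return reg.predict(lda.transform(vec.transform(new_sentences)))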
generative model is mentioned in 3 sentences in this paper.
Ó Séaghdha, Diarmuid
Conclusions and future work
Another potential direction for system improvement would be an integration of our generative model with Bergsma et al.’s (2008) discriminative model — this could be done in a number of ways, including using the induced classes of a topic model as features for a discriminative classifier or using the discriminative classifier to produce additional high-quality training data from noisy unparsed text.
Introduction
Advantages of these models include a well-defined generative model that handles sparse data well, the ability to jointly induce semantic classes and predicate-specific distributions over those classes, and the enhanced statistical strength achieved by sharing knowledge across predicates.
Results
For frequent predicate-argument pairs (Seen datasets), Web counts are clearly better; however, the BNC counts are unambiguously superior to LDA and ROOTH-LDA (whose predictions are based entirely on the generative model even for observed items) for the Seen verb-object data only.
generative model is mentioned in 3 sentences in this paper.
Ritter, Alan and Mausam and Etzioni, Oren
Experiments
These similarity measures were shown to outperform the generative model of Rooth et al.
Previous Work
On the other hand, generative models produce complete probability distributions of the data, and hence can be integrated with other systems and tasks in a more principled manner (see Sections 4.2.2 and 4.3.1).
Topic Models for Selectional Prefs.
In the generative model for our data, each relation r has a corresponding multinomial over topics θ_r, drawn from a Dirichlet.
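To make the generative process concrete, here is a toy forward sample; the vocabulary, hyperparameters, and names are invented for illustration:

    # Forward-sample arguments for one relation: draw its topic mixture
    # theta_r from a Dirichlet, then a topic and an argument per instance.
    import numpy as np

    rng = np.random.default_rng(0)
    n_topics = 3
    vocab = ["dog", "car", "idea", "city"]
    beta = rng.dirichlet(np.ones(len(vocab)), size=n_topics)  # topic-word dists

    def sample_arguments(n_args, alpha=0.1):
        theta_r = rng.dirichlet(alpha * np.ones(n_topics))  # relation's mixture
        args = []
        for _ in range(n_args):
            z = rng.choice(n_topics, p=theta_r)             # topic for instance
            args.append(str(rng.choice(vocab, p=beta[z])))  # argument word
        return args

    print(sample_arguments(5))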
generative model is mentioned in 3 sentences in this paper.
Wuebker, Joern and Mauser, Arne and Ney, Hermann
Experimental Evaluation
To investigate the generative models, we replace the two phrase translation probabilities and keep the other features identical to the baseline.
Phrase Model Training
Additionally we consider smoothing by different kinds of interpolation of the generative model with the state-of-the-art heuristics.
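One standard instance of such smoothing is linear interpolation; a hedged sketch, since the paper considers several kinds of interpolation and λ below is just a tunable weight, not a value from the paper:

\[
p(\tilde f \mid \tilde e) \;=\; \lambda\, p_{\mathrm{gen}}(\tilde f \mid \tilde e) \;+\; (1 - \lambda)\, p_{\mathrm{heur}}(\tilde f \mid \tilde e), \qquad 0 \le \lambda \le 1,
\]

where p_gen is the trained generative phrase model and p_heur the heuristic estimate it is smoothed with.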
Related Work
For a generative model, DeNero et al. (2006) gave a detailed analysis of the challenges and arising problems.
generative model is mentioned in 3 sentences in this paper.