Index of papers in Proc. ACL 2013 that mention
  • generative model
Kim, Joohyun and Mooney, Raymond
Background
The baseline generative model we use for reranking employs the unsupervised PCFG induction approach introduced by Kim and Mooney (2012).
Background
Our proposed reranking model is used to discriminatively reorder the top parses produced by this generative model.
Introduction
Since their system employs a generative model, discriminative reranking (Collins, 2000) could potentially improve its performance.
Introduction
By training a discriminative classifier that uses global features of complete parses to identify correct interpretations, a reranker can significantly improve the accuracy of a generative model.
Modified Reranking Algorithm
In reranking, a baseline generative model is first trained and generates a set of candidate outputs for each training example.
Modified Reranking Algorithm
The approach requires three subcomponents: 1) a GEN function that returns the list of top n candidate parse trees for each NL sentence produced by the generative model, 2) a feature function Φ that maps an NL sentence, e, and a parse tree, y, into a real-valued feature vector Φ(e, y) ∈ R^d, and 3) a reference parse tree that is compared to the highest-scoring parse tree during training.
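To make the three subcomponents concrete, here is a minimal sketch of a perceptron-style reranker in the spirit of Collins (2000); the names gen, phi, and reference are hypothetical stand-ins for the components listed above, not the authors' actual (modified) implementation.

```python
# Minimal sketch of perceptron-style reranking (after Collins, 2000).
# gen, phi, and reference are hypothetical stand-ins for the three
# subcomponents above, not the authors' implementation.
import numpy as np

def train_reranker(sentences, gen, phi, reference, dim, epochs=10):
    """gen(e): top-n candidate parses from the baseline generative model.
    phi(e, y): feature vector in R^d for sentence e and parse y.
    reference(e): the reference parse tree for sentence e."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for e in sentences:
            # Candidate parse the current weights score highest.
            best = max(gen(e), key=lambda y: w @ phi(e, y))
            gold = reference(e)
            if best != gold:
                # Update toward the reference parse, away from the mistake.
                w += phi(e, gold) - phi(e, best)
    return w

def rerank(e, gen, phi, w):
    """Return the highest-scoring candidate parse at test time."""
    return max(gen(e), key=lambda y: w @ phi(e, y))
```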
Related Work
Discriminative reranking is a common machine learning technique to improve the output of generative models.
Reranking Features
This section describes the features Φ extracted from parses produced by the generative model and used to rerank the candidates.
Reranking Features
Certainty assigned by the base generative model.
generative model is mentioned in 9 sentences in this paper.
Berg-Kirkpatrick, Taylor and Durrett, Greg and Klein, Dan
Introduction
We present a new, generative model specialized to transcribing printing-press era documents.
Model
We take a generative modeling approach inspired by the overall structure of the historical printing process.
Model
Our generative model, which is depicted in Figure 3, reflects this process.
Related Work
In the NLP community, generative models have been developed specifically for correcting outputs of OCR systems (Kolak et al., 2003), but these do not deal directly with images.
Results and Analysis
As noted earlier, one strength of our generative model is that we can make the values of certain pixels unobserved in the model, and let inference fill them in.
generative model is mentioned in 5 sentences in this paper.
Börschinger, Benjamin and Johnson, Mark and Demuth, Katherine
Background and related work
They propose a pipeline architecture involving two separate generative models, one for word segmentation and one for phonological variation.
Background and related work
This permits us to develop a joint generative model for both word segmentation and variation which we plan to extend to handle more phenomena in future work.
Conclusion and outlook
A major advantage of our generative model is the ease and transparency with which its assumptions can be modified and extended.
The computational model
One of the advantages of an explicitly defined generative model such as ours is that it is straightforward to gradually extend it by adding more cues, as we point out in the discussion.
generative model is mentioned in 4 sentences in this paper.
Gormley, Matthew R. and Eisner, Jason
Abstract
As an illustrative case, we study a generative model for dependency parsing.
Discussion
In principle, our branch-and-bound method can approach ε-optimal solutions to Viterbi training of locally normalized generative models, including the NP-hard case of grammar induction with the DMV.
The Constrained Optimization Task
Other locally normalized log-linear generative models (Berg-Kirkpatrick et al., 2010) would have a similar formulation.
The Constrained Optimization Task
This generative model defines a joint distribution over the sentences and their dependency trees.
generative model is mentioned in 4 sentences in this paper.
Lei, Tao and Long, Fan and Barzilay, Regina and Rinard, Martin
Abstract
We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser.
Model
We combine these two kinds of information into a Bayesian generative model in which the code quality of the specification tree is captured by the prior probability P(t) and the feature observations are encoded in the likelihood probability P(w|t).
Model
We assume the generative model operates by first generating the model parameters from a set of Dirichlet distributions.
Model
• Generating Model Parameters: For every pair of feature type f and phrase tag z, draw a multinomial distribution parameter θ^f_z from a Dirichlet prior P(θ^f_z).
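As an illustration of this first generative step, the following sketch draws one multinomial parameter vector per (feature type, phrase tag) pair from a symmetric Dirichlet prior; the inventories, vocabulary size, and concentration hyperparameter are assumptions, not values from the paper.

```python
# Illustrative sketch of the parameter-generation step: for every pair of
# feature type f and phrase tag z, draw a multinomial parameter vector
# theta[f][z] from a symmetric Dirichlet prior. All sizes and the
# concentration alpha are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
feature_types = ["word", "context"]   # hypothetical feature types f
phrase_tags = range(5)                # hypothetical phrase tags z
vocab_size = 1000                     # outcomes of each multinomial
alpha = 0.1                           # Dirichlet concentration parameter

theta = {f: {z: rng.dirichlet(np.full(vocab_size, alpha))
             for z in phrase_tags}
         for f in feature_types}

# Each theta[f][z] is a distribution over the vocabulary and parameterizes
# the likelihood P(w | t) for features of type f under tag z.
assert np.isclose(theta["word"][0].sum(), 1.0)
```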
generative model is mentioned in 4 sentences in this paper.
Mukherjee, Arjun and Liu, Bing
Conclusion
A novel technique was also proposed to rank n-gram phrases, where relevance-based ranking was used in conjunction with a semi-supervised generative model.
Introduction
We employ a semi-supervised generative model called JTE-P to jointly model AD-expressions, pair interactions, and discussion topics simultaneously in a single framework.
Model
JTE-P is a semi-supervised generative model motivated by the joint occurrence of expression types (agreement and disagreement), topics in discussion posts, and user pairwise interactions.
Model
Like most generative models for text, a post (document) is viewed as a bag of n-grams and each n-gram (word/phrase) takes one value from a predefined vocabulary.
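As a tiny illustration of this bag-of-n-grams view (the example post is invented and vocabulary handling is omitted):

```python
# Toy sketch of the bag-of-n-grams view of a post: word order is discarded
# and each unigram/bigram is treated as a draw from a predefined vocabulary.
from collections import Counter

post = "great point but I totally disagree with you".split()
ngrams = post + [" ".join(p) for p in zip(post, post[1:])]  # unigrams + bigrams
bag = Counter(ngrams)  # the post as an unordered multiset of n-grams
print(bag.most_common(3))
```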
generative model is mentioned in 4 sentences in this paper.
Sakaguchi, Keisuke and Arase, Yuki and Komachi, Mamoru
Evaluation with Native-Speakers
With respect to H, our discriminative models achieve 0.12 to 0.2 higher agreement than the baselines, indicating that the discriminative models can generate sound distractors more effectively than generative models.
Evaluation with Native-Speakers
The lower H on generative models may be because the distractors are semantically too close to the target (correct answer), as in the following examples:
Evaluation with Native-Speakers
As a result, the quiz from the generative models is not reliable, since both "published" and "issued" are correct.
Proposed Method
We rank the candidates by a generative model to consider the surrounding context (e.g.
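A minimal sketch of this kind of candidate ranking, assuming a toy bigram language model as the generative model (all counts and the smoothing scheme are invented for illustration; the paper's model and data may differ):

```python
# Toy sketch: rank fill-in-the-blank candidates with a generative bigram
# language model, scoring each candidate by the probability of the sentence
# with that candidate substituted into the blank.
import math
from collections import Counter

bigrams = Counter({("was", "published"): 50, ("was", "issued"): 30,
                   ("was", "printed"): 5, ("published", "in"): 40,
                   ("issued", "in"): 25, ("printed", "in"): 4})
unigrams = Counter({"was": 100, "published": 60, "issued": 40, "printed": 10})

def bigram_logprob(tokens, smooth=1e-6):
    # Add-epsilon smoothing so unseen bigrams get a small nonzero probability.
    return sum(math.log((bigrams[(a, b)] + smooth) /
                        (unigrams[a] + smooth * len(unigrams)))
               for a, b in zip(tokens, tokens[1:]))

context = "the book was ____ in 1990".split()
blank = context.index("____")
candidates = ["published", "issued", "printed"]
ranked = sorted(candidates, reverse=True,
                key=lambda c: bigram_logprob(
                    context[:blank] + [c] + context[blank + 1:]))
print(ranked)  # candidates ordered by fit to the surrounding context
```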
generative model is mentioned in 4 sentences in this paper.
Cheung, Jackie Chi Kit and Penn, Gerald
Abstract
We introduce Distributional Semantic Hidden Markov Models, a novel variant of a hidden Markov model that integrates these two approaches by incorporating contextualized distributional semantic vectors into a generative model as observed emissions.
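A rough sketch of the core idea, assuming isotropic Gaussian emissions over the semantic vectors (the paper's actual emission distribution may differ; all sizes are toy values):

```python
# Sketch of an HMM whose observations are continuous semantic vectors.
# Unit-covariance Gaussian emissions are an illustrative assumption
# standing in for the paper's emission distribution.
import numpy as np

n_states, dim = 3, 50
rng = np.random.default_rng(1)
trans = rng.dirichlet(np.ones(n_states), size=n_states)  # row-stochastic transitions
means = rng.normal(size=(n_states, dim))                 # per-state emission means

def log_emission(state, vec):
    # log P(vec | state) under a unit-covariance Gaussian.
    diff = vec - means[state]
    return -0.5 * (diff @ diff + dim * np.log(2 * np.pi))

def forward_step(log_alpha, vec):
    # One step of the forward recursion given one observed vector.
    out = np.empty(n_states)
    for s in range(n_states):
        out[s] = (np.logaddexp.reduce(log_alpha + np.log(trans[:, s]))
                  + log_emission(s, vec))
    return out
```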
Introduction
Second, the contextualization process allows the semantic vectors to implicitly encode disambiguated word sense and syntactic information, without further adding to the complexity of the generative model .
Related Work
Other related generative models include topic models and structured versions thereof (Blei et al., 2003; Gruber et al., 2007; Wallach, 2008).
generative model is mentioned in 3 sentences in this paper.
Metallinou, Angeliki and Bohus, Dan and Williams, Jason
Generative state tracking
In contrast to generative models, discriminative approaches to dialog state tracking directly predict the correct state hypothesis by leveraging discriminatively trained conditional models of the form b(g) = P(g | f), where f are features extracted from various sources, e.g.
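A minimal sketch of this log-linear form, with hypothetical features and weights (not the paper's trained model):

```python
# Sketch of the discriminative form b(g) = P(g | f): a log-linear (softmax)
# model over competing dialog-state hypotheses g, scored from features f.
import numpy as np

def belief(features, w):
    """features: one row of feature values per hypothesis g.
    Returns the distribution P(g | f) over the hypotheses."""
    scores = features @ w
    scores -= scores.max()        # subtract max for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

# Three competing SLU hypotheses, each described by two features.
f = np.array([[0.9, 0.2], [0.4, 0.7], [0.1, 0.1]])
w = np.array([1.5, 0.8])
print(belief(f, w))  # e.g., the first hypothesis gets the highest belief
```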
Introduction
(2010); Thomson and Young (2010)) use generative models that capture how the SLU results are generated from hidden dialog states.
Introduction
As an illustration, in Figure 1, a generative model might fail to assign the highest score to the correct hypothesis (61C) after the second turn.
generative model is mentioned in 3 sentences in this paper.