Index of papers in Proc. ACL 2010 that mention
  • latent variable
Ó Séaghdha, Diarmuid
Conclusions and future work
The models presented here derive their predictions by modelling predicate-argument plausibility through the intermediary of latent variables.
Conclusions and future work
We also anticipate that latent variable models will prove effective for learning selectional preferences of semantic predicates (e.g., FrameNet roles) where direct estimation from a large corpus is not a viable option.
Related work
In Rooth et al.’s model each observed predicate-argument pair is probabilistically generated from a latent variable, which is itself generated from an underlying distribution on variables.
Related work
The use of latent variables, which correspond to coherent clusters of predicate-argument interactions, allows probabilities to be assigned to predicate-argument pairs that have not previously been observed by the model.
Related work
The work presented in this paper is inspired by Rooth et al.’s latent variable approach, most directly in the model described in Section 3.3.
Results
Latent variable models that use EM for inference can be very sensitive to the number of latent variables chosen.
Three selectional preference models
Each model has at least one vocabulary of Z arbitrarily labelled latent variables.
Three selectional preference models
fzn is the number of observations where the latent variable z has been associated with the argument type n, fzv is the number of observations where z has been associated with the predicate type v, and fzr is the number of observations where z has been associated with the relation r.
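As a rough illustration (not code from the paper), the bookkeeping behind such counts in a sampler might look like this minimal Python sketch; all names are hypothetical:

```python
from collections import defaultdict

# Hypothetical bookkeeping for the counts described above: f_zn, f_zv and
# f_zr tally how often a latent variable z is currently assigned together
# with argument type n, predicate type v, and relation r.
f_zn = defaultdict(int)
f_zv = defaultdict(int)
f_zr = defaultdict(int)

def add_observation(z, n, v, r):
    """Increment the counts when (z, n, v, r) enters the current sample."""
    f_zn[(z, n)] += 1
    f_zv[(z, v)] += 1
    f_zr[(z, r)] += 1

def remove_observation(z, n, v, r):
    """Decrement the counts when z is resampled for this observation."""
    f_zn[(z, n)] -= 1
    f_zv[(z, v)] -= 1
    f_zr[(z, r)] -= 1
```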
Three selectional preference models
In Rooth et al.’s (1999) selectional preference model, a latent variable is responsible for generating both the predicate and argument types of an observation.
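A minimal Python sketch of this generative story, with hypothetical toy parameters rather than anything estimated in the paper; the marginal P(v, n) = sum_z P(z) P(v|z) P(n|z) is what lets unseen pairs receive nonzero probability:

```python
import numpy as np

# Toy Rooth-style model: P(v, n) = sum_z P(z) P(v|z) P(n|z).
# All parameters here are random placeholders; in practice they would be
# estimated from observed predicate-argument pairs, e.g. with EM.
Z, V, N = 3, 5, 7                      # latent classes, predicate types, argument types
rng = np.random.default_rng(0)
p_z = rng.dirichlet(np.ones(Z))                  # P(z)
p_v_given_z = rng.dirichlet(np.ones(V), size=Z)  # row z is P(v | z)
p_n_given_z = rng.dirichlet(np.ones(N), size=Z)  # row z is P(n | z)

def plausibility(v, n):
    """Marginal P(v, n); nonzero even for pairs never seen together."""
    return float(np.sum(p_z * p_v_given_z[:, v] * p_n_given_z[:, n]))

print(plausibility(2, 4))
```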
latent variable is mentioned in 11 sentences in this paper.
Huang, Fei and Yates, Alexander
Introduction
An HMM is a generative probabilistic model that generates each word x_i in the corpus conditioned on a latent variable Y_i.
Introduction
Each Y_i in the model takes on integral values from 1 to K, and each one is generated by the latent variable for the preceding word, Y_{i-1}.
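A minimal Python sketch of this HMM generative story, with hypothetical placeholder parameters (no values here come from the paper):

```python
import numpy as np

# Toy HMM: draw Y_1 from an initial distribution, each later Y_i from
# P(Y_i | Y_{i-1}), and each word x_i from P(x_i | Y_i).  All parameters
# are random placeholders.
K, V, T = 4, 10, 8                     # states, vocabulary size, length
rng = np.random.default_rng(1)
init = rng.dirichlet(np.ones(K))                 # P(Y_1)
trans = rng.dirichlet(np.ones(K), size=K)        # row k is P(Y_i | Y_{i-1}=k)
emit = rng.dirichlet(np.ones(V), size=K)         # row k is P(x_i | Y_i=k)

y = rng.choice(K, p=init)
words = []
for _ in range(T):
    words.append(int(rng.choice(V, p=emit[y])))  # emit x_i given Y_i
    y = rng.choice(K, p=trans[y])                # move to Y_{i+1}
print(words)
```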
Introduction
In response, we introduce latent variable models of word spans, or sequences of words.
latent variable is mentioned in 3 sentences in this paper.
Snyder, Benjamin and Barzilay, Regina and Knight, Kevin
Inference
In order to do so, we need to integrate out all the other latent variables in our model.
Inference
To do so tractably, we use Gibbs sampling to draw each latent variable conditioned on our current sample of the others.
Inference
Even with a large number of sampling rounds, it is difficult to fully explore the latent variable space for complex unsupervised models.
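A generic Python sketch of such a Gibbs sweep, with a placeholder conditional standing in for the model-specific P(z_i | z_{-i}, data); all names and values are hypothetical:

```python
import numpy as np

# Generic Gibbs sweep: resample each latent variable z_i from its
# conditional given the current values of all the others.  `conditional`
# is a hypothetical stand-in for the model-specific P(z_i | z_{-i}, data).
rng = np.random.default_rng(2)
K, N, ROUNDS = 3, 20, 100
z = rng.integers(K, size=N)            # current sample of latent variables

def conditional(i):
    """Placeholder conditional: here just smoothed counts of the others."""
    counts = np.bincount(np.delete(z, i), minlength=K) + 1.0
    return counts / counts.sum()

for _ in range(ROUNDS):
    for i in range(N):
        z[i] = rng.choice(K, p=conditional(i))
```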
latent variable is mentioned in 3 sentences in this paper.