Index of papers in Proc. ACL 2009 that mention
  • latent variable
Sun, Xu and Okazaki, Naoaki and Tsujii, Jun'ichi
Abbreviator with Nonlocal Information
2.1 A Latent Variable Abbreviator
Abbreviator with Nonlocal Information
To implicitly incorporate nonlocal information, we propose discriminative probabilistic latent variable models (DPLVMs) (Morency et al., 2007; Petrov and Klein, 2008) for abbreviating terms.
Abbreviator with Nonlocal Information
The DPLVM is a natural extension of the CRF model (see Figure 2), which is a special case of the DPLVM, with only one latent variable assigned for each label.
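The relationship described above can be sketched numerically: a DPLVM assigns a set of latent variables to each label and marginalizes over them, so a CRF is recovered when each label has exactly one latent variable. The scores below are made up for illustration and are not the paper's trained model.

```python
import math

# Toy DPLVM sketch: each label y owns a set of latent variables h;
# the model scores (y, h) jointly and sums over h to score y.
labels = ["KEEP", "SKIP"]
latents = {"KEEP": ["h1", "h2"], "SKIP": ["h3"]}  # CRF = one latent per label

# Arbitrary joint scores, purely illustrative.
score = {("KEEP", "h1"): 1.0, ("KEEP", "h2"): 0.5, ("SKIP", "h3"): 1.2}

z = sum(math.exp(s) for s in score.values())          # partition function
p_joint = {yh: math.exp(s) / z for yh, s in score.items()}

# P(y|x) = sum over the latent variables assigned to y.
p_label = {y: sum(p_joint[(y, h)] for h in latents[y]) for y in labels}
```

With only one latent variable per label, `p_label` reduces to an ordinary softmax over labels, which is the CRF special case the excerpt mentions.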
Abstract
First, in order to incorporate nonlocal information into abbreviation generation tasks, we present both implicit and explicit solutions: the latent variable model, or alternatively, the label encoding approach with global information.
Introduction
Variables x, y, and h represent the observation, label, and latent variables, respectively.
Introduction
discriminative probabilistic latent variable model (DPLVM) in which nonlocal information is modeled by latent variables.
latent variable is mentioned in 24 sentences in this paper.
Yang, Qiang and Chen, Yuqiang and Xue, Gui-Rong and Dai, Wenyuan and Yu, Yong
Experiments
The entropy of g on a single latent variable z is defined to be H(g, z) ≜ −∑_{c∈C} P(c|z) log₂ P(c|z), where C is the class
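The entropy definition above is the standard Shannon entropy of the class posterior given a latent variable. A minimal sketch with made-up posteriors P(c|z):

```python
import math

def entropy(p_c_given_z):
    """H(g, z) = -sum over classes c of P(c|z) * log2 P(c|z)."""
    return -sum(p * math.log2(p) for p in p_c_given_z if p > 0)

# Hypothetical class posteriors for one latent variable z:
print(entropy([0.5, 0.5]))   # maximally uncertain over 2 classes: 1 bit
print(entropy([1.0, 0.0]))   # class fully determined by z: 0 bits
```

Low entropy means the latent variable is highly predictive of the class, which is what makes it a useful quality measure for clustering.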
Image Clustering with Annotated Auxiliary Data
In order to unify those two separate PLSA models, these two steps are done simultaneously with common latent variables used as a bridge linking them.
Image Clustering with Annotated Auxiliary Data
Through these common latent variables, which are now constrained by both target image data and auxiliary annotation data, a better clustering result is expected for the target data.
Image Clustering with Annotated Auxiliary Data
Let Z be the latent variable set in our aPLSA model.
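The bridging idea in the excerpts above can be sketched as parameter sharing: two PLSA-style factorizations, one for image features and one for annotation words, are tied to the same image-to-latent distribution P(z|d). This is an illustrative sketch of the sharing structure only, not the paper's EM procedure, and all numbers are random placeholders.

```python
import random
random.seed(0)

num_images, num_features, num_words, num_latents = 4, 6, 5, 2

def norm(v):
    s = sum(v)
    return [x / s for x in v]

# Shared latent variables: one distribution over z per target image d.
p_z_given_d = [norm([random.random() for _ in range(num_latents)])
               for _ in range(num_images)]

# Each view has its own emission parameters, both tied to p_z_given_d.
p_f_given_z = [norm([random.random() for _ in range(num_features)])
               for _ in range(num_latents)]   # image-feature view
p_w_given_z = [norm([random.random() for _ in range(num_words)])
               for _ in range(num_latents)]   # annotation-word view

# P(f|d) marginalizes over the *same* latent variables as P(w|d) would,
# so constraints from annotated auxiliary data reach the image clustering.
p_f_given_d = [[sum(p_z_given_d[d][z] * p_f_given_z[z][f]
                    for z in range(num_latents))
                for f in range(num_features)]
               for d in range(num_images)]
```

Fitting both views jointly updates the shared `p_z_given_d`, which is the "bridge" the excerpt refers to.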
latent variable is mentioned in 12 sentences in this paper.
Huang, Fei and Yates, Alexander
Related Work
Sparsity for low-order contexts has recently spurred interest in using latent variables to represent distributions over contexts in language models.
Related Work
Several authors investigate neural network models that learn not just one latent state, but rather a vector of latent variables, to represent each word in a language model (Bengio et al., 2003; Emami et al., 2003; Morin and Bengio, 2005).
Smoothing Natural Language Sequences
2.3 Latent Variable Language Model Representation
Smoothing Natural Language Sequences
Latent variable language models (LVLMs) can be used to produce just such a distributional representation.
Smoothing Natural Language Sequences
We use Hidden Markov Models (HMMs) as the main example in the discussion and as the LVLMs in our experiments, but the smoothing technique can be generalized to other forms of LVLMs, such as factorial HMMs and latent variable maximum entropy models (Ghahramani and Jordan, 1997; Smith and Eisner, 2005).
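The distributional representation mentioned above can be sketched with a tiny HMM: run forward-backward and use the posterior over latent states at each position as the smoothed representation of that token. The parameters below are toy values, not a trained LVLM.

```python
# Sketch: posterior state distributions from a 2-state HMM as token
# representations (illustrative parameters).
K = 2                                        # number of latent states
pi = [0.6, 0.4]                              # initial state distribution
A = [[0.7, 0.3], [0.2, 0.8]]                 # transition probabilities
B = {"the": [0.8, 0.1], "dog": [0.2, 0.9]}   # emission P(word | state)

sent = ["the", "dog"]

# Forward pass: alpha[t][k] = P(w_1..w_t, s_t = k)
alpha = [[pi[k] * B[sent[0]][k] for k in range(K)]]
for t in range(1, len(sent)):
    alpha.append([sum(alpha[t - 1][j] * A[j][k] for j in range(K))
                  * B[sent[t]][k] for k in range(K)])

# Backward pass: beta[t][k] = P(w_{t+1}..w_T | s_t = k)
beta = [[1.0] * K for _ in sent]
for t in range(len(sent) - 2, -1, -1):
    beta[t] = [sum(A[k][j] * B[sent[t + 1]][j] * beta[t + 1][j]
                   for j in range(K)) for k in range(K)]

# Posterior P(s_t = k | sentence): the representation for each token.
posteriors = []
for t in range(len(sent)):
    g = [alpha[t][k] * beta[t][k] for k in range(K)]
    z = sum(g)
    posteriors.append([x / z for x in g])
```

Each token's posterior vector can then be fed to a downstream sequence labeler in place of (or alongside) the sparse word identity.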
latent variable is mentioned in 7 sentences in this paper.
Liu, Yang and Mi, Haitao and Feng, Yang and Liu, Qun
Background
(2008) present a latent variable model that describes the relationship between translation and derivation clearly.
Background
Although originally proposed for supporting large sets of nonindependent and overlapping features, the latent variable model is actually a more general form of the conventional linear model (Och and Ney, 2002).
Background
Accordingly, decoding for the latent variable model can be formalized as
Related Work
They show that max-translation decoding outperforms max-derivation decoding for the latent variable model.
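The contrast between the two decoding strategies above is easy to illustrate: max-derivation picks the translation of the single best derivation, while max-translation marginalizes the latent derivation first and can prefer a translation whose probability mass is spread across many derivations. The probabilities below are invented for illustration.

```python
from collections import defaultdict

# Toy derivation probabilities: several derivations yield translation e2.
derivations = [
    ("e1", 0.30),   # one high-probability derivation for e1
    ("e2", 0.20),   # e2's mass is split across three derivations
    ("e2", 0.20),
    ("e2", 0.15),
]

# Max-derivation decoding: translation of the single best derivation.
max_derivation = max(derivations, key=lambda d: d[1])[0]

# Max-translation decoding: sum over latent derivations, then argmax.
totals = defaultdict(float)
for e, p in derivations:
    totals[e] += p
max_translation = max(totals, key=totals.get)

print(max_derivation)   # e1: 0.30 beats any single e2 derivation
print(max_translation)  # e2: its derivations sum to 0.55
```

This is exactly the situation where the two decoding rules disagree, which is why marginalizing the latent derivation can help.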
latent variable is mentioned in 4 sentences in this paper.