Abbreviator with Nonlocal Information | 2.1 A Latent Variable Abbreviator
Abbreviator with Nonlocal Information | To implicitly incorporate nonlocal information, we propose discriminative probabilistic latent variable models (DPLVMs) (Morency et al., 2007; Petrov and Klein, 2008) for abbreviating terms. |
Abbreviator with Nonlocal Information | The DPLVM is a natural extension of the CRF model (see Figure 2); the CRF is a special case of the DPLVM in which only one latent variable is assigned to each label.
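Abbreviator with Nonlocal Information | As a sketch of this formulation (our notation, following the cited DPLVM papers rather than quoting them), each label $y_j$ is associated with its own set of latent values $H_{y_j}$, and the model marginalizes over latent assignments: $P(\mathbf{y} \mid \mathbf{x}) = \sum_{\mathbf{h} \in H_{y_1} \times \cdots \times H_{y_n}} P(\mathbf{h} \mid \mathbf{x})$, where $P(\mathbf{h} \mid \mathbf{x})$ is a log-linear (CRF-style) distribution over latent sequences; restricting every $H_{y_j}$ to a single latent value collapses the sum and recovers the ordinary CRF.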
Abstract | First, in order to incorporate nonlocal information into abbreviation generation tasks, we present both an implicit solution, the latent variable model, and an explicit alternative, the label encoding approach with global information.
Introduction | Variables x, y, and h represent the observation, label, and latent variables, respectively.
Introduction | We propose a discriminative probabilistic latent variable model (DPLVM) in which nonlocal information is modeled by latent variables.
Experiments | The entropy of $g$ on a single latent variable $z$ is defined to be $H(g, z) \equiv -\sum_{c \in C} P(c \mid z) \log_2 P(c \mid z)$, where $C$ is the set of classes.
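Experiments | As a minimal worked example of this quantity (a Python sketch with made-up probabilities, not the paper's code), the entropy is zero when a latent variable is associated with a single class and maximal when the class distribution is uniform:
```python
import math

def latent_variable_entropy(p_class_given_z):
    """H(g, z) = -sum over c of P(c|z) * log2 P(c|z) for one latent variable z."""
    return -sum(p * math.log2(p) for p in p_class_given_z if p > 0)

print(latent_variable_entropy([1.0]))        # 0.0 bits: z predicts a single class
print(latent_variable_entropy([0.9, 0.1]))   # ~0.47 bits
print(latent_variable_entropy([0.5, 0.5]))   # 1.0 bit: maximally ambiguous
```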
Image Clustering with Annotated Auxiliary Data | In order to unify these two separate PLSA models, the two steps are performed simultaneously, with common latent variables serving as a bridge between them.
Image Clustering with Annotated Auxiliary Data | Through these common latent variables, which are now constrained by both target image data and auxiliary annotation data, a better clustering result is expected for the target data.
Image Clustering with Annotated Auxiliary Data | Let Z be the latent variable set in our aPLSA model.
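Image Clustering with Annotated Auxiliary Data | Schematically (in our own notation, which need not match the paper's exactly), the coupling can be written as two PLSA decompositions that share the latent variables $z \in Z$: $P(f \mid x) = \sum_{z \in Z} P(f \mid z)\, P(z \mid x)$ over target images $x$ and their visual features $f$, and $P(f \mid w) = \sum_{z \in Z} P(f \mid z)\, P(z \mid w)$ over annotation words $w$ from the auxiliary data; since both decompositions reuse the same $P(f \mid z)$, the latent variables are estimated under constraints from both data sources.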
Related Work | Sparsity for low-order contexts has recently spurred interest in using latent variables to represent distributions over contexts in language models. |
Related Work | Several authors investigate neural network models that learn not just one latent state, but rather a vector of latent variables, to represent each word in a language model (Bengio et al., 2003; Emami et al., 2003; Morin and Bengio, 2005).
Smoothing Natural Language Sequences | 2.3 Latent Variable Language Model Representation |
Smoothing Natural Language Sequences | Latent variable language models (LVLMs) can be used to produce just such a distributional representation. |
Smoothing Natural Language Sequences | We use Hidden Markov Models (HMMs) as the main example in the discussion and as the LVLMs in our experiments, but the smoothing technique can be generalized to other forms of LVLMs, such as factorial HMMs and latent variable maximum entropy models (Ghahramani and Jordan, 1997; Smith and Eisner, 2005). |
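Smoothing Natural Language Sequences | As a concrete sketch of the HMM case (a simplified implementation under our own assumptions, not the authors' code), each token's distributional representation can be taken to be its posterior distribution over hidden states, computed with the forward-backward algorithm; these rows can then be appended to the local features of a supervised sequence labeler:
```python
import numpy as np

def state_posteriors(pi, A, B, obs):
    """Posterior P(h_t = k | x) for a discrete HMM; row t is the
    distributional representation used to smooth token t.

    pi:  (K,)   initial state distribution
    A:   (K, K) transitions, A[i, j] = P(h_{t+1} = j | h_t = i)
    B:   (K, V) emissions,   B[k, v] = P(x_t = v | h_t = k)
    obs: sequence of observation (word) indices
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()                   # rescale to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()                 # per-step rescaling cancels in the final normalization
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```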
Background | Blunsom et al. (2008) present a latent variable model that clearly describes the relationship between translation and derivation.
Background | Although originally proposed to support large sets of non-independent and overlapping features, the latent variable model is actually a more general form of the conventional linear model (Och and Ney, 2002).
Background | Accordingly, decoding for the latent variable model can be formalized as a search over translations that marginalizes out the latent derivations.
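Background | In the usual notation for such models (ours, not necessarily the paper's), with $f$ the source sentence, $e$ a candidate translation, and $d$ a latent derivation of $e$, max-derivation decoding searches for $\hat{e} = \arg\max_{e} \max_{d} P(d, e \mid f)$, whereas max-translation decoding searches for $\hat{e} = \arg\max_{e} \sum_{d} P(d, e \mid f)$, summing out the latent derivations instead of committing to a single one.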
Related Work | They show that max-translation decoding outperforms max-derivation decoding for the latent variable model. |