Index of papers in Proc. ACL 2012 that mention
  • latent variables
Wang, William Yang and Mayfield, Elijah and Naidu, Suresh and Dittmar, Jeremiah
Abstract
We propose a latent variable model to enhance historical analysis of large corpora.
Introduction
Latent variable models, such as latent Dirichlet allocation (LDA) (Blei et al., 2003) and probabilistic latent semantic analysis (PLSA) (Hofmann, 1999), have been used in the past to facilitate social science research.
Introduction
To do this we augment SAGE with two sparse latent variables that model the region and time of a document, as well as a third sparse latent variable that captures the interactions among the region, time, and topic latent variables.
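A minimal sketch of how such sparse deviations can compose into a word distribution, assuming a SAGE-style log-additive parameterization (the function and variable names below are illustrative, not the paper's notation):

import numpy as np

def sme_word_dist(m, eta_time, eta_region, eta_interact):
    # Background log-frequencies m plus sparse deviations for time,
    # region, and their interaction; softmax renormalizes the sum
    # into a distribution over the vocabulary.
    logits = m + eta_time + eta_region + eta_interact
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()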
Related Work
For example, SVM does not have latent variables to model the subtle differences and interactions of features from different domains (e.g.
Related Work
(2010) use a latent variable model to predict geolocation information of Twitter users, and investigate geographic variations of language use.
The Sparse Mixed-Effects Model
It also incorporates latent variables τ to model the variance for each sparse deviation η.
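One plausible reading of this variance structure, written as math (an assumption on our part; the excerpt only shows that each τ governs the variance of a sparse deviation η, and SAGE-style models typically pair a Normal prior on the deviation with an Exponential prior on its variance to induce sparsity):

\eta_{k,w} \sim \mathcal{N}(0, \tau_{k,w}), \qquad \tau_{k,w} \sim \mathrm{Exponential}(\lambda)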
The Sparse Mixed-Effects Model
The three major sparse deviation latent variables are η^(T), η^(R), and η^(Q).
The Sparse Mixed-Effects Model
All of the three latent variables are condi-
"latent variables" is mentioned in 17 sentences in this paper.
Guo, Weiwei and Diab, Mona
Abstract
In this paper, we show that by carefully handling words that are not in the sentences (missing words), we can train a reliable latent variable model on sentences.
Abstract
Experiments on the new task and previous data sets show significant improvement of our model over baselines and other traditional latent variable models.
Experiments and Results
All the latent variable models (LSA, LDA, WTMF) are built on the same corpus: WN+Wik+Brown (393,666 sentences and 4,262,026 words).
Experiments and Results
In these latent variable models, there are several essential parameters: the weight of missing words w_m, and the dimension K. Figures 2 and 3 analyze the impact of these parameters on ATOP_test.
Introduction
Latent variable models, such as Latent Semantic Analysis [LSA] (Landauer et al., 1998), Probabilistic Latent Semantic Analysis [PLSA] (Hofmann, 1999), Latent Dirichlet Allocation [LDA] (Blei et al., 2003) can solve the two issues naturally by modeling the semantics of words and sentences simultaneously in the low-dimensional latent space.
Introduction
After analyzing the way traditional latent variable models (LSA, PLSA/LDA) handle missing words, we decide to model sentences using a weighted matrix factorization approach (Srebro and Jaakkola, 2003), which allows us to treat observed words and missing words differently.
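A minimal weighted matrix factorization sketch in the spirit of this approach, assuming alternating least squares with a small weight on missing cells (the names, defaults, and the ALS choice are ours; the paper's optimization details are not shown in these excerpts):

import numpy as np

def weighted_mf(X, W, K=100, lam=0.1, iters=10, seed=0):
    # Factor X ~ P @ Q.T under per-cell weights W: observed words carry
    # full weight, missing words a small weight w_m rather than zero.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    P = rng.normal(scale=0.1, size=(n, K))
    Q = rng.normal(scale=0.1, size=(m, K))
    reg = lam * np.eye(K)
    for _ in range(iters):
        for i in range(n):   # sentence (row) factors
            A = Q.T @ (W[i][:, None] * Q) + reg
            P[i] = np.linalg.solve(A, Q.T @ (W[i] * X[i]))
        for j in range(m):   # word (column) factors
            A = P.T @ (W[:, j][:, None] * P) + reg
            Q[j] = np.linalg.solve(A, P.T @ (W[:, j] * X[:, j]))
    return P, Q

# Hypothetical weighting: observed cells get weight 1, missing cells w_m = 0.01:
#   W = np.where(X > 0, 1.0, 0.01)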
Limitations of Topic Models and LSA for Modeling Sentences
Usually latent variable models aim to find a latent semantic profile for a sentence that is most relevant to the observed words.
"latent variables" is mentioned in 7 sentences in this paper.
Shindo, Hiroyuki and Miyao, Yusuke and Fujino, Akinori and Nagata, Masaaki
Experiment
From this viewpoint, TSG utilizes surrounding symbols (NNP of NP → NNP in the above example) as latent variables with which to capture context information.
Experiment
as latent variables and the search space is larger than that of a TSG when the symbol refinement model allows for more than two subcategories for each symbol.
Experiment
Our experimental results confirm that jointly modeling both latent variables using our SR-TSG assists accurate parsing.
Inference
The inference of the SR-TSG derivations corresponds to inferring two kinds of latent variables: latent symbol subcategories and latent substitution
Inference
This stepwise learning is simple and efficient in practice, but we believe that the joint learning of both latent variables is possible, and we will deal with this in future work.
Inference
This sampler simultaneously updates blocks of latent variables associated with a sentence, thus it can find MAP solutions efficiently.
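The SR-TSG sampler itself is not reproduced in these excerpts; as a hedged illustration of the blocked idea, the sketch below draws an entire latent state sequence for one sentence in a single joint move, using forward-filtering backward-sampling on a simple HMM:

import numpy as np

def ffbs_sample(pi, A, B, obs, rng):
    # Forward filter: alpha[t, s] is proportional to p(state_t = s | obs[:t+1]),
    # for initial distribution pi, transitions A, and emissions B.
    T, S = len(obs), len(pi)
    alpha = np.empty((T, S))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    # Backward sample: draw the whole block jointly, last state first.
    states = np.empty(T, dtype=int)
    states[-1] = rng.choice(S, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * A[:, states[t + 1]]
        states[t] = rng.choice(S, p=w / w.sum())
    return states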
"latent variables" is mentioned in 7 sentences in this paper.
Nguyen, Viet-An and Boyd-Graber, Jordan and Resnik, Philip
Inference
To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010).
Inference
The state space is the latent variables for topic indices assigned to all tokens, z = {z_{c,t,n}}, and topic shifts assigned to turns, l = {l_{c,t}}.
Inference
We marginalize over all other latent variables.
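As a hedged sketch of what one such Gibbs step can look like for the topic indices z (a generic LDA-style collapsed update, not the paper's exact conditional, which also involves the topic-shift variables l):

import numpy as np

def resample_token(d, w, z_old, ndk, nkw, nk, alpha, beta, rng):
    # Remove the token's current assignment from the count tables.
    ndk[d, z_old] -= 1; nkw[z_old, w] -= 1; nk[z_old] -= 1
    # Conditional over topics: (doc-topic count + alpha) *
    # (topic-word count + beta) / (topic total + V * beta).
    V = nkw.shape[1]
    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
    z_new = rng.choice(len(p), p=p / p.sum())
    # Add the new assignment back.
    ndk[d, z_new] += 1; nkw[z_new, w] += 1; nk[z_new] += 1
    return z_new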
Modeling Multiparty Discussions
Instead, we endow each turn with a binary latent variable l_{c,t}, called the topic shift.
Modeling Multiparty Discussions
This latent variable signifies whether the speaker changed the topic of the conversation.
Related and Future Work
as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010).
"latent variables" is mentioned in 6 sentences in this paper.
Sun, Xu and Wang, Houfeng and Li, Wenjie
Introduction
While most of the state-of-the-art CWS systems used semi-Markov conditional random fields or latent variable conditional random fields, we simply use a single first-order conditional random field (CRF) for the joint modeling.
Introduction
The semi-Markov CRFs and latent variable CRFs relax the Markov assumption of CRFs to express more complicated dependencies, and therefore to achieve higher disambiguation power.
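To make the first-order Markov assumption concrete, here is a hedged sketch of decoding over log-potentials (a generic Viterbi routine, not the paper's CWS system; semi-Markov and latent-variable CRFs enlarge this search space to gain the expressiveness mentioned above):

import numpy as np

def viterbi(emissions, transitions):
    # emissions[t, y] are per-position label scores; transitions[y_prev, y]
    # are pairwise scores (all in log space), so each label depends only
    # on its immediate predecessor.
    T, Y = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, Y), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]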
Related Work
To achieve high accuracy, most of the state-of-the-art systems are heavy probabilistic systems using semi-Markov assumptions or latent variables (Andrew, 2006; Sun et al., 2009b).
Related Work
For example, one of the state-of-the-art CWS systems is the latent variable conditional random field (Sun et al., 2008; Sun and Tsujii, 2009) system presented in Sun et al.
Related Work
Those semi-Markov perceptron systems are moderately faster than the heavy probabilistic systems using semi-Markov conditional random fields or latent variable conditional random fields.
"latent variables" is mentioned in 6 sentences in this paper.
Lee, Chia-ying and Glass, James
Introduction
In contrast to the previous methods, we approach the problem by modeling the three sub-problems as well as the unknown set of sub-word units as latent variables in one nonparametric Bayesian model.
Model
In the next section, we show how to infer the value of each of the latent variables in Fig.
Problem Formulation
We model the three subtasks as latent variables in our approach.
Problem Formulation
In this section, we describe the observed data, latent variables, and auxiliary variables
Related Work
For the domain our problem is applied to, our model has to include more latent variables and is more complex.
"latent variables" is mentioned in 5 sentences in this paper.
Druck, Gregory and Pang, Bo
Models
To identify refinements without labeled data, we propose a generative model of reviews (or more generally documents) with latent variables .
Models
Finally, although we motivated including the review-level latent variable y as a way to improve segment-level prediction of z, note that predictions of y are useful in and of themselves.
Models
over latent variables using the sum-product algorithm (Koller and Friedman, 2009).
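A hedged sketch of sum-product over a chain of latent variables (generic forward-backward over potentials; the paper's actual model structure, with the review-level variable y coupled to the segment-level variables, is not reproduced here):

import numpy as np

def chain_marginals(unary, pairwise):
    # unary[t, s] and pairwise[s, s'] are nonnegative potentials;
    # returns per-node marginals over latent states via message passing.
    T, S = unary.shape
    fwd = np.empty((T, S)); bwd = np.empty((T, S))
    fwd[0] = unary[0]
    for t in range(1, T):
        fwd[t] = unary[t] * (fwd[t - 1] @ pairwise)
        fwd[t] /= fwd[t].sum()           # rescale for numerical stability
    bwd[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        bwd[t] = pairwise @ (unary[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    marg = fwd * bwd
    return marg / marg.sum(axis=1, keepdims=True)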
"latent variables" is mentioned in 3 sentences in this paper.