Index of papers in Proc. ACL 2014 that mention
  • latent variables
Parikh, Ankur P. and Cohen, Shay B. and Xing, Eric P.
Abstract
We associate each sentence with an undirected latent tree graphical model, which is a tree consisting of both observed variables (corresponding to the words in the sentence) and an additional set of latent variables that are unobserved in the data.
Abstract
However, due to the presence of latent variables, structure learning of latent trees is substantially more complicated than in observed models.
Abstract
The latent variables can incorporate various linguistic properties, such as head information, valence of dependency being generated, and so on.
latent variables is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Abend, Omri and Cohen, Shay B. and Steedman, Mark
Background and Related Work
Our work proposes a uniform treatment of MWPs of varying degrees of compositionality, and avoids defining MWPs explicitly by modelling their LCs as latent variables.
Introduction
We present a novel approach to the task that models the selection and relative weighting of the predicate’s LCs using latent variables.
Our Proposal: A Latent LC Approach
We address the task with a latent variable log-linear model, representing the LCs of the predicates.
Our Proposal: A Latent LC Approach
We choose this model for its generality, conceptual simplicity, and because it allows us to easily incorporate various feature sets and sets of latent variables.
Our Proposal: A Latent LC Approach
The introduction of latent variables into the log-linear model leads to a non-convex objective function.
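The non-convexity arises because the marginal log-likelihood of a latent-variable log-linear model is a difference of two log-sum-exp (convex) terms. A minimal numpy sketch, with illustrative shapes not taken from the paper (`feats` maps each label to one feature vector per latent value):

```python
import numpy as np

def logsumexp(a):
    # Numerically stable log of a sum of exponentials
    a = np.asarray(a, float)
    m = a.max()
    return m + np.log(np.sum(np.exp(a - m)))

def marginal_log_likelihood(theta, feats, gold_y):
    """feats: dict mapping label y -> list of feature vectors f(x, y, h),
    one per latent value h. The latent variable h is summed out."""
    scores = {y: [theta @ f for f in hs] for y, hs in feats.items()}
    all_scores = [s for hs in scores.values() for s in hs]
    # log p(gold_y | x) = logsumexp over h of the score, minus log Z over (y, h)
    return logsumexp(scores[gold_y]) - logsumexp(all_scores)
```

Each log-sum-exp term is convex in theta, but their difference generally is not, which is why training such models only finds local optima.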
latent variables is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Nguyen, Thang and Hu, Yuening and Boyd-Graber, Jordan
Conclusion
These regularizations could improve spectral algorithms for latent variable models, improving the performance for other NLP tasks such as latent variable PCFGs (Cohen et al., 2013) and HMMs (Anandkumar et al., 2012), combining the flexibility and robustness offered by priors with the speed and accuracy of new, scalable algorithms.
Introduction
Theoretically, their latent variable formulation has served as a foundation for more robust models of other linguistic phenomena (Brody and Lapata, 2009).
Introduction
Modern topic models are formulated as a latent variable model.
Introduction
Typical solutions use MCMC (Griffiths and Steyvers, 2004) or variational EM (Blei et al., 2003), which can be viewed as local optimization: searching for the latent variables that maximize the data likelihood.
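The "local optimization" view can be illustrated with EM on a toy latent-variable model; this is a generic sketch of EM for a two-component 1-D Gaussian mixture (not the topic model from the paper), where the latent variable is each point's component assignment:

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    # Two-component 1-D Gaussian mixture with unit variance.
    # Latent z_i = which component generated x_i.
    mu = np.array([x.min(), x.max()])  # initialization matters: EM is local search
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibilities over the latent assignments
        log_r = np.log(pi) - 0.5 * (x[:, None] - mu) ** 2
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return pi, mu
```

Each iteration increases the data likelihood, but only toward a local optimum determined by the initialization, which is the limitation the spectral methods discussed here aim to avoid.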
latent variables is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Hu, Yuening and Zhai, Ke and Eidelman, Vladimir and Boyd-Graber, Jordan
Inference
Inference of probabilistic models discovers the posterior distribution over latent variables.
Inference
For a collection of D documents, each of which contains N_d words, the latent variables of ptLDA are: transition distributions π_{k,i} for every topic k and internal node i in the prior tree structure; multinomial distributions over topics θ_d for every document d; and topic assignments z_dn and paths y_dn for the n-th word w_dn in document d.
Inference
We use approximate posterior inference to discover the latent variables that best explain our data.
latent variables is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Pershina, Maria and Min, Bonan and Xu, Wei and Grishman, Ralph
Guided DS
We introduce a set of latent variables h_i which model human ground truth for each mention in the i-th bag and take precedence over the current model assignment z_i.
Guided DS
• z_ij ∈ R ∪ NR: a latent variable that denotes the relation of the j-th mention in the i-th bag
Guided DS
• h_ij ∈ R ∪ NR: a latent variable that denotes the refined relation of the mention x_ij
Introduction
(2012), we generalize the labeled data through feature selection and model this additional information directly in the latent variable approaches.
The Challenge
Instead, we propose to perform feature selection to generalize human-labeled data into training guidelines, and to integrate them into a latent variable model.
latent variables is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Bamman, David and Underwood, Ted and Smith, Noah A.
Experiments
As a baseline, we also evaluate all hypotheses on a model with no latent variables whatsoever, which instead measures similarity as the average JS divergence between the empirical word distributions over each role type.
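The Jensen–Shannon divergence used by this baseline is straightforward to compute from two discrete word distributions; a minimal sketch (base-2 logs, so the value lies in [0, 1]):

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence between two discrete distributions,
    # in bits (base-2 logarithm): bounded between 0 and 1.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        # KL(a || b), skipping zero-probability entries of a
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL divergence, JS is symmetric and always finite, which makes it a convenient similarity measure between empirical word distributions.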
Experiments
Table 1 presents the results of this comparison; for all models with latent variables, we report the average of 5 sampling runs with different random initializations.
Model
Observed variables are shaded, latent variables are clear, and collapsed variables are dotted.
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Berant, Jonathan and Liang, Percy
Model overview
Many existing paraphrase models introduce latent variables to describe the derivation of c from x, e.g., with transformations (Heilman and Smith, 2010; Stern and Dagan, 2011) or alignments (Haghighi et al., 2005; Das and Smith, 2009; Chang et al., 2010).
Model overview
However, we opt for a simpler paraphrase model without latent variables in the interest of efficiency.
Paraphrasing
The NLP paraphrase literature is vast and ranges from simple methods employing surface features (Wan et al., 2006), through vector space models (Socher et al., 2011), to latent variable models (Das and Smith, 2009; Wang and Manning, 2010; Stern and Dagan, 2011).
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chaturvedi, Snigdha and Goldwasser, Dan and Daumé III, Hal
Intervention Prediction Models
p_i, r, and φ(t) are observed and h_i are the latent variables.
Intervention Prediction Models
In the first step, it determines the latent variable assignments for positive examples.
Intervention Prediction Models
Once this process converges for negative examples, the algorithm reassigns values to the latent variables for positive examples, and proceeds to the second step.
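The core of the latent-assignment step described above is choosing, for each example, the latent value that scores highest under the current weights. A generic sketch of that step (function name and feature layout are illustrative, not from the paper):

```python
import numpy as np

def assign_latent(w, feats_per_h):
    # For one positive example: pick the latent value h whose feature
    # vector scores highest under the current weight vector w.
    scores = [w @ f for f in feats_per_h]
    return int(np.argmax(scores))
```

Alternating between this assignment step and retraining the weights is the standard recipe for latent-variable structured learning; each round can only improve the objective given the other component fixed.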
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Cohen, Shay B. and Collins, Michael
Introduction
This matrix form has clear relevance to latent variable models.
Related Work
Recently a number of researchers have developed provably correct algorithms for parameter estimation in latent variable models such as hidden Markov models, topic models, directed graphical models with latent variables , and so on (Hsu et al., 2009; Bailly et al., 2010; Siddiqi et al., 2010; Parikh et al., 2011; Balle et al., 2011; Arora et al., 2013; Dhillon et al., 2012; Anandkumar et al., 2012; Arora et al., 2012; Arora et al., 2013).
The Learning Algorithm for L-PCFGS
The training set does not include values for the latent variables; this is the main challenge in learning.
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hall, David and Berg-Kirkpatrick, Taylor and Klein, Dan
Minimum Bayes risk parsing
MBR parsing has proven especially useful in latent variable grammars.
Minimum Bayes risk parsing
Petrov and Klein (2007) showed that MBR trees substantially improved performance over Viterbi parses for latent variable grammars, earning up to 1.5 F1.
Sparsity and CPUs
For instance, in a latent variable parser, the coarse grammar would have symbols like NP, VP, etc., and the fine pass would have refined symbols NP_0, NP_1, VP_4, and so on.
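The coarse-to-fine idea can be sketched as a projection from refined symbols back to their coarse parents, plus a pruning rule that keeps a fine symbol only when its coarse version scored well; names and the "NP_0"-style encoding below are illustrative assumptions, not the parser's actual data structures:

```python
def coarse_projection(fine_symbol):
    # Map a refined symbol like "NP_3" back to its coarse symbol "NP"
    return fine_symbol.split("_")[0]

def prune(fine_symbols, coarse_scores, threshold=-7.0):
    # Keep a fine symbol only if its coarse projection scored above the
    # threshold in the coarse pass; unseen coarse symbols are pruned.
    return [s for s in fine_symbols
            if coarse_scores.get(coarse_projection(s), float("-inf")) > threshold]
```

Because many refined symbols share one coarse parent, a cheap coarse pass can rule out most of the fine symbols before the expensive fine pass runs.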
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hovy, Dirk
Extending the Model
By adding additional transitions, we can constrain the latent variables further.
Introduction
(2011) proposed an approach that uses co-occurrence patterns to find entity type candidates, and then learns their applicability to relation arguments by using them as latent variables in a first-order HMM.
Model
Thus all common nouns are possible types, and can be used as latent variables in an HMM.
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kushman, Nate and Artzi, Yoav and Zettlemoyer, Luke and Barzilay, Regina
Introduction
In both cases, the available labeled equations (either the seed set, or the full set) are abstracted to provide the model’s equation templates, while the slot filling and alignment decisions are latent variables whose settings are estimated by directly optimizing the marginal data log-likelihood.
Mapping Word Problems to Equations
In this way, the distribution over derivations y is modeled as a latent variable.
Related Work
In our approach, systems of equations are relatively easy to specify, providing a type of template structure, and the alignment of the slots in these templates to the text is modeled primarily with latent variables during learning.
latent variables is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: