Index of papers in Proc. ACL 2012 that mention
  • latent semantics
Guo, Weiwei and Diab, Mona
Abstract
Previous sentence similarity work finds that latent semantics approaches to the problem do not perform well due to insufficient information in single sentences.
Experiments and Results
This is because LDA only uses 10 observed words to infer a 100-dimension vector for a sentence, while WTMF takes advantage of many more missing words to learn more robust latent semantics vectors.
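As an illustrative sketch of this sparsity (assuming the gensim library and a hypothetical toy corpus, not the authors' setup), inferring a 100-dimension topic vector from roughly ten observed words leaves the inference with very little evidence to work from:

```python
# Sketch only: gensim LDA applied to a hypothetical toy corpus.
from gensim import corpora, models

# Stand-in training documents; the quoted paper uses dictionary definitions.
train_docs = [
    "bank financial institution accepts deposits makes loans".split(),
    "river bank sloping land beside a body of water".split(),
    "loan money lent at interest repaid over time".split(),
]

dictionary = corpora.Dictionary(train_docs)
corpus = [dictionary.doc2bow(d) for d in train_docs]

# 100 latent dimensions, as in the quoted comparison.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=100, passes=10)

# A short sentence contributes only ~10 observed tokens, which is very
# little evidence for pinning down a 100-dimension topic vector.
short = dictionary.doc2bow("a financial institution that accepts deposits".split())
topic_vector = lda.get_document_topics(short, minimum_probability=0.0)
print(topic_vector[:5])
```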
Introduction
Latent variable models, such as Latent Semantic Analysis [LSA] (Landauer et al., 1998), Probabilistic Latent Semantic Analysis [PLSA] (Hofmann, 1999), and Latent Dirichlet Allocation [LDA] (Blei et al., 2003), can solve the two issues naturally by modeling the semantics of words and sentences simultaneously in the low-dimensional latent space.
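A minimal sketch of the shared low-dimensional space these models provide, here using a plain SVD over a toy term-sentence matrix with scikit-learn (an illustration only; the cited models differ in their probabilistic details):

```python
# Sketch: LSA places sentences and words in the same K-dimensional space.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

sentences = [
    "a financial institution that accepts deposits",
    "the land alongside a river",
    "money lent at interest",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)        # sentences x terms

K = 2                                          # tiny latent dimension for the toy data
svd = TruncatedSVD(n_components=K)
sentence_vectors = svd.fit_transform(X)        # one K-dim vector per sentence
word_vectors = svd.components_.T               # one K-dim vector per vocabulary term

print(sentence_vectors.shape, word_vectors.shape)
```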
Introduction
We believe that the latent semantics approaches applied to date to the SS problem have not yielded positive results due to deficient modeling of the limited contextual setting, where the sentences are typically too short to derive robust latent semantics.
Introduction
Apart from the SS setting, robust modeling of the latent semantics of short sentences/texts is becoming a pressing need due to the pervasive presence of more bursty data sets such as Twitter feeds and SMS where short contexts are an inherent characteristic of the data.
Limitations of Topic Models and LSA for Modeling Sentences
Usually latent variable models aim to find a latent semantic profile for a sentence that is most relevant to the observed words.
Limitations of Topic Models and LSA for Modeling Sentences
By explicitly modeling missing words, we set another criterion for the latent semantics profile: it should not be related to the missing words in the sentence.
Limitations of Topic Models and LSA for Modeling Sentences
It would be desirable if topic models could exploit missing words (a lot more data than observed words) to render more nuanced latent semantics, so that pairs of sentences in the same domain can be differentiated.
The Proposed Approach
Accordingly, P·,i is a K-dimension latent semantics vector profile for word w_i; similarly, Q·,j is the K-dimension vector profile that represents the sentence s_j.
The Proposed Approach
This solution is quite elegant: (1) it explicitly tells the model that, in general, all missing words should not be related to the sentence; (2) meanwhile, latent semantics are mainly generalized based on observed words, and the model is not penalized too much (wm is very small) when it is very confident that the sentence is highly related to a small subset of missing words based on their latent semantics profiles (the definition sentence of bank#n#1 is related to its missing words check and loan).
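A simplified numpy sketch of this weighted factorization idea (assumed toy data and hyperparameter values; not the authors' released implementation): every cell of the word-sentence matrix is reconstructed as the inner product of a word profile and a sentence profile, observed words carry weight 1 and missing words the small weight wm, and the profiles are fit by alternating ridge-regression updates.

```python
# Sketch of WTMF-style weighted matrix factorization (simplified, numpy only).
import numpy as np

rng = np.random.default_rng(0)

V, N, K = 50, 20, 10          # vocabulary size, number of sentences, latent dimension
lam, w_m = 20.0, 0.01         # regularizer and small weight for missing words (assumed)

X = (rng.random((V, N)) < 0.1).astype(float)   # toy word-sentence matrix
W = np.where(X > 0, 1.0, w_m)                  # observed cells weight 1, missing cells w_m

P = rng.normal(scale=0.1, size=(K, V))         # word profiles    P[:, i]
Q = rng.normal(scale=0.1, size=(K, N))         # sentence profiles Q[:, j]

for _ in range(10):                            # alternating ridge-regression updates
    for i in range(V):                         # update each word profile
        Wi = np.diag(W[i, :])
        A = Q @ Wi @ Q.T + lam * np.eye(K)
        P[:, i] = np.linalg.solve(A, Q @ Wi @ X[i, :])
    for j in range(N):                         # update each sentence profile
        Wj = np.diag(W[:, j])
        A = P @ Wj @ P.T + lam * np.eye(K)
        Q[:, j] = np.linalg.solve(A, P @ Wj @ X[:, j])

# Q[:, j] is the latent semantics vector for sentence j.
print(Q.shape)
```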
latent semantics is mentioned in 12 sentences in this paper.
Zweig, Geoffrey and Platt, John C. and Meek, Christopher and Burges, Christopher J.C. and Yessenalina, Ainur and Liu, Qiang
Abstract
We tackle the problem with two approaches: methods that use local lexical information, such as the n-grams of a classical language model; and methods that evaluate global coherence, such as latent semantic analysis.
Introduction
As a first step, we have approached the problem from two points of view: first by exploiting local sentence structure, and secondly by measuring a novel form of global sentence coherence based on latent semantic analysis.
Introduction
a novel method based on latent semantic analysis (LSA).
Related Work
That paper also explores the use of Latent Semantic Analysis to measure the degree of similarity between a potential replacement and its context, but the results are poorer than those of the other methods.
Sentence Completion via Latent Semantic Analysis
Latent Semantic Analysis (LSA) (Deerwester et al., 1990) is a widely used method for representing words and documents in a low dimensional vector space.
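A hedged sketch of how LSA word vectors can score completion candidates against their context (a simplified stand-in built with scikit-learn and toy documents; the paper's exact coherence measure may differ):

```python
# Sketch: score sentence-completion candidates by average LSA similarity to context.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "he deposited the check at the bank",
    "the boat drifted toward the river bank",
    "she repaid the loan with interest",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)
svd = TruncatedSVD(n_components=2)
svd.fit(X)
word_vecs = svd.components_.T                  # one latent vector per vocabulary word
vocab = vectorizer.vocabulary_                 # word -> row index in word_vecs

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score(candidate, context_words):
    """Average similarity of the candidate word to the context words."""
    c = word_vecs[vocab[candidate]]
    sims = [cosine(c, word_vecs[vocab[w]]) for w in context_words if w in vocab]
    return sum(sims) / max(len(sims), 1)

context = "he deposited the check at the".split()
print(score("bank", context), score("boat", context))
```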
latent semantics is mentioned in 5 sentences in this paper.
Peng, Xingyuan and Ke, Dengfeng and Xu, Bo
Abstract
Compared with the Latent Semantic Analysis with Support Vector Regression (LSA-SVR) method (which represents the conventional measures), our FST method shows better performance, especially on the ASR transcription.
Related Work
In the LSA-SVR method, each essay transcription is represented by a latent semantic space vector, which serves as the feature vector in the SVR model.
Related Work
LSA (Deerwester et al., 1990) considers the relations between the dimensions of the conventional vector space model (VSM) (Salton et al., 1975), and it can rank the importance of each dimension in the Latent Semantic Space (LSS).
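A hedged scikit-learn sketch of the LSA-SVR pipeline described here (hypothetical data and hyperparameters; the paper's preprocessing is not reproduced): essay transcriptions are projected into a latent semantic space and the resulting vectors are regressed onto human scores.

```python
# Sketch: LSA features for essay transcriptions, fed into support vector regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (ASR) essay transcriptions and human scores.
essays = [
    "my favorite season is summer because of the long days",
    "i enjoy winter since i can ski with my family",
    "summer is best i swim every day at the beach",
]
scores = [3.5, 4.0, 3.0]

lsa_svr = make_pipeline(
    TfidfVectorizer(),                 # essay -> sparse VSM vector
    TruncatedSVD(n_components=2),      # VSM -> low-dimensional latent semantic space
    SVR(kernel="rbf"),                 # regress the human score from the LSA vector
)
lsa_svr.fit(essays, scores)
print(lsa_svr.predict(["winter skiing with family is my favorite"]))
```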
latent semantics is mentioned in 3 sentences in this paper.