Index of papers in Proc. ACL 2011 that mention
  • vector representation
Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher
Experiments
Given a query word $w$ and another word $w'$ we obtain their vector representations $\phi_w$ and $\phi_{w'}$, and evaluate their cosine similarity as $S(\phi_w, \phi_{w'}) = \frac{\phi_w^\top \phi_{w'}}{\|\phi_w\|\,\|\phi_{w'}\|}$. By assessing the similarity of $w$ with all other words $w'$, we can find the words deemed most similar by the model.
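As a minimal sketch of this query, assuming R is the (beta x |V|) representation matrix so that column R[:, i] holds phi_w for word i (the names R, vocab, and most_similar are illustrative, not from the paper's code):

    import numpy as np

    def cosine_similarity(a, b):
        # S(phi_w, phi_w') = phi_w . phi_w' / (||phi_w|| ||phi_w'||)
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def most_similar(query, R, vocab, top_k=10):
        # Rank every other word w' by its cosine similarity to the query word.
        q = R[:, vocab[query]]
        scores = {w: cosine_similarity(q, R[:, i])
                  for w, i in vocab.items() if w != query}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]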
Introduction
This component of the model uses the vector representation of words to predict the sentiment annotations on contexts in which the words appear.
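The sentence above describes the component only at a high level; one plausible form, sketched here under the assumption of a logistic-regression-style predictor over the word vector (psi and b are hypothetical names), is:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def p_positive(phi_w, psi, b=0.0):
        # Assumed form: P(sentiment = positive | w) = sigma(psi^T phi_w + b),
        # i.e. a logistic regression on the word's vector representation.
        return sigmoid(psi @ phi_w + b)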
Introduction
This causes words expressing similar sentiment to have similar vector representations.
Our Model
The energy function uses a word representation matrix $R \in \mathbb{R}^{\beta \times |V|}$ where each word $w$ (represented as a one-hot vector) in the vocabulary $V$ has a $\beta$-dimensional vector representation $\phi_w = Rw$ corresponding to that word's column in $R$. The random variable $\theta$ is also a $\beta$-dimensional vector, $\theta \in \mathbb{R}^\beta$, which weights each of the $\beta$ dimensions of words' representation vectors.
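To make the notation concrete, a small sketch of these quantities (the sizes beta and |V| and the word index are invented for illustration):

    import numpy as np

    beta, V_size = 50, 5000            # illustrative: beta dimensions, |V| words
    R = np.random.randn(beta, V_size)  # word representation matrix R in R^(beta x |V|)

    def phi(word_index, R):
        # phi_w = R w: multiplying R by a one-hot vector selects a column of R.
        w = np.zeros(R.shape[1])
        w[word_index] = 1.0
        return R @ w                   # identical to R[:, word_index]

    theta = np.random.randn(beta)      # theta in R^beta, one weight per dimension
    weighted = theta * phi(42, R)      # theta weights each of the beta dimensions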
Related work
For each latent topic $T$, the model learns a conditional distribution $p(w|T)$ for the probability that word $w$ occurs in $T$. One can obtain a $k$-dimensional vector representation of words by first training a $k$-topic model and then filling the matrix with the $p(w|T)$ values (normalized to unit length).
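A sketch of this construction, assuming a hypothetical k x |V| array of p(w|T) values taken from an already-trained k-topic model:

    import numpy as np

    def topic_word_vectors(p_w_given_T):
        # Each word's column of p(w|T) values, scaled to unit length,
        # becomes its k-dimensional vector representation.
        M = np.asarray(p_w_given_T, dtype=float)          # shape (k, |V|)
        norms = np.linalg.norm(M, axis=0, keepdims=True)  # per-word column norms
        return M / np.where(norms == 0.0, 1.0, norms)     # unit-length columns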
vector representation is mentioned in 5 sentences in this paper.