Index of papers in Proc. ACL 2012 that mention
  • cosine similarity
Zweig, Geoffrey and Platt, John C. and Meek, Christopher and Burges, Christopher J.C. and Yessenalina, Ainur and Liu, Qiang
Experimental Results 5.1 Data Resources
Of all the methods in isolation, the simple approach of Section 4.1 — to use the total cosine similarity between a potential answer and the other words in the sentence — has performed best.
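A minimal sketch of this scoring rule, assuming word vectors are available as a dict mapping words to numpy arrays; the names cosine, total_similarity, and vectors are illustrative, not from the paper:

    import numpy as np

    def cosine(u, v):
        # Standard cosine similarity between two vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def total_similarity(candidate, context_words, vectors):
        # Sum of cosine similarities between the candidate answer and
        # every other word in the sentence (skipping OOV words).
        c = vectors[candidate]
        return sum(cosine(c, vectors[w]) for w in context_words if w in vectors)

    # The candidate with the highest total similarity would be chosen:
    # best = max(candidates, key=lambda a: total_similarity(a, context, vectors))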
Experimental Results 5.1 Data Resources
For the LSA model, the linear combination has three inputs: the total word similarity, the cosine similarity between the sum of the answer word vectors and the sum of the rest of the sentence's word vectors, and the number of out-of-vocabulary terms in the answer.
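A hedged sketch of how such a three-feature linear combination could look; the function name, the feature-extraction details, and the weights (which would be fit on held-out data) are assumptions, not the paper's implementation:

    import numpy as np

    def lsa_combined_score(answer_words, rest_words, vectors, weights):
        # The three inputs described above: (1) total word similarity,
        # (2) cosine between the summed answer vectors and the summed
        # rest-of-sentence vectors, (3) number of OOV answer terms.
        def cos(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
        known = [w for w in answer_words if w in vectors]
        rest = [w for w in rest_words if w in vectors]
        total_sim = sum(cos(vectors[a], vectors[w]) for a in known for w in rest)
        sum_sim = cos(np.sum([vectors[w] for w in known], axis=0),
                      np.sum([vectors[w] for w in rest], axis=0))
        n_oov = len(answer_words) - len(known)
        w1, w2, w3 = weights  # weights would be learned from data
        return w1 * total_sim + w2 * sum_sim + w3 * n_oov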
Sentence Completion via Latent Semantic Analysis
An important property of SVD is that the rows of US, which represent the words, behave similarly to the original rows of W, in the sense that the cosine similarity between two rows in US approximates the cosine similarity between the corresponding rows in W. Cosine similarity is defined as $\mathrm{sim}(u, w) = \frac{u \cdot w}{\|u\|\,\|w\|}$.
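A small numpy illustration of this property on a toy random matrix; the matrix sizes and truncation rank k are arbitrary, and the approximation is rough on random data:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.random((50, 20))        # toy word-by-document matrix (rows = words)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = 5
    US = U[:, :k] * s[:k]           # rows of U S represent the words

    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Cosine between two word rows in the truncated space approximates
    # the cosine between the corresponding rows of W.
    print(cos(US[0], US[1]), cos(W[0], W[1]))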
Sentence Completion via Latent Semantic Analysis
Let m be the smallest cosine similarity between h and any word in the vocabulary V: $m = \min_{w \in V} \mathrm{sim}(h, w)$.
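That minimum can be computed directly; a sketch assuming vocab_vectors maps each word in V to its vector (names are illustrative):

    import numpy as np

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def min_similarity(h, vocab_vectors):
        # m = min over w in V of sim(h, w): the smallest cosine
        # similarity between h and any word vector in the vocabulary.
        return min(cos(h, v) for v in vocab_vectors.values())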
cosine similarity is mentioned in 5 sentences in this paper.
Qu, Zhonghua and Liu, Yang
Thread Structure Tagging
Cosine similarity with previous sentence.
Thread Structure Tagging
Here we use the cosine similarity between sentences, where each sentence is represented as a vector of words, with term weights calculated using TF-IDF (term frequency times inverse document frequency).
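A minimal sketch of this feature using scikit-learn's TfidfVectorizer; the example sentences are invented, and the paper's exact preprocessing and weighting details may differ:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = ["how do i reset my password",
                 "you can reset the password from account settings"]
    X = TfidfVectorizer().fit_transform(sentences)  # one TF-IDF vector per sentence
    sim = cosine_similarity(X[0], X[1])[0, 0]       # similarity with previous sentence
    print(sim)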
Thread Structure Tagging
* Cosine similarity with previous sentence.
cosine similarity is mentioned in 4 sentences in this paper.
Huang, Eric and Socher, Richard and Manning, Christopher and Ng, Andrew
Experiments
The nearest neighbors of a word are computed by comparing the cosine similarity between the center word and all other words in the dictionary.
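A sketch of this nearest-neighbor lookup, assuming vocab maps words to row indices of an embedding matrix E; the names and the top-k interface are illustrative:

    import numpy as np

    def nearest_neighbors(word, vocab, E, k=5):
        # Rank every word in the dictionary by cosine similarity
        # to the center word and return the k closest.
        v = E[vocab[word]]
        sims = (E @ v) / (np.linalg.norm(E, axis=1) * np.linalg.norm(v))
        order = np.argsort(-sims)                   # descending similarity
        words = {i: w for w, i in vocab.items()}
        return [words[i] for i in order if words[i] != word][:k]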
Experiments
Table 1: Nearest neighbors of words based on cosine similarity.
Experiments
Table 2: Nearest neighbors of word embeddings learned by our model using the multi-prototype approach based on cosine similarity.
cosine similarity is mentioned in 3 sentences in this paper.