Index of papers in Proc. ACL 2008 that mention
  • cosine similarity
Carenini, Giuseppe and Ng, Raymond T. and Zhou, Xiaodong
Abstract
We adopt three cohesion measures: clue words, semantic similarity and cosine similarity as the weight of the edges.
Conclusions
We adopt three cohesion metrics, clue words, semantic similarity and cosine similarity, to measure the weight of the edges.
Empirical Evaluation
In Section 3.3, we developed three ways to compute the weight of an edge in the sentence quotation graph, i.e., clue words, semantic similarity based on WordNet and cosine similarity.
Empirical Evaluation
The widely used cosine similarity does not perform well.
Empirical Evaluation
The above experiments show that the widely used cosine similarity and the more sophisticated semantic similarity in WordNet are less accurate than the basic CWS in the summarization framework.
Extracting Conversations from Multiple Emails
and (3) cosine similarity that is based on the word TFIDF vector.
Extracting Conversations from Multiple Emails
3.3.3 Cosine Similarity
Extracting Conversations from Multiple Emails
Cosine similarity is a popular metric to compute the similarity of two text units.
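A minimal sketch of this metric (not code from the paper), assuming whitespace tokenization; the paper weights terms by TF-IDF, which the idf argument stands in for here:

    import math
    from collections import Counter

    def cosine_similarity(tokens_a, tokens_b, idf):
        """Cosine similarity of two text units over TF-IDF weighted vectors."""
        va = {w: c * idf.get(w, 1.0) for w, c in Counter(tokens_a).items()}
        vb = {w: c * idf.get(w, 1.0) for w, c in Counter(tokens_b).items()}
        dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
        norm = math.sqrt(sum(x * x for x in va.values())) * \
               math.sqrt(sum(x * x for x in vb.values()))
        return dot / norm if norm else 0.0

    # e.g. cosine_similarity("a b c".split(), "b c d".split(), idf={})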
Introduction
(Carenini et al., 2007), semantic similarity and cosine similarity.
Summarization Based on the Sentence Quotation Graph
In the rest of this paper, let CWS denote the Generalized ClueWordSummarizer when the edge weight is based on clue words, and let CWS-Cosine and CWS-Semantic denote the summarizer when the edge weight is cosine similarity and semantic similarity respectively.
cosine similarity is mentioned in 11 sentences in this paper.
Ding, Shilin and Cong, Gao and Lin, Chin-Yew and Zhu, Xiaoyan
Context and Answer Detection
The word similarity is based on cosine similarity of TF/IDF weighted vectors.
Context and Answer Detection
- Cosine similarity with the question
Context and Answer Detection
- Cosine similarity between contiguous sentences
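A hedged sketch of how these two features might be computed; the TF-IDF vectorization and the example thread below are assumptions, not the paper's exact setup:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = ["the build fails on linux", "try cleaning the cache", "that fixed it"]
    question = "why does the build fail"            # hypothetical forum thread
    X = TfidfVectorizer().fit_transform(sentences + [question])
    S, q = X[:-1], X[-1]
    sim_to_question = cosine_similarity(S, q).ravel()             # one feature per sentence
    sim_contiguous = cosine_similarity(S[:-1], S[1:]).diagonal()  # sentence i vs. i+1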
Related Work
(2006a) used cosine similarity to match students’ query with reply posts for discussion-bot.
cosine similarity is mentioned in 6 sentences in this paper.
Saha, Sujan Kumar and Mitra, Pabitra and Sarkar, Sudeshna
Introduction
For clustering we use a number of word similarities like cosine similarity among words and co-occurrence, along with the k-means clustering algorithm.
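One common way to realize cosine-based k-means (a sketch under assumptions, not the authors' implementation) is to L2-normalize the word vectors first, after which standard Euclidean k-means behaves like clustering by cosine similarity:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import normalize

    rng = np.random.default_rng(0)
    word_vectors = rng.random((500, 100))   # placeholder co-occurrence vectors
    unit = normalize(word_vectors)          # on the unit sphere, ||u - v||^2 = 2 - 2*cos(u, v)
    labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(unit)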
Word Clustering
3.1 Cosine Similarity based on Sentence Level Co-occurrence
Word Clustering
Then we measure cosine similarity between the word vectors.
Word Clustering
The cosine similarity between two word vectors A and B with dimension d is measured as:
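The formula itself did not survive extraction; the standard definition, writing the two d-dimensional word vectors as \vec{A} and \vec{B}, is:

    \cos(\vec{A}, \vec{B}) = \frac{\sum_{i=1}^{d} A_i B_i}{\sqrt{\sum_{i=1}^{d} A_i^{2}}\,\sqrt{\sum_{i=1}^{d} B_i^{2}}}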
cosine similarity is mentioned in 6 sentences in this paper.
Bhagat, Rahul and Ravichandran, Deepak
Acquiring Paraphrases
We use cosine similarity, which
Acquiring Paraphrases
As described in Section 3.2, we find paraphrases of a phrase p,- by finding its nearest neighbors based on cosine similarity between the feature vector of pi and other phrases.
Acquiring Paraphrases
If n is the number of vectors and d is the dimensionality of the vector space, finding cosine similarity between each pair of vectors has time complexity O(n²d).
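The quadratic cost is easy to see in a brute-force sketch (an illustration of the complexity, not the paper's algorithm):

    import numpy as np

    def pairwise_cosine(vectors):
        """All-pairs cosine similarities: O(n^2 d) for n vectors of dimension d."""
        norms = np.linalg.norm(vectors, axis=1, keepdims=True)
        unit = vectors / np.clip(norms, 1e-12, None)
        return unit @ unit.T        # entry (i, j) is cos(v_i, v_j)

    # nearest neighbors of phrase i (for some matrix V and cutoff k):
    # np.argsort(-pairwise_cosine(V)[i])[1:k + 1]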
cosine similarity is mentioned in 4 sentences in this paper.
Kulkarni, Anagha and Callan, Jamie
Finding the Homographs in a Lexicon
Cohesiveness Score: Mean of the cosine similarities between each pair of definitions of w.
Finding the Homographs in a Lexicon
Average Number of Null Similarities: The number of definition pairs that have zero cosine similarity score (no word overlap).
Finding the Homographs in a Lexicon
The last feature sorts the pairwise cosine similarity scores in ascending order, prunes the top n% of the scores, and uses the maximum remaining score as the feature value.
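A sketch of these three features over the pairwise scores (the function and parameter names here are hypothetical):

    import numpy as np

    def homograph_features(pair_sims, prune_frac=0.10):
        """pair_sims: cosine similarities for every pair of definitions of w."""
        sims = np.sort(np.asarray(pair_sims, dtype=float))   # ascending order
        cohesiveness = sims.mean()                           # mean pairwise similarity
        num_null = int((sims == 0.0).sum())                  # pairs with no word overlap
        kept = sims[: max(1, int(round(len(sims) * (1 - prune_frac))))]
        return cohesiveness, num_null, kept.max()            # max after pruning top n%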
cosine similarity is mentioned in 4 sentences in this paper.
Nenkova, Ani and Louis, Annie
Features
Cosine similarity between the document vector representations is probably the easiest and most commonly used among the various similarity measures.
Features
The cosine similarity between two (document representation) vectors v1 and v2 is given by $\cos\theta = \frac{v_1 \cdot v_2}{\|v_1\|\,\|v_2\|}$. A value of 0 indicates that the vectors are orthogonal and dissimilar; a value of 1 indicates perfectly similar documents in terms of the words contained in them.
Features
To compute the cosine overlap features, we find the pairwise cosine similarity between each two documents in an input set and compute their average.
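A hedged sketch of this averaging step; the TF-IDF document vectors below are an assumption, since the snippet does not pin down the exact representation:

    from itertools import combinations
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def cosine_overlap(documents):
        """Average pairwise cosine similarity over all documents in an input set."""
        sims = cosine_similarity(TfidfVectorizer().fit_transform(documents))
        pairs = [sims[i, j] for i, j in combinations(range(len(documents)), 2)]
        return sum(pairs) / len(pairs)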
cosine similarity is mentioned in 4 sentences in this paper.