Index of papers in Proc. ACL 2011 that mention
  • cosine similarity
Qazvinian, Vahed and Radev, Dragomir R.
Diversity-based Ranking
Edges connect corresponding nodes whose cosine similarity is above a threshold (0.10, following Erkan and Radev (2004)).
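The thresholded similarity graph described above can be sketched as follows. This is an illustrative implementation, not the authors' code; the whitespace tokenization and raw term counts are simplifying assumptions.

```python
import math
from collections import Counter

def count_terms(text):
    """Bag-of-words term-frequency vector (simple whitespace tokenization)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two term-frequency dicts."""
    dot = sum(c * v[t] for t, c in u.items() if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def build_graph(docs, threshold=0.10):
    """Add an edge between two documents when their cosine similarity
    exceeds the threshold (0.10, following Erkan and Radev, 2004)."""
    vecs = [count_terms(d) for d in docs]
    edges = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if cosine(vecs[i], vecs[j]) > threshold:
                edges.append((i, j))
    return edges
```

Ranking methods such as LexRank then operate on the resulting graph.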
Diversity-based Ranking
Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) uses the pairwise cosine similarity matrix and greedily chooses sentences that are the least similar to those already in the summary.
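The greedy MMR selection described above can be sketched as follows; the trade-off parameter `lam` and the separate `relevance` scores are illustrative assumptions, not values from the paper.

```python
def mmr_select(sentences, sim, relevance, k, lam=0.7):
    """Greedy Maximal Marginal Relevance (Carbonell and Goldstein, 1998):
    at each step, pick the sentence that balances relevance against
    similarity to sentences already in the summary."""
    selected = []
    candidates = list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            # Redundancy: highest cosine similarity to any selected sentence.
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```

With a pairwise cosine similarity matrix `sim`, a near-duplicate of an already-selected sentence is penalized even if it is highly relevant, which is what drives the diversity of the summary.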
Diversity-based Ranking
C-LexRank is a clustering-based model in which the cosine similarities of document pairs are used to build a network of documents.
Prior Work
Once a lexical similarity graph is built, they modify the graph based on cluster information and perform LexRank on the modified cosine similarity graph.
cosine similarity is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Wang, Dong and Liu, Yang
Conclusion and Future Work
Our experiments show that both methods are able to improve the baseline approach, and we find that the cosine similarity between utterances or between an utterance and the whole document is not as useful as in other document summarization tasks.
Opinion Summarization Methods
sim(s, D) is the cosine similarity between DA s and all the utterances in the dialogue from the same speaker, D. It measures the relevancy of s to the entire dialogue from the target speaker.
Opinion Summarization Methods
For the cosine similarity measure, we use TF*IDF (term frequency, inverse document frequency) term weighting.
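TF*IDF-weighted cosine similarity can be sketched as below. This is a standard formulation, not necessarily the authors' exact weighting variant; the tokenization and the `log(n / df)` IDF form are assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF*IDF vectors: term frequency times inverse document frequency.
    A term that appears in every document gets IDF 0 and drops out."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(t for toks in tokenized for t in set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

The IDF factor downweights terms shared by most documents, so the cosine score is driven by the more discriminative vocabulary.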
Opinion Summarization Methods
is modeled as an adjacency matrix, where each node represents a sentence, and the weight of the edge between each pair of sentences is their similarity (cosine similarity is typically used).
cosine similarity is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Chambers, Nathanael and Jurafsky, Dan
Learning Templates from Raw Text
Vector-based approaches are often adopted to represent words as feature vectors and compute their distance with cosine similarity.
Learning Templates from Raw Text
Distance is the cosine similarity between bag-of-words vector representations.
Learning Templates from Raw Text
We measure similarity using cosine similarity between the vectors in both approaches.
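The bag-of-words cosine comparison used in the excerpts above reduces to a short computation; this is a generic sketch, and the example word lists are purely illustrative.

```python
from collections import Counter
from math import sqrt

def bow_cosine(words_a, words_b):
    """Cosine similarity between two bag-of-words feature vectors,
    e.g. the context words collected for two candidate terms."""
    u, v = Counter(words_a), Counter(words_b)
    dot = sum(c * v[t] for t, c in u.items())
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0
```

Identical bags score 1.0; bags with no shared words score 0.0, regardless of their lengths.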
cosine similarity is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: