Diversity-based Ranking | The edges between corresponding nodes (dz) indicate that the cosine similarity between them is above a threshold (0.10, following Erkan and Radev (2004)).
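The thresholded graph construction above can be sketched as follows; this is an illustrative helper (the function name and list-of-lists matrix input are assumptions, not from the cited papers), keeping only edges whose cosine similarity exceeds the 0.10 cutoff:

```python
def similarity_graph(sim_matrix, threshold=0.10):
    """Connect two sentences with an edge when their cosine
    similarity exceeds the threshold (0.10, as in LexRank)."""
    n = len(sim_matrix)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim_matrix[i][j] > threshold]
```

For example, with three sentences whose pairwise similarities are 0.05, 0.3, and 0.2, only the latter two pairs become edges.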
Diversity-based Ranking | Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) uses the pairwise cosine similarity matrix and greedily chooses sentences that are the least similar to those already in the summary. |
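A minimal sketch of the greedy MMR loop described above. The lambda trade-off parameter, the max-redundancy formulation, and the toy inputs are illustrative assumptions; Carbonell and Goldstein (1998) define the general criterion, not this exact code:

```python
def mmr_select(sim_to_query, sim_matrix, k, lam=0.7):
    """Greedy Maximal Marginal Relevance: pick k sentences that
    balance relevance to the query against similarity to the
    sentences already chosen for the summary."""
    n = len(sim_to_query)
    selected, candidates = [], set(range(n))
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Redundancy = highest similarity to any already-selected sentence.
            redundancy = max((sim_matrix[i][j] for j in selected), default=0.0)
            return lam * sim_to_query[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate relevant sentences, MMR picks one of them and then prefers a less similar sentence over the duplicate.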
Diversity-based Ranking | C-LexRank is a clustering-based model in which the cosine similarities of document pairs are used to build a network of documents. |
Prior Work | Once a lexical similarity graph is built, they modify the graph based on cluster information and perform LexRank on the modified cosine similarity graph. |
Conclusion and Future Work | Our experiments show that both methods are able to improve the baseline approach, and we find that the cosine similarity between utterances or between an utterance and the whole document is not as useful as in other document summarization tasks. |
Opinion Summarization Methods | sim(s, D) is the cosine similarity between DA s and all the utterances in the dialogue from the same speaker, D. It measures the relevance of s to the entire dialogue from the target speaker.
Opinion Summarization Methods | For the cosine similarity measure, we use TF*IDF (term frequency, inverse document frequency) term weighting.
Opinion Summarization Methods | The sentence graph is modeled as an adjacency matrix, where each node represents a sentence, and the weight of the edge between each pair of sentences is their similarity (cosine similarity is typically used).
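The two snippets above together describe an adjacency matrix of pairwise cosine similarities over TF*IDF-weighted vectors. A self-contained sketch, with a simple log(N/df) IDF and whitespace tokenization as assumed simplifications:

```python
import math
from collections import Counter

def tfidf_cosine_matrix(docs):
    """Pairwise cosine similarity (adjacency) matrix over
    sentences, using TF*IDF term weighting."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency and a simple IDF = log(N / df).
    df = Counter(term for toks in tokenized for term in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0
    return [[cos(vecs[i], vecs[j]) for j in range(n)] for i in range(n)]
```

Sentences sharing content words get a positive weight, while disjoint sentences get zero, so the matrix directly serves as the weighted graph the snippet describes.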
Learning Templates from Raw Text | Vector-based approaches are often adopted to represent words as feature vectors and compute their distance with cosine similarity.
Learning Templates from Raw Text | Distance is the cosine similarity between bag-of-words vector representations. |
Learning Templates from Raw Text | We measure similarity using cosine similarity between the vectors in both approaches. |
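The bag-of-words comparison described in this section can be sketched with plain count vectors; the function name and whitespace tokenization are illustrative assumptions:

```python
import math
from collections import Counter

def bow_cosine(text_a, text_b):
    """Cosine similarity between raw bag-of-words count vectors."""
    u = Counter(text_a.lower().split())
    v = Counter(text_b.lower().split())
    dot = sum(c * v[t] for t, c in u.items())  # Counter returns 0 for missing keys
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Identical texts score 1.0 and texts with no shared words score 0.0, matching the use of cosine similarity as a distance proxy above.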