Abstract | We adopt three cohesion measures as edge weights: clue words, semantic similarity, and cosine similarity. |
Empirical Evaluation | In Section 3.3, we developed three ways to compute the weight of an edge in the sentence quotation graph, i.e., clue words, semantic similarity based on WordNet and cosine similarity. |
Empirical Evaluation | Table 1 shows the aggregated pyramid precision over all five summary lengths for CWS, CWS-Cosine, and the two semantic similarity variants, i.e., CWS-lesk and CWS-jcn. |
Extracting Conversations from Multiple Emails | 3.3.2 Semantic Similarity Based on WordNet |
Extracting Conversations from Multiple Emails | Based on this observation, we propose to use semantic similarity to measure the cohesion between two sentences. |
Extracting Conversations from Multiple Emails | We use the well-known lexical database WordNet to get the semantic similarity of two words. |
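The idea of WordNet-based word similarity can be illustrated with a path-length measure over an is-a hierarchy. The sketch below uses a tiny hypothetical taxonomy as a stand-in for WordNet (the node names and the hierarchy are illustrative, not the paper's actual resource or code):

```python
# Toy is-a hierarchy standing in for WordNet's hypernym structure.
HYPERNYM = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "animal": None,  # root
}

def path_to_root(word):
    """Chain of hypernyms from a word up to the root."""
    path = [word]
    while HYPERNYM[path[-1]] is not None:
        path.append(HYPERNYM[path[-1]])
    return path

def path_similarity(w1, w2):
    """1 / (1 + shortest path length through the hierarchy),
    analogous to WordNet's path-based measure."""
    p1, p2 = path_to_root(w1), path_to_root(w2)
    ancestors = set(p1)
    # lowest common subsumer: first node on p2 that also lies on p1
    for node in p2:
        if node in ancestors:
            lcs = node
            break
    dist = p1.index(lcs) + p2.index(lcs)
    return 1.0 / (1.0 + dist)
```

Under this measure, words that share a close common ancestor (e.g. "dog" and "wolf") score higher than words whose nearest shared ancestor is further up (e.g. "dog" and "cat").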
Introduction | (Carenini et al., 2007), semantic similarity and cosine similarity. |
Related Work | Second, we adopted only one cohesion measure (clue words, which are based on stemming) and did not consider more sophisticated ones such as semantically similar words. |
Summarization Based on the Sentence Quotation Graph | In the rest of this paper, let CWS denote the Generalized ClueWordSummarizer when the edge weight is based on clue words, and let CWS-Cosine and CWS-Semantic denote the summarizer when the edge weight is cosine similarity and semantic similarity respectively. |
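The cosine-similarity edge weight (CWS-Cosine) can be sketched as the cosine of the angle between bag-of-words vectors of the two sentences. The tokenization and term weighting below are illustrative simplifications, not necessarily the paper's exact setup:

```python
import math
from collections import Counter

def cosine_similarity(sent1, sent2):
    """Cosine similarity between bag-of-words count vectors
    of two sentences (whitespace tokenization for simplicity)."""
    v1 = Counter(sent1.lower().split())
    v2 = Counter(sent2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)
```

In the sentence quotation graph, this value would serve as the weight of the edge between the two sentences' nodes.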
Query Expansion in Axiomatic Retrieval Model | where s(q, d) is a semantic similarity function between two terms q and d, and f is a monotonically increasing function defined as |
Query Expansion in Axiomatic Retrieval Model | where β is a parameter that regulates the weighting of the original query terms relative to the semantically similar terms. |
Query Expansion in Axiomatic Retrieval Model | In our previous study (Fang and Zhai, 2006), the term similarity function s is derived from the mutual information of terms over collections constructed under the guidance of a set of term semantic similarity constraints. |
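A minimal sketch of this style of query expansion: each original query term keeps full weight, and each semantically similar term t is added with weight β · f(s(q, t)). The similarity table, the choice of f, and the threshold below are toy stand-ins, not the actual functions from the axiomatic model:

```python
# Hypothetical term-similarity table standing in for s(q, t).
SIM = {("car", "automobile"): 0.8, ("car", "vehicle"): 0.6}

def s(q, t):
    """Symmetric lookup in the toy similarity table."""
    return SIM.get((q, t), SIM.get((t, q), 0.0))

def f(x, gamma=1.0):
    """A monotonically increasing function; identity scaled by gamma
    is used here purely for illustration."""
    return gamma * x

def expand_query(query_terms, candidate_terms, beta=0.5, threshold=0.5):
    """Weight original terms at 1.0 and add similar terms
    at beta * f(s(q, t)) when similarity clears the threshold."""
    weights = {q: 1.0 for q in query_terms}
    for t in candidate_terms:
        if t in weights:
            continue
        best = max(s(q, t) for q in query_terms)
        if best >= threshold:
            weights[t] = beta * f(best)
    return weights
```

Raising β shifts weight toward the expansion terms; lowering it keeps retrieval closer to the original query.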
Term Similarity based on Lexical Resources | Since the definition provides valuable information about the semantic meaning of a term, we can use the definitions of the terms to measure their semantic similarity. |
Term Similarity based on Lexical Resources | Thus, we can compute the term semantic similarity based on synset definitions in the following way: |
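One simple instance of definition-based similarity is Lesk-style gloss overlap: count the content words the two definitions share. The stopword list, the example glosses, and the normalization by the shorter definition are illustrative choices, not necessarily the formula the paper uses:

```python
STOPWORDS = {"a", "an", "the", "of", "to", "or", "that", "is", "for"}

def gloss_overlap_similarity(def1, def2):
    """Fraction of shared content words between two synset
    definitions (Lesk-style gloss overlap)."""
    w1 = set(def1.lower().split()) - STOPWORDS
    w2 = set(def2.lower().split()) - STOPWORDS
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / min(len(w1), len(w2))
```

Two terms whose glosses share words such as "vehicle" would thus score higher than terms with disjoint definitions.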
Context and Answer Detection | here, we use the product of sim(x_u, Q_i) and sim(x_v, …) to estimate the likelihood of (u, v) being a context-answer pair, where sim(·, ·) is the semantic similarity calculated on WordNet as described in Section 3.5. |
Context and Answer Detection | The similarity feature is to capture the word similarity and semantic similarity between candidate contexts and answers. |
Context and Answer Detection | The semantic similarity between words is computed based on Wu and Palmer’s measure (Wu and Palmer, 1994) using WordNet (Fellbaum, 1998). The similarity between contiguous sentences will be used to capture the dependency for CRFs. |
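Wu and Palmer's measure relates the depths of two concepts to the depth of their lowest common subsumer: 2·depth(LCS) / (depth(c1) + depth(c2)). The sketch below computes it over a small hypothetical is-a hierarchy rather than WordNet itself:

```python
# Toy hypernym hierarchy standing in for WordNet (names are illustrative).
HYPERNYM = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "animal": None,  # root
}

def depth(node):
    """Depth of a node, with the root at depth 1 as in Wu & Palmer (1994)."""
    d = 1
    while HYPERNYM[node] is not None:
        node = HYPERNYM[node]
        d += 1
    return d

def lcs(w1, w2):
    """Lowest common subsumer: first ancestor of w2 that also subsumes w1."""
    ancestors = set()
    n = w1
    while n is not None:
        ancestors.add(n)
        n = HYPERNYM[n]
    n = w2
    while n not in ancestors:
        n = HYPERNYM[n]
    return n

def wu_palmer(w1, w2):
    """2 * depth(LCS) / (depth(w1) + depth(w2))."""
    c = lcs(w1, w2)
    return 2.0 * depth(c) / (depth(w1) + depth(w2))
```

As with the path measure, siblings under a deep common ancestor ("dog"/"wolf") score higher than cousins whose shared ancestor sits nearer the root ("dog"/"cat").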
Discussion | The resulting vector is sparser but expresses more succinctly the meaning of the predicate-argument structure, and thus allows semantic similarity to be modelled more accurately. |
Evaluation Setup | We assessed a wide range of semantic similarity measures using the WordNet similarity package (Pedersen et al., 2004). |
Evaluation Setup | Following previous work (Bullinaria and Levy, 2007), we optimized its parameters on a word-based semantic similarity task. |
Introduction | The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar (Harris, 1968). |
Using Translation Probability | FAQ Finder (Burke et al., 1997) heuristically combines statistical similarities and semantic similarities between questions to rank FAQs. |
Using Translation Probability | Conventional vector space models are used to calculate the statistical similarity and WordNet (Fellbaum, 1998) is used to estimate the semantic similarity. |
Using Translation Probability | In contrast to that, question search retrieves answers for an unlimited range of questions by focusing on finding semantically similar questions in an archive. |