Abstract | As a consequence, improving such thesauri is an important issue, one that is mainly tackled indirectly through the improvement of semantic similarity measures. |
Introduction | The distinction between these two interpretations corresponds to the distinction between the notions of semantic similarity and semantic relatedness, as drawn in (Budanitsky and Hirst, 2006) or in (Zesch and Gurevych, 2010), for instance. |
Introduction | However, the boundary between these two notions is sometimes hard to draw in existing work, as the terms semantic similarity and semantic relatedness are often used interchangeably.
Introduction | Moreover, semantic similarity is frequently considered a special case of semantic relatedness, and the two problems are often tackled with the same methods.
Principles | of its bad semantic neighbors, that is, the neighbors of the entry that are not actually semantically similar to it.
Principles | As a consequence, two words are considered semantically similar if they occur in a sufficiently large set of shared contexts.
Principles | in a sentence, from all other words and more particularly from those of its neighbors in a distributional thesaurus that are likely not actually semantically similar to it.
Applications | Turney and Littman (2003) proposed a method in which the SO of a word is calculated based on its semantic similarity with seven positive words minus its similarity with seven negative words as shown in Figure 5. |
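This paradigm-word scheme can be sketched as follows. The seven positive and seven negative paradigm words are those used by Turney and Littman (2003); the `sim` argument and the toy association table are hypothetical stand-ins for their actual similarity measures (PMI-IR or LSA).

```python
# Sketch of Turney and Littman's approach: the sentiment orientation (SO)
# of a word is its total similarity with seven positive paradigm words
# minus its total similarity with seven negative ones.

POSITIVE = ["good", "nice", "excellent", "positive", "fortunate", "correct", "superior"]
NEGATIVE = ["bad", "nasty", "poor", "negative", "unfortunate", "wrong", "inferior"]

def sentiment_orientation(word, sim):
    """Return SO(word) given a word-similarity function sim(w1, w2)."""
    pos = sum(sim(word, p) for p in POSITIVE)
    neg = sum(sim(word, n) for n in NEGATIVE)
    return pos - neg

# Toy similarity backed by a tiny hand-built association table
# (illustrative values only).
ASSOC = {("great", "good"): 0.9, ("great", "excellent"): 0.8,
         ("great", "bad"): 0.1}

def toy_sim(w1, w2):
    return ASSOC.get((w1, w2), ASSOC.get((w2, w1), 0.0))

print(sentiment_orientation("great", toy_sim))  # positive score -> positive word
```

A word with higher similarity to the positive paradigm than to the negative one receives a positive SO score.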
Evaluation and Results | PMI extracts the semantic similarity between words using their co-occurrences.
Evaluation and Results | The second row of Table 4 shows the results of using a popular semantic similarity measure, PMI, as the sentiment similarity (SS) measure in Figure 4.
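Pointwise mutual information here is the standard definition, PMI(x, y) = log2(p(x, y) / (p(x) p(y))), estimated from co-occurrence counts. A minimal sketch, with illustrative counts that are not from the paper:

```python
import math

# PMI from co-occurrence counts over a corpus of contexts:
# p(x, y), p(x), p(y) are estimated as relative frequencies.

def pmi(pair_count, count_x, count_y, total):
    p_xy = pair_count / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Example: two words co-occur in 20 of 1000 contexts,
# appearing in 50 and 80 contexts respectively.
score = pmi(pair_count=20, count_x=50, count_y=80, total=1000)
print(round(score, 3))
```

A positive score indicates the words co-occur more often than independence would predict, which is the signal PMI-based similarity measures exploit.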
Hidden Emotional Model | For this purpose, we utilize the semantic similarity between each pair of words and create an enriched matrix.
Hidden Emotional Model | To compute the semantic similarity between word senses, we utilize their synsets as follows: |
Introduction | Semantic similarity measures such as Latent Semantic Analysis (LSA) (Landauer et al., 1998) can effectively capture the similarity between semantically related words like "car" and "automobile", but they are less effective in relating words with similar sentiment orientation like "excellent" and "superior". |
Introduction | For example, the following relations show the semantic similarity between some sentiment words computed by LSA: |
Introduction | We show that our approach effectively outperforms semantic similarity measures in two NLP tasks, Indirect yes/no Question Answer Pairs (IQAPs) inference and Sentiment Orientation (SO) prediction, which are described as follows:
Related Works | Most previous works employed semantic similarity of word pairs to address SO prediction and IQAP inference tasks. |
Sentiment Similarity through Hidden Emotions | As we discussed above, semantic similarity measures are less effective at inferring sentiment similarity between word pairs.
A Unified Semantic Representation | The WordNet ontology provides a rich network structure of semantic relatedness, connecting senses directly with their hypernyms, and providing information on semantically similar senses by virtue of their nearby locality in the network.
Abstract | Semantic similarity is an essential component of many Natural Language Processing applications. |
Abstract | However, prior methods for computing semantic similarity often operate at different levels, e.g., single words or entire documents, which requires adapting the method for each data type. |
Abstract | We present a unified approach to semantic similarity that operates at multiple levels, all the way from comparing word senses to comparing text documents. |
Experiment 1: Textual Similarity | Measuring semantic similarity of textual items has applications in a wide variety of NLP tasks. |
Experiment 1: Textual Similarity | As our benchmark, we selected the recent SemEval-2012 task on Semantic Textual Similarity (STS), which was concerned with measuring the semantic similarity of sentence pairs. |
Introduction | Semantic similarity is a core technique for many topics in Natural Language Processing such as Textual Entailment (Berant et al., 2012), Semantic Role Labeling (Furstenau and Lapata, 2012), and Question Answering (Surdeanu et al., 2011). |
Introduction | Approaches to semantic similarity have often operated at separate levels: methods for word similarity are rarely applied to documents or even single sentences (Budanitsky and Hirst, 2006; Radinsky et al., 2011; Halawi et al., 2012), while document-based similarity methods require more
Introduction | Despite the potential advantages, few approaches to semantic similarity operate at the sense level due to the challenge in sense-tagging text (Navigli, 2009); for example, none of the top four systems in the recent SemEval-2012 task on textual similarity compared semantic representations that incorporated sense information (Agirre et al., 2012). |
Computational Structures for RE | Combining syntax with semantics has a clear advantage: it generalizes lexical information encapsulated in syntactic parse trees, while at the same time syntax guides semantics in order to obtain an effective semantic similarity.
Computational Structures for RE | We exploit this idea here for domain adaptation (DA): if words are generalized by semantic similarity LS, then in a hypothetical world changing LS such that it reflects the target domain would |
Computational Structures for RE | The question remains how to establish a link between the semantic similarity in the source and target domain. |
Conclusions and Future Work | We proposed syntactic tree kernels enriched by lexical semantic similarity to tackle the portability of a relation extractor to different domains. |
Introduction | In the empirical evaluation on Automatic Content Extraction (ACE) data, we evaluate the impact of convolution tree kernels embedding lexical semantic similarities.
Results | Since we focus on evaluating the impact of semantic similarity in tree kernels, we think our system is very competitive. |
Semantic Syntactic Tree Kernels | After introducing related work, we will discuss computational structures for RE and their extension with semantic similarity.
Framework Overview | 1) Using Tt, we obtain a set of relation translations with a semantic similarity score Tt(rE, rC) for an English relation rE and a Chinese relation rC (Figure 2 (b), Section 4.3) (e.g., rE = visit and rC its Chinese translation).
Framework Overview | 2) Using TR and Tt, we identify a set of semantically similar document pairs that describe the same event, with a similarity score TE(dE, dC), where dE is an English document and dC is a Chinese document (Figure 2 (c), Section 4.4).
Introduction | In particular, our approach leverages semantically similar document pairs to exclude incomparable parts that appear in one language only. |
Methods | H(ri, rj) = Hb(ri, rj) if ri ∈ RE and rj ∈ RC; Hb(rj, ri) if rj ∈ RE and ri ∈ RC; Hm(ri, rj) otherwise. Intuitively, |H(ri, rj)| indicates the strength of the semantic similarity of two relations ri and rj in either language.
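The case analysis above can be sketched as follows: a bilingual score Hb applies when the two relations come from different languages (English relation set RE, Chinese relation set RC), and a monolingual score Hm otherwise. Hb, Hm, and the example relations here are illustrative placeholders, not the paper's actual measures.

```python
# Sketch of the piecewise combination of relation-similarity scores.

def combined_similarity(ri, rj, RE, RC, Hb, Hm):
    if ri in RE and rj in RC:
        return Hb(ri, rj)       # English-Chinese pair
    if rj in RE and ri in RC:
        return Hb(rj, ri)       # Chinese-English pair, reordered
    return Hm(ri, rj)           # same-language pair

RE = {"visit", "head to"}
RC = {"fangwen"}                 # transliterated stand-in for a Chinese relation
Hb = lambda e, c: 0.8            # dummy bilingual score
Hm = lambda a, b: 0.5            # dummy monolingual score

print(combined_similarity("visit", "fangwen", RE, RC, Hb, Hm))   # bilingual case
print(combined_similarity("visit", "head to", RE, RC, Hb, Hm))   # monolingual case
```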
Methods | However, as shown in Table 2, we cannot use this value directly to measure the similarity because the support intersection of semantically similar bilingual relations (e.g., |H(head to, ...)|
Methods | We consider that a pair of an English entity eE and a Chinese entity eC are likely to indicate the same real-world entity if they have 1) semantically similar relations to the same entity 2) under the same context.
Related Work | Semantically similar relation mining |
Related Work | In automatically constructed knowledge bases, finding semantically similar relations can improve understanding of the Web, which describes content with many different expressions.
Related Work | NELL (Mohamed et al., 2011) finds related relations using seed pairs of one given relation; then, using K-means clustering, it finds relations that are semantically similar to the given relation.
Experiments | (4) To understand the effect of utilizing syntactic structure and semantic similarity for constructing the summarization graph, we ran the experiments using just the unigrams and bigrams; we obtained a ROUGE-1 F-score of 37.1. |
Using the Framework | We identify similar views/opinions by computing semantic similarity rather than using standard similarity measures (such as cosine similarity based on exact lexical matches between different nodes in the graph). |
Using the Framework | For each pair of nodes (u, v) in the graph, we compute the semantic similarity score (using WordNet) between every pair of dependency relations (rel: a, b) in u and v as: s(u, v) = Σ WN(ai, aj) × WN(bi, bj),
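A minimal sketch of this computation follows. Each node holds dependency triples (rel, a, b), and s(u, v) sums WN(ai, aj) × WN(bi, bj) over all pairs of triples from u and v. The WordNet similarity WN is stubbed here with a toy lookup table; the paper's actual WordNet measure is not specified in this excerpt.

```python
# Toy word-pair similarities standing in for a WordNet-based measure.
SIM = {("movie", "film"): 0.9, ("good", "great"): 0.8}

def WN(x, y):
    """Stub WordNet similarity: 1.0 for identical words, table lookup otherwise."""
    if x == y:
        return 1.0
    return SIM.get((x, y), SIM.get((y, x), 0.0))

def node_similarity(u, v):
    """u and v are lists of (rel, a, b) dependency triples."""
    return sum(WN(a_i, a_j) * WN(b_i, b_j)
               for (_, a_i, b_i) in u
               for (_, a_j, b_j) in v)

u = [("amod", "movie", "good")]
v = [("amod", "film", "great")]
print(node_similarity(u, v))  # 0.9 * 0.8
```

Because WN matches near-synonyms rather than exact strings, two nodes can score highly even with no lexical overlap, which is the point the passage makes against cosine similarity on exact matches.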
Using the Framework | Using the syntactic structure along with semantic similarity helps us identify useful (valid) nuggets of information within comments (or documents), avoid redundancies, and identify similar views in a semantic space. |
Conclusion and Future Work | Both the meta-path-based and the social-correlation-based semantic similarity measurements prove powerful and complementary.
Introduction | • We model social user behaviors and use social correlation to assist in measuring semantic similarities, because the users who posted a morph and its corresponding target tend to share similar interests and opinions.
Related Work | In this paper we exploit cross-genre information and social correlation to measure semantic similarity.
Target Candidate Ranking | 4.2.3 Meta-Path-Based Semantic Similarity Measurements |
Experiments | In general, the errors are produced by two different causes acting together: (i) imbalanced distribution of the relations, and (ii) semantic similarity between the relations. |
Experiments | The most frequent relation, Elaboration, tends to mislead others, especially the ones which are semantically similar (e.g., Explanation, Background) and less frequent (e.g., Summary, Evaluation).
Experiments | The relations which are semantically similar mislead each other (e.g., Temporal:Background, Cause:Explanation). |
Methodology 2.1 The Problem | When two sentences in S or T are not too short, or their content is not divergent in meaning, their semantic similarity can be estimated in terms of common words.
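A minimal common-word similarity for sentence pairs, as the passage suggests: the more words two sentences share, the more similar they are. Jaccard overlap of lowercased token sets is one simple choice; this is an illustrative assumption, not the paper's exact definition.

```python
# Common-word sentence similarity via Jaccard overlap of token sets.

def word_overlap_similarity(s1, s2):
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / len(w1 | w2)

a = "the cat sat on the mat"
b = "a cat sat on a mat"
print(round(word_overlap_similarity(a, b), 3))
```

As the passage notes, this estimate breaks down for very short sentences or pairs whose meanings diverge despite shared words, which motivates the alternative approaches mentioned next.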
Methodology 2.1 The Problem | Although semantic similarity estimation is a straightforward approach to deriving the two affinity matrices, other approaches are also feasible. |
Methodology 2.1 The Problem | To demonstrate the validity of the monolingual consistency, the semantic similarity defined above is evaluated as follows.