Abstract | This paper proposes a knowledge-based method, called Structural Semantic Relatedness (SSR), which can enhance the named entity disambiguation by capturing and leveraging the structural semantic knowledge in multiple knowledge sources. |
Introduction | This model measures similarity based only on the co-occurrence statistics of terms, without considering semantic relations such as social relatedness between named entities, associative relatedness between concepts, and lexical relatedness (e.g., acronyms, synonyms) between key terms. |
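The co-occurrence-only similarity criticized above can be made concrete with a minimal sketch: build sparse context-count vectors from a toy corpus and compare them with cosine similarity. The corpus, window size, and vocabulary are illustrative assumptions, not data from any of the cited papers.

```python
# Minimal sketch of co-occurrence-based term similarity (toy corpus is an assumption).
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Map each term to a Counter of terms seen within `window` positions."""
    vectors = {}
    for tokens in sentences:
        for i, w in enumerate(tokens):
            ctx = vectors.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "apple banana fruit market".split(),
    "banana apple fruit stall".split(),
    "car engine road highway".split(),
]
vecs = cooccurrence_vectors(corpus)
sim_fruit = cosine(vecs["apple"], vecs["banana"])   # terms sharing contexts
sim_cross = cosine(vecs["apple"], vecs["engine"])   # no shared contexts
```

Note how the model scores `apple`/`banana` as related only because they share neighbouring words; it has no notion of *which* semantic relation links them, which is exactly the limitation the excerpt describes.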
Introduction | For example, as shown in Figure 2, the link structure of Wikipedia contains rich semantic relations between concepts. |
Introduction | The problem of these knowledge sources is that they are heterogeneous (e.g., they contain different types of semantic relations and different types of concepts) and most of the semantic knowledge within them is embedded in complex structures, such as graphs and networks. |
Abstract | A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. |
Abstract | We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. |
Abstract | We evaluate the performance of our algorithm on multiple semantic relations expressed using “verb”, “noun”, and “verb prep” lexico-syntactic patterns. |
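The single-seed bootstrapping idea in the two excerpts above can be sketched as follows: a lexico-syntactic pattern is instantiated with the seed, each harvested argument then re-seeds the same pattern on the next round, so the pattern is applied recursively. The snippet collection (standing in for Web text) and the exact pattern shape ("such as X and Y") are illustrative assumptions, not the paper's pattern inventory.

```python
# Hedged sketch of minimally supervised bootstrapping with a recursive
# lexico-syntactic pattern; SNIPPETS stands in for Web search results.
import re

SNIPPETS = [
    "popular fruits such as apples and oranges are cheap",
    "fruits such as oranges and bananas ripen quickly",
    "fruits such as bananas and mangoes are sweet",
    "cars such as sedans and coupes are common",
]

def bootstrap(seed, snippets, max_rounds=5):
    """Start from one seed; each harvested argument recursively re-seeds
    the pattern on the next round."""
    learned, frontier = {seed}, {seed}
    for _ in range(max_rounds):
        new = set()
        for s in frontier:
            pat = re.compile(r"such as %s and (\w+)" % re.escape(s))
            for text in snippets:
                for m in pat.finditer(text):
                    if m.group(1) not in learned:
                        new.add(m.group(1))
        if not new:
            break
        learned |= new
        frontier = new
    return learned

harvested = bootstrap("apples", SNIPPETS)
```

Because the seed anchors the pattern, the harvest stays inside one relation's argument set: the `cars such as ...` snippet is never reached from the fruit seed.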
Introduction | (Pennacchiotti and Pantel, 2006) proposed an algorithm for automatically ontologizing semantic relations into WordNet. |
Introduction | Given these considerations, we address in this paper the following question: How can the selectional restrictions of semantic relations be learned automatically from the Web with minimal effort using lexico-syntactic recursive patterns? |
Introduction | • A novel representation of semantic relations using recursive lexico-syntactic patterns. |
Related Work | The middle string denotes some (unspecified) semantic relation while the first and third denote the learned arguments of this relation. |
Related Work | But TextRunner does not seek specific semantic relations, and does not reuse the patterns it harvests with different arguments in order to extend their yields. |
Related Work | Clearly, it is important to be able to specify both the actual semantic relation sought and use its textual expression(s) in a controlled manner for maximal benefit. |
Conclusion | In this paper, we present a novel, fine-grained taxonomy of 43 noun-noun semantic relations, the largest annotated noun compound dataset yet created, and a supervised classification method for automatic noun compound interpretation. |
Evaluation | Kim and Baldwin (2005) report an agreement of 52.31% (not κ) for their dataset using Barker and Szpakowicz’s (1998) 20 semantic relations. |
Evaluation | (2005) report .58 κ using a set of 35 semantic relations, only 21 of which were used, and a .80 κ score using Lauer’s 8 prepositional paraphrases. |
Evaluation | Girju (2007) reports .61 κ agreement using a similar set of 22 semantic relations for noun compound annotation in which the annotators are shown translations of the compound in foreign languages. |
Future Work | In the future, we plan to focus on the interpretation of noun compounds with 3 or more nouns, a problem that includes bracketing noun compounds into their dependency structures in addition to noun-noun semantic relation interpretation. |
Related Work | In contrast to studies that claim the existence of a relatively small number of semantic relations , Downing (1977) presents a strong case for the existence of an unbounded number of relations. |
Taxonomy | Table 1: The semantic relations, their frequency in |
BabelNet | Each edge is labeled with a semantic relation from R, e.g. |
BabelNet | , ε}, where ε denotes an unspecified semantic relation. |
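The labelled graph described in the two BabelNet excerpts above can be sketched as edges carrying a relation from R, with ε used when the relation is left unspecified. The nodes and labels below are toy assumptions, not actual BabelNet content.

```python
# Hedged sketch of a graph whose edges carry a semantic relation from R,
# or the special label ε for an unspecified relation.
EPSILON = "ε"
R = {"is-a", "part-of", EPSILON}

edges = [
    ("apple", "fruit", "is-a"),
    ("wheel", "car", "part-of"),
    ("apple", "pie", EPSILON),   # related, but the relation is unspecified
]

def neighbours(node, relation=None):
    """Nodes reachable from `node`, optionally filtered by edge label."""
    return [t for s, t, r in edges
            if s == node and (relation is None or r == relation)]
```

Filtering by label recovers typed relations where they exist, while ε-edges still contribute untyped relatedness.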
Conclusions | amounts of semantic relations and can be leveraged to enable multilinguality. |
Conclusions | The resource includes millions of semantic relations , mainly from Wikipedia (however, WordNet relations are labeled), and contains almost 3 million concepts (6.7 labels per concept on average). |
Introduction | The result is an “encyclopedic dictionary”, that provides concepts and named entities lexicalized in many languages and connected with large amounts of semantic relations. |
Related Work | However, while providing lexical resources on a very large scale for hundreds of thousands of language pairs, these do not encode semantic relations between concepts denoted by their lexical entries. |
Experiments | The main reason for the improvements is that the DBN based approach is able to learn the semantic relationships between the words in QA pairs from the training set. |
Introduction | How to model the semantic relationship between two short texts using simple textual features? |
Introduction | The network establishes the semantic relationship for QA pairs by minimizing the answer-to-question reconstructing error. |
Related Work | Judging whether a candidate answer is semantically related to the question in the cQA page automatically is a challenging task. |
Related Work | The SMT based methods are effective on modeling the semantic relationship between questions and answers and expending users’ queries in answer retrieval (Riezler et al., 2007; Berger et al., |
The Deep Belief Network for QA pairs | In this section, we propose a deep belief network for modeling the semantic relationship between questions and their answers. |
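The answer-to-question reconstruction objective mentioned in these excerpts can be illustrated with a much smaller stand-in: instead of a deep belief network, a single linear layer W is fit by gradient descent so that an answer vector approximately reconstructs its paired question vector, and QA pairs are then scored by reconstruction error. The toy bag-of-words vectors are assumptions, not the paper's features.

```python
# Hedged sketch: score QA pairs by answer-to-question reconstruction error.
# A single linear layer stands in for the paper's deep belief network.

Q = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]]  # question vectors
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # paired answer vectors

dim = 3
W = [[0.0] * dim for _ in range(dim)]        # answer -> question map

def reconstruct(a):
    return [sum(a[i] * W[i][j] for i in range(dim)) for j in range(dim)]

for _ in range(500):                         # minimise sum ||q - aW||^2
    for q, a in zip(Q, A):
        r = reconstruct(a)
        for i in range(dim):
            for j in range(dim):
                W[i][j] -= 0.1 * 2 * (r[j] - q[j]) * a[i]

def reconstruction_error(q, a):
    """Lower error -> the answer 'explains' the question better."""
    r = reconstruct(a)
    return sum((qj - rj) ** 2 for qj, rj in zip(q, r))

err_match = reconstruction_error(Q[0], A[0])     # matched pair
err_mismatch = reconstruction_error(Q[0], A[2])  # mismatched pair
```

After training, the matched answer reconstructs its own question with near-zero error, while a mismatched answer does not; ranking candidate answers by this error is the scoring idea the excerpts describe.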
Abstract | Information-extraction (IE) systems seek to distill semantic relations from natural-language text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. |
Abstract | Like TextRunner, WOE’s extractor eschews lexicalized features and handles an unbounded set of semantic relations. |
Problem Definition | An open information extractor is a function from a document, d, to a set of triples, {⟨arg1, rel, arg2⟩}, where the args are noun phrases and rel is a textual fragment indicating an implicit, semantic relation between the two noun phrases. |
Wikipedia-based Open IE | WOEparse uses a pattern learner to classify whether the shortest dependency path between two noun phrases indicates a semantic relation. |
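The extractor interface defined in the Problem Definition excerpt, a function from a document to ⟨arg1, rel, arg2⟩ triples, can be made concrete with a deliberately shallow sketch. Real systems like WOE classify dependency paths; this toy version uses a regex over capitalised noun phrases and a lowercase verb (optionally verb + preposition) string, purely to illustrate the type signature.

```python
# Hedged sketch of an open-IE extractor: document -> set of triples.
# The regex grammar below is a toy assumption, not WOE's method.
import re

TRIPLE = re.compile(
    r"(?P<arg1>[A-Z][a-z]+(?: [A-Z][a-z]+)*)\s+"   # capitalised noun phrase
    r"(?P<rel>[a-z]+(?: [a-z]+)?)\s+"              # verb or verb + preposition
    r"(?P<arg2>[A-Z][a-z]+(?: [A-Z][a-z]+)*)"      # capitalised noun phrase
)

def extract(document):
    """Return the set of (arg1, rel, arg2) triples found in `document`."""
    return {(m["arg1"], m["rel"], m["arg2"]) for m in TRIPLE.finditer(document)}

triples = extract("Tesla admired Edison. Tesla worked for Westinghouse.")
```

Note that `rel` is just a textual fragment: the extractor commits to no fixed relation inventory, which is what makes the set of semantic relations unbounded.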
Word Polarity | For example, the synonyms of any word are semantically related to it. |
Word Polarity | The intuition behind connecting semantically related words is that such words tend to have similar polarity. |
Word Polarity | We construct a network where two nodes are linked if they are semantically related. |
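The construction described in the three Word Polarity excerpts can be sketched as a small graph over which seed polarities spread along "semantically related" edges. The word graph, the seed labels, and the unweighted breadth-first propagation are toy assumptions; real systems typically use WordNet relations and weighted propagation.

```python
# Hedged sketch: propagate seed polarities over a word-relatedness graph.
from collections import deque

EDGES = {                       # undirected "semantically related" links
    "good": ["great", "fine"],
    "great": ["good", "superb"],
    "fine": ["good"],
    "superb": ["great"],
    "bad": ["awful"],
    "awful": ["bad"],
}

def propagate(seeds):
    """Breadth-first spread of seed polarities over the relatedness graph."""
    polarity = dict(seeds)
    queue = deque(polarity)
    while queue:
        word = queue.popleft()
        for neigh in EDGES.get(word, []):
            if neigh not in polarity:   # first (shortest-path) label wins
                polarity[neigh] = polarity[word]
                queue.append(neigh)
    return polarity

labels = propagate({"good": "+", "bad": "-"})
```

Words never linked to a seed keep no label, so the graph's connectivity directly determines coverage, which is why the choice of relatedness links matters.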
Algorithm Analysis | The variation in scoring illustrates that different algorithms are more effective at capturing certain semantic relations . |
Benchmarks | Word association tests measure the semantic relatedness of two words by comparing word space similarity with human judgements. |
Benchmarks | This test is notably more challenging for word space models because human ratings are not tied to a specific semantic relation . |
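The evaluation described in the two Benchmarks excerpts, comparing word-space similarity against human judgements, is usually scored with Spearman's rank correlation. A minimal sketch follows; the embeddings and human ratings are toy assumptions, and the rank-correlation formula assumes no ties.

```python
# Hedged sketch of a word association benchmark: rank-correlate model
# cosine similarities with human relatedness ratings (toy data).
from math import sqrt

VECS = {
    "cup": (1.0, 0.2),
    "mug": (0.9, 0.3),
    "coffee": (0.7, 0.6),
    "car": (0.1, 1.0),
}
HUMAN = {("cup", "mug"): 9.0, ("cup", "coffee"): 6.5, ("cup", "car"): 1.5}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman's rho via rank differences; assumes no ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

pairs = list(HUMAN)
model_scores = [cosine(VECS[a], VECS[b]) for a, b in pairs]
human_scores = [HUMAN[p] for p in pairs]
rho = spearman(model_scores, human_scores)
```

Because only the ranking is compared, the benchmark rewards any model that orders pairs like humans do, regardless of *which* semantic relation underlies each pair, which is exactly why the excerpts call it challenging for word space models.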
Introduction | The first type is semantic prediction, as evidenced in semantic priming: a word that is preceded by a semantically related prime or a semantically congruous sentence fragment is processed faster (Stanovich and West 1981; van Berkum et al. |
Introduction | Comprehenders are faster at naming words that are syntactically compatible with prior context, even when they bear no semantic relationship to the context (Wright and Garrett 1984). |
Models of Processing Difficulty | Distributional models of meaning have been commonly used to quantify the semantic relation between a word and its context in computational studies of lexical processing. |