Index of papers in Proc. ACL that mention
  • semantic relations
Kozareva, Zornitsa and Hovy, Eduard
Abstract
A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations.
Abstract
We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web.
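The excerpt above describes bootstrapping from a single seed using a recursive lexico-syntactic pattern, feeding each harvested argument back in as a new seed. A minimal sketch of that idea, using a hyponymy-style "such as" pattern as a stand-in for the paper's verb/noun patterns and a hypothetical toy corpus (the real algorithm queries the Web):

```python
import re

# Toy corpus standing in for Web text (hypothetical sentences, for illustration).
corpus = [
    "celebrities such as actors and singers attract attention",
    "celebrities such as singers and athletes attract attention",
    "celebrities such as athletes and politicians attract attention",
]

def bootstrap(category, seed, corpus, max_iter=5):
    # Recursive doubly-anchored pattern: "<category> such as <seed> and *".
    # Every newly harvested argument becomes a seed in the next round.
    learned, frontier = {seed}, [seed]
    for _ in range(max_iter):
        if not frontier:
            break
        nxt = []
        for s in frontier:
            pat = re.compile(rf"{category} such as {re.escape(s)} and (\w+)")
            for sent in corpus:
                for m in pat.finditer(sent):
                    if m.group(1) not in learned:
                        learned.add(m.group(1))
                        nxt.append(m.group(1))
        frontier = nxt
    return learned

print(sorted(bootstrap("celebrities", "actors", corpus)))
# → ['actors', 'athletes', 'politicians', 'singers']
```

Starting from the single seed "actors", each round harvests one new argument, which in turn anchors the next query; this is the recursion that lets one seed cover the whole class.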
Abstract
We evaluate the performance of our algorithm on multiple semantic relations expressed using “verb”, “noun”, and “verb prep” lexico-syntactic patterns.
Introduction
(Pennacchiotti and Pantel, 2006) proposed an algorithm for automatically ontologizing semantic relations into WordNet.
Introduction
Given these considerations, we address in this paper the following question: How can the selectional restrictions of semantic relations be learned automatically from the Web with minimal effort using lexico-syntactic recursive patterns?
Introduction
• A novel representation of semantic relations using recursive lexico-syntactic patterns.
Related Work
The middle string denotes some (unspecified) semantic relation while the first and third denote the learned arguments of this relation.
Related Work
But TextRunner does not seek specific semantic relations, and does not reuse the patterns it harvests with different arguments in order to extend their yields.
Related Work
Clearly, it is important to be able to specify both the actual semantic relation sought and use its textual expression(s) in a controlled manner for maximal benefit.
semantic relations is mentioned in 31 sentences in this paper.
Topics mentioned in this paper:
Kim, Seokhwan and Lee, Gary Geunbae
Conclusions
Experimental results show that our graph-based projection helped to improve the performance of the cross-lingual annotation projection of the semantic relations, and our system outperforms the other systems, which incorporate monolingual external resources.
Cross-lingual Annotation Projection for Relation Extraction
Although some noise reduction strategies for projecting semantic relations were proposed (Kim et al., 2010), the direct projection approach is still vulnerable to erroneous inputs generated by submodules.
Graph Construction
Graph construction for projecting semantic relationships is more complicated than part-of-speech tagging because the unit instance of projection is a pair of entities and not a word or morpheme that is equivalent to the alignment unit.
Graph Construction
The larger the y+ value, the more likely the instance has a semantic relationship.
Graph Construction
The other type of vertices, context vertices, are used for identifying relation descriptors that are contextual subtexts that represent semantic relationships of the positive instances.
Implementation
Table 1: Comparison between direct and graph-based projection approaches to extract semantic relationships for four relation types
Introduction
Relation extraction aims to identify semantic relations of entities in a document.
Introduction
Several datasets that provide manual annotations of semantic relationships are available from MUC (Grishman and Sundheim, 1996) and ACE (Doddington et al., 2004) projects, but these datasets contain labeled training examples in only a few major languages, including English, Chinese, and Arabic.
Introduction
Because manual annotation of semantic relations for such resource-poor languages is very expensive, we instead consider weakly supervised learning techniques (Riloff and Jones, 1999; Agichtein and Gravano, 2000; Zhang, 2004; Chen et al., 2006) to learn the relation extractors without significant annotation efforts.
semantic relations is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Liu, Kang and Xu, Liheng and Zhao, Jun
Abstract
First, compared to previous methods which solely employed opinion relations among words, our method constructs a heterogeneous graph to model two types of relations, including semantic relations and opinion relations.
Introduction
We call such relations between homogeneous words semantic relations.
Introduction
Intuitively, besides opinion relations, semantic relations may provide additional rich clues for indicating opinion targets/words.
Introduction
Solid curves and dotted lines respectively mean semantic relations and opinion relations between two candidates.
Related Work
However, all the aforementioned methods employed only opinion relations for the extraction, ignoring semantic relations among homogeneous candidates.
Related Work
In terms of considering semantic relations among words, our method is related to several approaches based on topic models (Zhao et al., 2010; Moghaddam and Ester, 2011; Moghaddam and Ester, 2012a; Moghaddam and Ester, 2012b; Mukherjee and Liu, 2012).
Related Work
Although these models could be used for our task according to the associations between candidates and topics, solely employing semantic relations is still one-sided and insufficient to obtain expected performance.
The Proposed Method
Different from traditional methods, besides opinion relations among words, we additionally capture semantic relations among homogeneous candidates.
The Proposed Method
E″ ⊆ E represents the semantic relations between two opinion target candidates.
semantic relations is mentioned in 33 sentences in this paper.
Topics mentioned in this paper:
Bernhard, Delphine and Gurevych, Iryna
Abstract
We also show that the monolingual translation probabilities obtained (i) are comparable to traditional semantic relatedness measures and (ii) significantly improve the results over the query likelihood and the vector-space model for answer finding.
Introduction
To do so, we compare translation probabilities with concept vector based semantic relatedness measures with respect to human relatedness rankings for reference word pairs.
Introduction
Section 2 discusses related work on semantic relatedness and statistical translation models for retrieval.
Introduction
Semantic relatedness experiments are detailed in Section 4.
Parallel Datasets
the different kinds of data encode different types of information, including semantic relatedness and similarity, as well as morphological relatedness.
Related Work
2.2 Semantic Relatedness
Related Work
While classical measures of semantic relatedness have been extensively studied and compared, based on comparisons with human relatedness judgements or word-choice problems, there is no comparable intrinsic study of the relatedness measures obtained through word translation probabilities.
Related Work
In this study, we use the correlation with human rankings for reference word pairs to investigate how word translation probabilities compare with traditional semantic relatedness measures.
Semantic Relatedness Experiments
The aim of this first experiment is to perform an intrinsic evaluation of the word translation probabilities obtained by comparing them to traditional semantic relatedness measures on the task of ranking word pairs.
Semantic Relatedness Experiments
Human judgements of semantic relatedness can be used to evaluate how well semantic relatedness measures reflect human rankings by correlating their ranking results with Spearman’s rank correlation coefficient.
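The evaluation described here correlates a measure's ranking of word pairs with human relatedness rankings via Spearman's rank correlation coefficient. A self-contained sketch, computed as Pearson correlation over tie-averaged ranks (the scores in the demo are made-up illustrative values):

```python
def ranks(values):
    # Assign 1-based ranks, averaging over tie groups.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Spearman's rho = Pearson correlation of the two rank vectors.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

human = [9.0, 7.5, 6.0, 2.0]      # hypothetical human judgements
measure = [0.82, 0.75, 0.60, 0.10]  # hypothetical measure scores
print(round(spearman(human, measure), 3))
# → 1.0 (the two rankings are identical)
```

Because only ranks matter, a measure need not reproduce the human scores themselves, only their ordering, to score ρ = 1.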
semantic relations is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Huang, Hongzhao and Cao, Yunbo and Huang, Xiaojiang and Ji, Heng and Lin, Chin-Yew
Abstract
In order to identify semantically-related mentions for collective inference, we detect meta path-based semantic relations through social networks.
Principles and Approach Overview
Principle 3 (Semantic Relatedness): Two highly semantically-related mentions are more likely to be linked to two highly semantically-related concepts.
Principles and Approach Overview
The label assignment is obtained by our semi-supervised graph regularization framework based on a relational graph, which is constructed from local compatibility, coreference, and semantic relatedness relations.
Relational Graph Construction
In this subsection, we introduce the concept meta path which will be used to detect coreference (section 4.3) and semantic relatedness relations (section 4.4).
Relational Graph Construction
Each meta path represents one particular semantic relation.
Relational Graph Construction
4.4 Semantic Relatedness
semantic relations is mentioned in 16 sentences in this paper.
Topics mentioned in this paper:
Hashimoto, Chikara and Torisawa, Kentaro and Kloetzer, Julien and Sano, Motoki and Varga, István and Oh, Jong-Hoon and Kidawara, Yutaka
Abstract
We propose a supervised method of extracting event causalities like conduct slash-and-burn agriculture → exacerbate desertification from the web using semantic relation (between nouns), context, and association features.
Event Causality Extraction Method
3.2.1 Semantic Relation Features
Event Causality Extraction Method
We hypothesize that two nouns with some particular semantic relations are more likely to constitute event causality.
Event Causality Extraction Method
Below we describe the semantic relations that we believe are likely to constitute event causality.
Introduction
slash-and-burn agriculture and desertification) that take some specific binary semantic relations (e.g.
Introduction
Note that semantic relations are not restricted to those directly relevant to causality like A CAUSES B but can be those that might seem irrelevant to causality like A IS AN INGREDIENT FOR B (e.g.
Introduction
Our underlying intuition is the observation that event causality tends to hold between two entities linked by semantic relations which roughly entail that one entity strongly affects the other.
Related Work
Besides features similar to those described above, we propose semantic relation features that include those that are not obviously related to causality.
Related Work
(2012) used semantic relations to generalize acquired causality instances.
semantic relations is mentioned in 25 sentences in this paper.
Topics mentioned in this paper:
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
Abstract
We merge these sense clusters into semantic relations using hierarchical agglomerative clustering.
Evaluations
Pairwise metrics measure how often two tuples which are clustered in one semantic relation are labeled with the same Freebase label.
Experiments
DIRT calculates distributional similarities between different paths to find paths which bear the same semantic relation.
Introduction
Relation extraction (RE) is the task of determining semantic relations between entities mentioned in text.
Our Approach
We induce pattern senses by clustering the entity pairs associated with a pattern, and discover semantic relations by clustering these sense clusters.
Our Approach
We take each sense cluster of a pattern as an atomic cluster, and use hierarchical agglomerative clustering to organize them into semantic relations.
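The clustering step described in these excerpts can be sketched schematically: treat each pattern's sense cluster as a set of entity pairs and greedily merge the most similar pair of clusters until similarity falls below a threshold. Jaccard overlap is used here as a stand-in similarity; the paper's actual features and linkage criterion differ:

```python
def jaccard(a, b):
    # Overlap of two sets of entity pairs.
    return len(a & b) / len(a | b)

def hac(sense_clusters, threshold=0.3):
    # Greedy agglomerative clustering: repeatedly merge the closest
    # pair of clusters until no pair exceeds the similarity threshold.
    clusters = [set(c) for c in sense_clusters]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = jaccard(clusters[i], clusters[j])
                if best is None or s > best[0]:
                    best = (s, i, j)
        s, i, j = best
        if s < threshold:
            break
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Hypothetical sense clusters: sets of (arg1, arg2) entity pairs.
senses = [{("A", "B"), ("C", "D")}, {("A", "B"), ("E", "F")}, {("X", "Y")}]
print(len(hac(senses)))
# → 2 (the first two senses merge into one semantic relation)
```

Each surviving cluster then plays the role of one semantic relation, i.e., a set of sense clusters of patterns, matching the description above.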
Our Approach
Therefore, a semantic relation comprises a set of sense clusters of patterns.
Related Work
Both use distributional similarity to find patterns representing similar semantic relations .
semantic relations is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Tratz, Stephen and Hovy, Eduard
Abstract
The English ’s possessive construction occurs frequently in text and can encode several different semantic relations; however, it has received limited attention from the computational linguistics community.
Abstract
This paper describes the creation of a semantic relation inventory covering the use of ’s, an inter-annotator agreement study to calculate how well humans can agree on the relations, a large collection of possessives annotated according to the relations, and an accurate automatic annotation system for labeling new examples.
Background
Badulescu and Moldovan (2009) investigate both ’s-constructions and of-constructions in the same context using a list of 36 semantic relations (including OTHER).
Background
For the 960 extracted ’s-possessive examples, only 20 of their semantic relations are observed, including OTHER, with 8 of the observed relations occurring fewer than 10 times.
Background
Also, it is sometimes difficult to understand the meaning of the semantic relations, partly because most relations are only described by a single example and, to a lesser extent, because the bulk of the given examples are of-constructions.
Introduction
The English ’s possessive construction occurs frequently in text—approximately 1.8 times for every 100 words in the Penn Treebank (Marcus et al., 1993)—and can encode a number of different semantic relations including ownership (John’s car), part-of-whole (John’s arm), extent (6 hours’ drive), and location (America’s rivers).
Introduction
These interpretations could be valuable for machine translation to or from languages that allow different semantic relations to be encoded by
Introduction
This paper presents an inventory of 17 semantic relations expressed by the English ’s-construction, a large dataset annotated according to this inventory, and an accurate automatic classification system.
Semantic Relation Inventory
The initial semantic relation inventory for possessives was created by first examining some of the relevant literature on possessives, including work by Badulescu and Moldovan (2009), Barker (1995), Quirk et al.
Semantic Relation Inventory
Table 2: The semantic relations proposed by Quirk et al.
semantic relations is mentioned in 19 sentences in this paper.
Topics mentioned in this paper:
Ferret, Olivier
Abstract
However, they are far from containing only interesting semantic relations.
Experiments and evaluation
cerns the type of semantic relations: results with Moby as reference are improved to a larger extent than results with WordNet as reference.
Experiments and evaluation
This suggests that our procedure is more effective for semantically related words than for semantically similar words, which can be considered as a little bit surprising since the notion of context in our discriminative classifier seems a priori more strict than in “classical” distributional contexts.
Introduction
The term semantic neighbor is very generic and can have two main interpretations according to the kind of semantic relations it is based on: one relies only on paradigmatic relations, such as hypernymy or synonymy, while the other consid-
Introduction
The distinction between these two interpretations refers to the distinction between the notions of semantic similarity and semantic relatedness as it was done in (Budanitsky and Hirst, 2006) or in (Zesch and Gurevych, 2010) for instance.
Introduction
However, the limit between these two notions is sometimes hard to find in existing work as terms semantic similarity and semantic relatedness are often used interchangeably.
Related work
The building of a distributional thesaurus is generally viewed as an application or a mode of evaluation of work on semantic similarity or semantic relatedness.
Related work
arises from the imbalance between semantic similarity and semantic relatedness among training examples: most of the selected examples were pairs of words linked by semantic relatedness because this kind of relation is more frequent among semantic neighbors than relations based on semantic similarity.
semantic relations is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Han, Xianpei and Zhao, Jun
Abstract
This paper proposes a knowledge-based method, called Structural Semantic Relatedness (SSR), which can enhance the named entity disambiguation by capturing and leveraging the structural semantic knowledge in multiple knowledge sources.
Introduction
This model measures similarity based on only the co-occurrence statistics of terms, without considering all the semantic relations like social relatedness between named entities, associative relatedness between concepts, and lexical relatedness (e.g., acronyms, synonyms) between key terms.
Introduction
For example, as shown in Figure 2, the link structure of Wikipedia contains rich semantic relations between concepts.
Introduction
The problem of these knowledge sources is that they are heterogeneous (e.g., they contain different types of semantic relations and different types of concepts) and most of the semantic knowledge within them is embedded in complex structures, such as graphs and networks.
semantic relations is mentioned in 52 sentences in this paper.
Topics mentioned in this paper:
Yan, Yulan and Okazaki, Naoaki and Matsuo, Yutaka and Yang, Zhenglu and Ishizuka, Mitsuru
Characteristics of Wikipedia articles
A common assumption is that, when investigating the semantics in articles such as those in Wikipedia (e.g., Semantic Wikipedia (Volkel et al., 2006)), key information related to a concept described on a page p lies within the set of links l(p) on that page; particularly, it is likely that a salient semantic relation r exists between p and a related page p′ ∈ l(p).
Conclusions
To discover a range of semantic relations from a large corpus, we present an unsupervised relation extraction method using deep linguistic information to alleviate surface and noisy surface patterns generated from a large corpus, and use Web frequency information to ease the sparseness of linguistic information.
Introduction
A salient challenge and research interest for frequent pattern mining is abstraction away from different surface realizations of semantic relations to discover discriminative patterns efficiently.
Introduction
Linguistic analysis is another effective technology for semantic relation extraction, as described in many reports such as (Kambhatla, 2004); (Bunescu and Mooney, 2005); (Harabagiu et al., 2005); (Nguyen et al., 2007).
Introduction
Currently, linguistic approaches for semantic relation extraction are mostly supervised, relying on pre-specification of the desired relation or initial seed words or patterns from hand-coding.
Pattern Combination Method for Relation Extraction
Given a concept described in a Wikipedia article, our idea of preprocessing executes initial consideration of all anchor-text concepts linking to other Wikipedia articles in the article as related concepts that might share a semantic relation with the entitled concept.
Pattern Combination Method for Relation Extraction
Querying a concept pair using a search engine (Google), we characterize the semantic relation between the pair by leveraging the vast size of the Web.
Pattern Combination Method for Relation Extraction
A salient difficulty posed by dependency pattern clustering is that concept pairs of the same semantic relation cannot be merged if they are expressed in different dependency structures.
Related Work
(Turney, 2006) presented an unsupervised algorithm for mining the Web for patterns expressing implicit semantic relations .
Related Work
In addition, to obtain semantic information for concept pairs, we generate dependency patterns to abstract away from different surface realizations of semantic relations .
semantic relations is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Muller, Philippe and Fabre, Cécile and Adam, Clémentine
Abstract
We first set up a human annotation of semantic links with or without contextual information to show the importance of the textual context in evaluating the relevance of semantic similarity, and to assess the prevalence of actual semantic relations between word tokens.
Conclusion
This helps cover non-classical semantic relations which are hard to evaluate with classical resources.
Evaluation of lexical similarity in context
In other words, is there a semantic relation between them, either classical (synonymy, hypernymy, co-hyponymy, meronymy, co-meronymy) or not (the relation can be paraphrased but does not belong to the previous cases)?”
Introduction
They are not suitable for the evaluation of the whole range of semantic relatedness that is exhibited by distributional similarities, which exceeds the limits of classical lexical relations, even though researchers have tried to collect equivalent resources manually, to be used as a gold standard (Weeds, 2003; Bordag, 2008; Anguiano et al., 2011).
Introduction
One advantage of distributional similarities is to exhibit a lot of different semantic relations, not necessarily standard lexical relations.
Introduction
spective, to cover what (Morris and Hirst, 2004) call “non classical lexical semantic relations”.
Related work
We differ from all these evaluation procedures as we do not focus on an essential view of the relatedness of two lexical items, but evaluate the link in a context where the relevance of the link is in question, an “existential” view of semantic relatedness.
semantic relations is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Charton, Eric and Meurs, Marie-Jean and Jean-Louis, Ludovic and Gagnon, Michel
Introduction
Such techniques are referred to as semantic relatedness (Strube and Ponzetto, 2006), collective disambiguation (Hoffart et al., 2011b), or joint disambiguation (Fahrni et al., 2012).
Introduction
For example, if a NE describes a city name like Paris, it is more probable that the correct link for this city name designates Paris (France) rather than Paris (Texas) if a neighbor entity offers candidate links semantically related to Paris (France) like the Seine river or the Champs-Élysées.
Introduction
The paper makes the following novel propositions: 1) the ontology used to evaluate the relatedness of candidates is replaced by internal links and categories from the Wikipedia corpus; 2) the coherence of entities is improved prior to the calculation of semantic relatedness using a co-reference resolution algorithm, and a NE label correction method; 3) the proposed method is robust enough to improve the performance of existing entity linking annotation engines, which are capable of providing a set of ranked candidates for each annotation in a document.
Proposed Algorithm
A basic example of semantic relatedness that should be captured is explained hereafter.
Proposed Algorithm
The purpose of the MDP is to capture this semantic relatedness information contained in the graph of links extracted from Wikipedia pages related to each candidate annotation.
Proposed Algorithm
The calculation combines two scores that we called direct semantic relation score (dsr_score) and common semantic relation score (csr_score):
Related Work
also introduced the notion of semantic relatedness.
Related Work
While all these approaches focus on semantic relation between entities, their potential is limited by the separate mapping of candidate links for each mention.
Related Work
Only some of these systems introduce the semantic relatedness in their methods like the AIDA (Hoffart et al., 2011b) system.
semantic relations is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Davidov, Dmitry and Rappoport, Ari
Abstract
There are many possible different semantic relationships between nominals.
Abstract
Each of the extracted clusters corresponds to some unspecified semantic relationship.
Introduction
Automatic extraction and classification of semantic relationships is a major field of activity, of both practical and theoretical interest.
Introduction
A prominent type of semantic relationships is that holding between nominals. For example, in noun compounds many different semantic relationships are encoded by the same simple form (Girju et al., 2005): ‘dog food’ denotes food consumed by dogs, while ‘summer morn-
Introduction
The semantic relationships between the components of noun compounds and between nominals in general are not easy to categorize rigorously.
Pattern Clustering Algorithm
Our pattern clustering algorithm is designed for the unsupervised definition and discovery of generic semantic relationships.
Related Work
Numerous methods have been devised for classification of semantic relationships, among which those holding between nominals constitute a prominent category.
Related Work
Since (Hearst, 1992), numerous works have used patterns for discovery and identification of instances of semantic relationships (e.g., (Girju et al., 2006; Snow et al., 2006; Banko et al., 2007)).
semantic relations is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Fu, Ruiji and Guo, Jiang and Qin, Bing and Che, Wanxiang and Wang, Haifeng and Liu, Ting
Abstract
This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words.
Background
In this paper, we aim to identify hypernym–hyponym relations using word embeddings, which have been shown to preserve good properties for capturing semantic relationship between words.
Introduction
have attempted to automatically extract semantic relations or to construct taxonomies.
Introduction
Word embeddings have been empirically shown to preserve linguistic regularities, such as the semantic relationship between words (Mikolov et al., 2013b).
Method
Additionally, their experiment results have shown that the Skip-gram model performs best in identifying semantic relationship among words.
Method
Looking at the well-known example: v(king) − v(queen) ≈ v(man) − v(woman), it indicates that the embedding offsets indeed represent the shared semantic relation between the two word pairs.
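The offset regularity in this excerpt can be illustrated with a tiny sketch. The 3-dimensional vectors below are invented for illustration only; real Skip-gram embeddings have hundreds of dimensions learned from text:

```python
import math

# Hypothetical toy embeddings (hand-picked so the analogy holds exactly).
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.2, 0.7],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    # Offset arithmetic: v(b) - v(a) + v(c), e.g. king - man + woman.
    target = [vb - va + vc for va, vb, vc in zip(emb[a], emb[b], emb[c])]
    # Return the nearest vocabulary word (excluding the query words).
    return max((w for w in emb if w not in (a, b, c)),
               key=lambda w: cosine(emb[w], target))

print(analogy("man", "king", "woman"))
# → queen
```

With these toy values the offset king − man + woman lands exactly on v(queen), which is the sense in which the offsets "represent the shared semantic relation" between the pairs.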
Method
The reasons are twofold: (1) Mikolov’s work has shown that the vector offsets imply a certain level of semantic relationship.
Related Work
(2013b) further observe that the semantic relationship of words can be induced by performing simple algebraic operations with word vectors.
semantic relations is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Pilehvar, Mohammad Taher and Navigli, Roberto
Experiments
As mentioned in Section 2.1.1, we build the WN graph by including all the synsets and semantic relations defined in WordNet (e.g., hypernymy and meronymy) and further populate the relation set by connecting a synset to all the other synsets that appear in its disambiguated gloss.
Experiments
The other two resources, i.e., WT and OW, do not provide a reliable network of semantic relations , therefore we used our ontologization approach to construct their corresponding semantic graphs.
Introduction
However, not all lexical resources provide explicit semantic relations between concepts and, hence, machine-readable dictionaries like Wiktionary have first to be transformed into semantic graphs before such graph-based approaches can be applied to them.
Lexical Resource Ontologization
Our ontologization algorithm takes as input a lexicon L and outputs a semantic graph G = (V, E) where, as already defined in Section 2, V is the set of concepts in L and E is the set of semantic relations between these concepts.
Related Work
usually the case with machine-readable dictionaries, where structuring the resource involves the arduous task of connecting lexicographic senses by means of semantic relations .
Resource Alignment
Therefore, we assume that a lexical resource L can be represented as an undirected graph G = (V, E) where V is the set of nodes, i.e., the concepts defined in the resource, and E is the set of undirected edges, i.e., semantic relations between concepts.
Resource Alignment
However, other resources such as Wiktionary do not provide semantic relations between concepts and, therefore, have first to be transformed into semantic networks before they can be aligned using our alignment algorithm.
semantic relations is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Davidov, Dmitry and Rappoport, Ari
Abstract
We present a novel framework for the discovery and representation of general semantic relationships that hold between lexical items.
Conclusion
Each such cluster is a set of patterns that can be used to identify, classify or capture new instances of some unspecified semantic relationship.
Related Work
They aim to find relationship instances rather than identify generic semantic relationships .
SAT-based Evaluation
As discussed in Section 2, the evaluation of semantic relationship structures is nontrivial.
SAT-based Evaluation
The first is the quality (precision/recall) of individual pattern clusters: does each pattern cluster capture lexical item pairs of the same semantic relationship?
SAT-based Evaluation
does it recognize many pairs of the same semantic relationship?
semantic relations is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Tratz, Stephen and Hovy, Eduard
Conclusion
In this paper, we present a novel, fine-grained taxonomy of 43 noun-noun semantic relations, the largest annotated noun compound dataset yet created, and a supervised classification method for automatic noun compound interpretation.
Evaluation
Kim and Baldwin (2005) report an agreement of 52.31% (not κ) for their dataset using Barker and Szpakowicz’s (1998) 20 semantic relations.
Evaluation
(2005) report .58 κ using a set of 35 semantic relations, only 21 of which were used, and a .80 κ score using Lauer’s 8 prepositional paraphrases.
Evaluation
Girju (2007) reports .61 κ agreement using a similar set of 22 semantic relations for noun compound annotation in which the annotators are shown translations of the compound in foreign languages.
Future Work
In the future, we plan to focus on the interpretation of noun compounds with 3 or more nouns, a problem that includes bracketing noun compounds into their dependency structures in addition to noun-noun semantic relation interpretation.
Related Work
In contrast to studies that claim the existence of a relatively small number of semantic relations , Downing (1977) presents a strong case for the existence of an unbounded number of relations.
Taxonomy
Table 1: The semantic relations, their frequency in
semantic relations is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Nakov, Preslav and Hearst, Marti A.
Abstract
We present a simple linguistically-motivated method for characterizing the semantic relations that hold between two nouns.
Method
Given a pair of nouns, we try to characterize the semantic relation between them by leveraging the vast size of the Web to build linguistically-motivated lexically-specific features.
Related Work
2.1 Characterizing Semantic Relations
Related Work
Turney (2006a) presents an unsupervised algorithm for mining the Web for patterns expressing implicit semantic relations .
Related Work
They test their system against both Lauer’s 8 prepositional paraphrases and another set of 21 semantic relations, achieving up to 54% accuracy on the latter.
Relational Similarity Experiments
We further experimented with the SemEval’07 task 4 dataset (Girju et al., 2007), where each example consists of a sentence, a target semantic relation, two nominals to be judged on whether they are in that relation, manually annotated WordNet senses, and the Web query used to obtain the sentence:
semantic relations is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Wang, Baoxun and Wang, Xiaolong and Sun, Chengjie and Liu, Bingquan and Sun, Lin
Experiments
The main reason for the improvements is that the DBN based approach is able to learn semantic relationship between the words in QA pairs from the training set.
Introduction
How to model the semantic relationship between two short texts using simple textual features?
Introduction
The network establishes the semantic relationship for QA pairs by minimizing the answer-to-question reconstruction error.
Related Work
Judging whether a candidate answer is semantically related to the question in the cQA page automatically is a challenging task.
Related Work
The SMT-based methods are effective in modeling the semantic relationship between questions and answers and expanding users’ queries in answer retrieval (Riezler et al., 2007; Berger et al.,
The Deep Belief Network for QA pairs
In this section, we propose a deep belief network for modeling the semantic relationship between questions and their answers.
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Aliabadi, Purya
KurdNet: Extension Plan
• to widen the scope (i.e., including Kurmanji synsets), the coverage (i.e., going beyond Base Concepts), and the richness (supporting additional semantic relations) of the current version.
KurdNet: Shortcomings
3.3 Limited Support for Semantic Relation Types
KurdNet: Shortcomings
As shown in Table 2, there are several WordNet semantic relations for each syntactic category.
KurdNet: Shortcomings
The most important semantic relation in WordNet is Hyponymy, and this relation is the only one supported in KurdNet (Aliabadi et al., 2014).
KurdNet: State-of-the-Art
• Expand: in this model, the synsets are built in correspondence with the WordNet synsets and the semantic relations are directly imported.
Summary
Finding Semantic Relations Creating Graphical User Interface
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Navigli, Roberto and Ponzetto, Simone Paolo
BabelNet
Each edge is labeled with a semantic relation from R, e.g.
BabelNet
, ε}, where ε denotes an unspecified semantic relation.
Conclusions
amounts of semantic relations and can be leveraged to enable multilinguality.
Conclusions
The resource includes millions of semantic relations, mainly from Wikipedia (however, WordNet relations are labeled), and contains almost 3 million concepts (6.7 labels per concept on average).
Introduction
The result is an “encyclopedic dictionary” that provides concepts and named entities lexicalized in many languages and connected with large amounts of semantic relations.
Related Work
However, while providing lexical resources on a very large scale for hundreds of thousands of language pairs, these do not encode semantic relations between concepts denoted by their lexical entries.
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Oh, Jong-Hoon and Uchimoto, Kiyotaka and Torisawa, Kentaro
Motivation
This paper proposes a novel framework for a large-scale, accurate acquisition method for monolingual semantic knowledge, especially for semantic relations between nominals such as hyponymy and meronymy.
Motivation
The acquisition of semantic relations between nominals can be seen as a classification task of semantic relations — to determine whether two nominals hold a particular semantic relation (Girju et al., 2007).
Related Work
Recently, there has been increased interest in semantic relation acquisition from corpora.
Related Work
Some regarded Wikipedia as the corpora and applied handcrafted or machine-learned rules to acquire semantic relations (Herbelot and Copestake, 2006; Kazama and Torisawa, 2007; Ruiz-casado et al., 2005; Nastase and Strube, 2008; Sumida et al., 2008; Suchanek et al., 2007).
Related Work
Several researchers who participated in SemEval-07 (Girju et al., 2007) proposed methods for the classification of semantic relations between simple nominals in English sentences.
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Litkowski, Ken
Class Analyses
Srikumar and Roth (2013) broadened this perspective by considering a class-based approach by collapsing semantically-related senses across prepositions, thereby deriving a semantic relation inventory.
Class Analyses
While their emphasis was on modeling semantic relations, they achieved an accuracy of 83.53 percent for preposition disambiguation.
Class Analyses
As mentioned above, PDEP has a field for the Srikumar semantic relation, initially populated for the SemEval prepositions, and being extended to cover all other prepositions.
Introduction
Section 5 describes how we can use PDEP for the analysis of semantic role and semantic relation inventories.
See http://clg.wlv.ac.uk/proiects/DVC
A key element of Srikumar and Roth was the use of these classes to model semantic relations across prepositions (e.g., grouping all the Temporal senses of the SemEval prepositions).
The Pattern Dictionary of English Prepositions
In TPP, each sense was characterized with its complement and attachment (or governor) properties, its class and semantic relation, substitutable prepositions, its syntactic positions, and any FrameNet frame and frame element usages (where available).
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Özbal, Gözde and Pighin, Daniele and Strapparava, Carlo
Conclusion
Concerning the extension of the capabilities of BRAINSUP, we want to include commonsense knowledge and reasoning to profit from more sophisticated semantic relations and to inject humor on demand.
Evaluation
[YesMo]; 3) Relatedness: is the sentence semantically related to the target domain?
Evaluation
In other cases, such as “A sixth calorie may taste an own good” or “A same sunshine is fewer than a juice of day”, more sophisticated reasoning about syntactic and semantic relations in the output might be necessary in order to enforce the generation of sound and grammatical sentences.
Related work
(2011) slant existing textual expressions to obtain more positively or negatively valenced versions using WordNet (Miller, 1995) semantic relations and SentiWordNet (Esuli and Sebastiani, 2006) annotations.
Related work
Stock and Strapparava (2006) generate acronyms based on lexical substitution via semantic field opposition, rhyme, rhythm and semantic relations.
Related work
(2012) attempt to generate novel poems by replacing words in existing poetry with morphologically compatible words that are semantically related to a target domain.
semantic relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Blanco, Eduardo and Moldovan, Dan
Approach to Semantic Representation of Negation
In this Section, we outline how to incorporate negation into semantic relations.
Approach to Semantic Representation of Negation
4.1 Semantic Relations
Approach to Semantic Representation of Negation
Semantic relations capture connections between concepts and label them according to their nature.
Introduction
Substantial progress has been made, though, especially on detection of semantic relations, ontologies and reasoning methods.
Introduction
Negation has been largely ignored within the area of semantic relations.
semantic relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Sun, Le and Han, Xianpei
Abstract
However, the traditional syntactic tree representation is often too coarse or ambiguous to accurately capture the semantic relation information between two entities.
Introduction
1) The syntactic tree focuses on representing syntactic relation/structure, which is often too coarse or ambiguous to capture the semantic relation information.
Introduction
For example, all three trees in Figure 1 share the same possessive syntactic structure, but express quite different semantic relations: “Mary’s brothers” expresses a PER-SOC Family relation, “Mary’s toys” expresses a Possession relation, and “New York’s airports” expresses a PHYS-Located relation.
Introduction
better capture the semantic relation information between two entities.
semantic relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Kennedy, Alistair and Szpakowicz, Stan
Comparison on applications
We compare the results for the 1911 and 1987 Roget’s Thesauri with a variety of WordNet-based semantic relatedness measures — see Table 5.
Comparison on applications
Other methods of determining sentence semantic relatedness expand term relatedness functions to
Introduction
We ran the well-established tasks of determining semantic relatedness of pairs of terms and identifying synonyms (J armasz and Szpakowicz, 2004).
Introduction
They propose a method of determining semantic relatedness between pairs of terms.
Introduction
Similar experiments were carried out using WordNet in combination with a variety of semantic relatedness functions.
semantic relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Li, Peifeng and Zhu, Qiaoming and Zhou, Guodong
Inferring Inter-Sentence Arguments on Relevant Event Mentions
In this paper, a global argument inference model is proposed to infer those inter-sentence arguments and their roles, incorporating with semantic relations between relevant event mention pairs and argument semantics.
Inferring Inter-Sentence Arguments on Relevant Event Mentions
Therefore, employing such high-level information to capture the semantic relation, not only the syntactic structure, between the trigger and its long-distance arguments is key to improving the performance of Chinese argument identification.
Inferring Inter-Sentence Arguments on Relevant Event Mentions
Hence, the semantic relations among event mentions are helpful to be a bridge to identify those inter-sentence arguments.
Introduction
1) We propose a novel global argument inference model, in which various kinds of event relations are involved to infer more arguments on their semantic relations.
semantic relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Pitler, Emily and Louis, Annie and Nenkova, Ani
Analysis of word pair features
Also note that the only two features predictive of the comparison class (indicated by * in Table 1), the-it and to-it, contain only function words rather than semantically related non-function words.
Conclusion
We show that the features in fact do not capture semantic relation but rather give information about function word co-occurrences.
Word pair features in prior work
Semantic relations vs. function word pairs If the hypothesis for word pair triggers of discourse relations were true, the analysis of unambiguous relations can be used to discover pairs of words with causal or contrastive relations holding between them.
Word pair features in prior work
One approach for reducing the number of features follows the hypothesis of semantic relations between words.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Shnarch, Eyal and Barak, Libby and Dagan, Ido
Background
Many works on machine readable dictionaries utilized definitions to identify semantic relations between words (Ide and Jean, 1993).
Background
Ponzetto and Strube (2007) identified the subsumption (ISA) relation from Wikipedia’s category tags, while in Yago (Suchanek et al., 2007) these tags, redirect links and WordNet were used to identify instances of 14 predefined specific semantic relations.
Background
However this is a rather loose notion, which only indicates that terms are semantically “related” and are likely to co-occur with each other.
Extraction Methods Analysis
An examination of the paths in All-N reveals, beyond standard hyponymy and synonymy, various semantic relations that satisfy lexical reference, such as Location, Occupation and Creation, as illustrated in Table 3.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Hasan, Kazi Saidul and Ng, Vincent
Keyphrase Extraction Approaches
While the aforementioned external resource-based features attempt to encode how salient a candidate keyphrase is, Turney (2003) proposes features that encode the semantic relatedness between two candidate keyphrases.
Keyphrase Extraction Approaches
Noting that candidate keyphrases that are not semantically related to the predicted keyphrases are unlikely to be keyphrases in technical reports, Turney employs coherence features to identify such candidate keyphrases.
Keyphrase Extraction Approaches
Semantic relatedness is encoded in the coherence features as two candidate keyphrases’ pointwise mutual information, which Turney computes by using the Web as a corpus.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Poon, Hoifung
Grounded Unsupervised Semantic Parsing
In particular, dependency edges are often indicative of semantic relations.
Grounded Unsupervised Semantic Parsing
To combat this problem, GUSP introduces a novel dependency-based meaning representation with an augmented state space to account for semantic relations that are nonlocal in the dependency tree.
Grounded Unsupervised Semantic Parsing
GUSP only creates edge states for relational join paths up to length four, as longer paths rarely correspond to meaningful semantic relations.
Introduction
by augmenting the state space to represent semantic relations beyond immediate dependency neighborhood.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Zhong, Zhi and Ng, Hwee Tou
Incorporating Senses into Language Modeling Approaches
Words usually have some semantic relations with others.
Incorporating Senses into Language Modeling Approaches
Synonym relation is one of the semantic relations commonly used to improve IR performance.
Related Work
The utilization of semantic relations has proved to be helpful for IR.
Related Work
ing to investigate the utilization of semantic relations among senses in IR.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Ozbal, Gozde and Strapparava, Carlo
Related Work
HAHAcronym is mainly based on lexical substitution via semantic field opposition, rhyme, rhythm and semantic relations such as antonyms retrieved from WordNet (Stark and Riesenfeld, 1998) for adjectives.
System Description
The task that we deal with requires: 1) reasoning of relations between entities and concepts; 2) understanding the desired properties of entities determined by users; 3) identifying semantically related terms which are also consistent with the objectives of the advertisement; 4) finding terms which are suitable metaphors for the properties that need to be emphasized; 5) reasoning
System Description
4.3 Adding semantically related words
System Description
It should be noted that we do not consider any other statistical or knowledge-based techniques for semantic relatedness.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Vannella, Daniele and Jurgens, David and Scarfini, Daniele and Toscani, Domenico and Navigli, Roberto
Experiments
Two experiments were performed with Infection and TKT: (1) an evaluation of players’ ability to play accurately and to validate semantic relations and image associations and (2) a comprehensive cost comparison.
Introduction
Semantic knowledge bases such as WordNet (Fellbaum, 1998), YAGO (Suchanek et al., 2007), and BabelNet (Navigli and Ponzetto, 2010) provide ontological structure that enables a wide range of tasks, such as measuring semantic relatedness (Budanitsky and Hirst, 2006) and similarity (Pilehvar et al., 2013), paraphrasing (Kauchak and Barzilay, 2006), and word sense disambiguation (Navigli and Ponzetto, 2012; Moro et al., 2014).
Introduction
semantic networks, using two games that operate on complementary sources of information: semantic relations and sense-image mappings.
Video Game with a Purpose Design
Second, BabelNet contains the semantic relations from both WordNet and hyperlinks in Wikipedia; these relations are again an ideal case of validation, as not all hyperlinks connect semantically-related pages in Wikipedia.
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Wu, Fei and Weld, Daniel S.
Abstract
Information-extraction (IE) systems seek to distill semantic relations from natural-language text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data.
Abstract
Like TextRunner, WOE’s extractor eschews lexicalized features and handles an unbounded set of semantic relations.
Problem Definition
An open information extractor is a function from a document, d, to a set of triples, {(arg1, rel, arg2)}, where the args are noun phrases and rel is a textual fragment indicating an implicit, semantic relation between the two noun phrases.
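The triple representation in this definition can be illustrated with a toy extractor. This is a sketch only: the regular expression, the helper name, and the example sentence are all invented here, and WOE actually learns patterns over dependency parses rather than matching surface regexes.

```python
import re

def extract_triples(sentence):
    """Toy open-IE extractor: treat the lowercase text between two
    capitalized noun phrases as the (unspecified) relation phrase.
    A crude illustration of the (arg1, rel, arg2) representation,
    not the parser-based method used by WOE."""
    # Very rough NP pattern: one or more capitalized words.
    np = r"(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)*)"
    triples = []
    for m in re.finditer(rf"({np})\s+([a-z][a-z\s]*?)\s+({np})", sentence):
        arg1, rel, arg2 = m.group(1), m.group(2).strip(), m.group(3)
        triples.append((arg1, rel, arg2))
    return triples

triples = extract_triples("Edison invented the phonograph in Menlo Park")
```

On the example sentence this yields a single triple whose relation phrase is the span between the two capitalized arguments.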
Wikipedia-based Open IE
WOEparse uses a pattern learner to classify whether the shortest dependency path between two noun phrases indicates a semantic relation .
semantic relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Das, Dipanjan and Smith, Noah A.
Conclusion
We have shown that this model is competitive for determining whether there exists a semantic relationship between them, and can be improved by principled combination with more standard lexical overlap approaches.
QG for Paraphrase Modeling
WordNet relation(s) The model next chooses a lexical semantics relation between the aligned source word and the yet-to-be-chosen word ti (line 12).
QG for Paraphrase Modeling
Word Finally, the target word is randomly chosen from among the set of words that bear the lexical semantic relationship just chosen (line 13).
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
Experiments
HIER NE, semantic relations
Introduction
Small filters at higher layers can capture syntactic or semantic relations between noncontinuous phrases that are far apart in the input sentence.
Properties of the Sentence Model
Likewise, the induced graph structure in a DCNN is more general than a parse tree in that it is not limited to syntactically dictated phrases; the graph structure can capture short or long-range semantic relations between words that do not necessarily correspond to the syntactic relations in a parse tree.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Haghighi, Aria and Liang, Percy and Berg-Kirkpatrick, Taylor and Klein, Dan
Analysis
airport to aeropuertos), 30 were semantically related (e.g.
Analysis
Of the true errors, the most common arose from semantically related words which had strong context feature correlations (see table 4(b)).
Analysis
Here, the broad trend is for words which are either translations or semantically related across languages to be close in canonical space.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Gormley, Matthew R. and Mitchell, Margaret and Van Durme, Benjamin and Dredze, Mark
Approaches
The label on the edge indicates the type of semantic relationship.
Approaches
Because each word in a sentence may be in a semantic relationship with any other word (including itself), a sentence of length n has n2 possible edges.
Approaches
In this way, we jointly perform identification (determining whether a semantic relationship exists) and classification (determining the semantic label).
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Wang, Chang and Fan, James
Background
To extract semantic relations from text, three types of approaches have been applied.
Introduction
Using question answering as an example (Wang et al., 2012): in question analysis, the semantic relations between the question focus and each term in the clue can be used to identify the weight of each term so that better search queries can be generated.
Introduction
In candidate answer scoring, relation-based matching algorithms can go beyond explicit lexical and syntactic information to detect implicit semantic relations shared across the question and passages.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tomeh, Nadi and Habash, Nizar and Roth, Ryan and Farra, Noura and Dasigi, Pradeep and Diab, Mona
Discriminative Reranking for OCR
To strike a balance between these two extremes, we introduce a novel model of semantic coherence that is based on a measure of semantic relatedness between pairs of words.
Discriminative Reranking for OCR
We model semantic relatedness between two words using the Information Content (IC) of the pair in a method similar to the one used by Lin (1997) and Lin (1998).
Discriminative Reranking for OCR
During testing, for each phrase in our test set, we measure semantic relatedness of pairs of words using the IC values estimated from the Arabic Gigaword, and normalize their sum by the number of pairs in the phrase to obtain a measure of Semantic Coherence (SC) of the phrase.
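A minimal sketch of such a phrase-coherence score, assuming a PMI-style information-content formula for word pairs and invented toy counts in place of the Arabic Gigaword statistics the authors estimate from:

```python
import math
from itertools import combinations

# Toy co-occurrence statistics (illustrative, not Gigaword counts).
pair_count = {("bank", "money"): 50, ("bank", "river"): 5, ("money", "river"): 1}
word_count = {"bank": 100, "money": 80, "river": 40}
total = 1000  # total tokens in the toy corpus

def information_content(w1, w2):
    """PMI-style information content of a word pair (one common choice;
    Lin (1998) derives IC from corpus frequencies in a similar spirit)."""
    c = pair_count.get((w1, w2)) or pair_count.get((w2, w1)) or 0
    if c == 0:
        return 0.0
    p_pair = c / total
    return math.log(p_pair / ((word_count[w1] / total) * (word_count[w2] / total)))

def semantic_coherence(phrase):
    """Sum of pairwise IC over all word pairs, normalized by the
    number of pairs, as in the description above."""
    pairs = list(combinations(phrase, 2))
    return sum(information_content(a, b) for a, b in pairs) / len(pairs)

sc = semantic_coherence(["bank", "money", "river"])
```

With these counts, the frequently co-occurring pair (bank, money) contributes a large positive IC while the rare pair (money, river) contributes a negative one, and the phrase score is their average.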
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chen, Zhiyuan and Mukherjee, Arjun and Liu, Bing
AKL: Using the Learned Knowledge
We further employ the Generalized Pólya urn (GPU) model (Mahmoud, 2008) which was shown to be effective in leveraging semantically related words (Chen et al., 2013a, Chen et al., 2013b, Mimno et al., 2011).
Learning Quality Knowledge
Each cluster contains semantically related topics likely to indicate the same real-world aspect.
Learning Quality Knowledge
Using two terms in a set is sufficient to cover the semantic relationship of the terms belonging to the same aspect.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Baroni, Marco and Dinu, Georgiana and Kruszewski, Germán
Conclusion
To give just one last example, distributional semanticists have looked at whether certain properties of vectors reflect semantic relations in the expected way: e.g., whether the vectors of hypernyms “distributionally include” the vectors of hyponyms in some mathematically precise sense.
Conclusion
Does the structure of predict vectors mimic meaningful semantic relations?
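The notion of distributional inclusion mentioned above can be made concrete with a toy measure in the spirit of Weeds precision: the fraction of a hyponym's active context features that are also active for the candidate hypernym. The context labels and vectors below are invented for illustration.

```python
# Feature labels, for readability only (invented).
contexts = ["barks", "meows", "flies", "runs", "sleeps"]

# Toy context vectors: the hypernym has a broad distribution,
# the hyponym a narrower one (values are made up).
vec = {
    "animal": [1, 1, 1, 1, 1],
    "dog":    [1, 0, 0, 1, 1],
}

def inclusion(narrow, broad):
    """Fraction of the narrow vector's nonzero features that are
    also nonzero in the broad vector (a Weeds-precision-style score)."""
    active = [i for i, x in enumerate(narrow) if x > 0]
    shared = [i for i in active if broad[i] > 0]
    return len(shared) / len(active)

score_up = inclusion(vec["dog"], vec["animal"])    # hyponym -> hypernym
score_down = inclusion(vec["animal"], vec["dog"])  # reverse direction
```

The asymmetry is the point: the hyponym's contexts are fully covered by the hypernym's, but not vice versa, which is what "distributional inclusion" predicts for a true hypernymy pair.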
Evaluation materials
Semantic relatedness A first set of semantic benchmarks was constructed by asking human subjects to rate the degree of semantic similarity or relatedness between two words on a numerical scale.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Somasundaran, Swapna and Wiebe, Janyce
Discussion
All of them are indeed semantically related to the domain.
Experiments
We find semantic relatedness of each noun in the post with the two main topics of the debate by calculating the Pointwise Mutual Information (PMI) between the term and each topic over the entire web corpus.
Experiments
We use the API provided by the Measures of Semantic Relatedness (MSR)4 engine for this purpose.
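The PMI computation itself is straightforward; the sketch below uses invented document hit counts in place of the web counts behind the MSR engine, and the term/topic names are hypothetical.

```python
import math

def pmi(term_topic_hits, term_hits, topic_hits, total_docs):
    """Pointwise mutual information between a term and a topic,
    estimated from document hit counts:
    PMI = log2( P(term, topic) / (P(term) * P(topic)) )."""
    p_joint = term_topic_hits / total_docs
    p_term = term_hits / total_docs
    p_topic = topic_hits / total_docs
    return math.log2(p_joint / (p_term * p_topic))

# Illustrative counts: the term co-occurs with the topic far more
# often than chance would predict, giving a positive PMI.
score = pmi(term_topic_hits=400, term_hits=1_000, topic_hits=5_000,
            total_docs=100_000)
```

A PMI of 0 would mean the term and topic co-occur exactly as often as independence predicts; positive values indicate semantic relatedness to the debate topic.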
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Plank, Barbara and Moschitti, Alessandro
Abstract
Relation Extraction (RE) is the task of extracting semantic relationships between entities in text.
Introduction
Relation extraction is the task of extracting semantic relationships between entities in text, e.g.
Semantic Syntactic Tree Kernels
For instance, the fragments corresponding to governor from Texas and head of Maryland are intuitively semantically related and should obtain a higher match when compared to mother of them.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Maxwell, K. Tamsin and Oberlander, Jon and Croft, W. Bruce
Introduction
These approaches are motivated by the idea that sentence meaning can be flexibly captured by the syntactic and semantic relations between words, and encoded in dependency parse tree fragments.
Introduction
and ‘level play’ do not have an important semantic relationship relative to the query, yet these catenae are described by parent-child relations that are commonly used to filter paths in text processing applications.
Related work
This is based on the observation that semantically related words have a variety of direct and indirect relations.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Yang, Hui and Callan, Jamie
Introduction
Existing work on automatic taxonomy induction has been conducted under a variety of names, such as ontology learning, semantic class learning, semantic relation classification, and relation extraction.
Related Work
They have been applied to extract various types of lexical and semantic relations, including isa, part-of, sibling, synonym, causal, and many others.
The Features
The features used in this work are indicators of semantic relations between terms.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Biran, Or and McKeown, Kathleen
Conclusion
With this approach, using a stop list does not have a major effect on results for most relation classes, which suggests most of the word pairs affecting performance are content word pairs which may truly be semantically related to the discourse structure.
Introduction
The intuition is that these pairs will tend to represent semantic relationships which are related to the discourse marker (for example, word pairs often appearing around but may tend to be antonyms).
Introduction
We show that our formulation outperforms the original one while requiring less features, and that using a stop list of functional words does not significantly affect performance, suggesting that these features indeed represent semantically related content word pairs.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hassan, Ahmed and Radev, Dragomir R.
Word Polarity
For example, the synonyms of any word are semantically related to it.
Word Polarity
The intuition behind connecting semantically related words is that those words tend to have similar polarity.
Word Polarity
We construct a network where two nodes are linked if they are semantically related.
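One simple way to exploit such a network is to propagate polarity from seed words to their semantically related neighbors. This is a sketch with an invented lexicon and naive neighbor averaging; Hassan and Radev actually use random walks and hitting times over WordNet relations.

```python
# Invented relatedness graph: an edge links semantically related words.
edges = {
    "good": ["great", "nice"],
    "great": ["good", "excellent"],
    "nice": ["good"],
    "excellent": ["great"],
    "bad": ["awful"],
    "awful": ["bad", "terrible"],
    "terrible": ["awful"],
}
seeds = {"good": 1.0, "bad": -1.0}  # known-polarity seed words

def propagate(edges, seeds, iterations=20):
    """Iteratively set each non-seed word's polarity to the mean
    polarity of its neighbors; words connected to a seed converge
    toward that seed's polarity."""
    pol = {w: seeds.get(w, 0.0) for w in edges}
    for _ in range(iterations):
        for w in edges:
            if w in seeds:
                continue  # seed polarities stay fixed
            nbrs = edges[w]
            pol[w] = sum(pol[n] for n in nbrs) / len(nbrs)
    return pol

pol = propagate(edges, seeds)
```

Because the positive and negative components of this toy graph are disconnected, every word ends up near the polarity of the seed in its own component.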
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Jurgens, David and Stevens, Keith
Algorithm Analysis
The variation in scoring illustrates that different algorithms are more effective at capturing certain semantic relations .
Benchmarks
Word association tests measure the semantic relatedness of two words by comparing word space similarity with human judgements.
Benchmarks
This test is notably more challenging for word space models because human ratings are not tied to a specific semantic relation.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Bruni, Elia and Boleda, Gemma and Baroni, Marco and Tran, Nam Khanh
Abstract
Our results show that, while visual models with state-of-the-art computer vision techniques perform worse than textual models in general tasks (accounting for semantic relatedness), they are as good or better models of the meaning of words with visual correlates such as color terms, even in a nontrivial task that involves nonliteral uses of such words.
Introduction
(2) We evaluate the models on general semantic relatedness tasks and on two specific tasks where visual information is highly relevant, as they focus on color terms.
Textual and visual models as general semantic models
Each pair is scored on a [0,1]—normalized semantic relatedness scale via ratings obtained by crowdsourcing on the Amazon Mechanical Turk (refer to the online MEN documentation for more details).
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kireyev, Kirill and Landauer, Thomas K
Rethinking Word Difficulty
The dimensionality reduction has the effect of smoothing out incidental co-occurrences and preserving significant semantic relationships between words.
Rethinking Word Difficulty
The resulting word vectors2 in U are positioned in such a way that semantically related words vectors point in similar directions or, equivalently, have higher cosine values between them.
Rethinking Word Difficulty
In addition to merely measuring semantic relatedness , LSA has been shown to emulate the learning of word meanings from natural language (as can be evidenced by a broad range of applications from synonym tests to automated essay grading), at rates that resemble those of human learners (Laundauer et al, 1997).
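The SVD-plus-cosine machinery behind LSA can be sketched in a few lines. The term-document counts below are invented, and real LSA applies a weighting scheme and keeps hundreds of dimensions rather than k=2; this only illustrates how truncation leaves semantically related words pointing in similar directions.

```python
import numpy as np

# Toy term-document count matrix (rows: words, columns: documents).
words = ["cat", "dog", "car"]
X = np.array([
    [2.0, 0.0, 1.0, 0.0],  # cat
    [1.0, 0.0, 2.0, 0.0],  # dog  (shares documents with cat)
    [0.0, 3.0, 0.0, 2.0],  # car  (appears in different documents)
])

# Truncated SVD: keep k dimensions, smoothing incidental co-occurrences.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vecs = U[:, :k] * s[:k]  # word vectors in the reduced space

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_cat_dog = cosine(word_vecs[0], word_vecs[1])
sim_cat_car = cosine(word_vecs[0], word_vecs[2])
```

Since "cat" and "dog" occur in the same documents, their reduced vectors have a high cosine; "car" occupies different documents and its vector is nearly orthogonal to theirs.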
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chan, Yee Seng and Roth, Dan
Introduction
RE has been frequently studied over the last few years as a supervised learning task, learning from spans of text that are annotated with a set of semantic relations of interest.
Syntactico-Semantic Structures
These four structures cover 80% of the mention pairs having valid semantic relations (we give the detailed breakdown in Section 7) and we show that they are relatively easy to identify using simple rules or patterns.
Syntactico-Semantic Structures
Preposition indicates that the two mentions are semantically related via the existence of a preposition, e.g.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Introduction
The first type is semantic prediction, as evidenced in semantic priming: a word that is preceded by a semantically related prime or a semantically congruous sentence fragment is processed faster (Stanovich and West 1981; van Berkum et al.
Introduction
Comprehenders are faster at naming words that are syntactically compatible with prior context, even when they bear no semantic relationship to the context (Wright and Garrett 1984).
Models of Processing Difficulty
Distributional models of meaning have been commonly used to quantify the semantic relation between a word and its context in computational studies of lexical processing.
semantic relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: