Index of papers in Proc. ACL 2012 that mention
  • semantic relations
Kim, Seokhwan and Lee, Gary Geunbae
Conclusions
Experimental results show that our graph-based projection helped to improve the performance of cross-lingual annotation projection of semantic relations, and that our system outperforms other systems that incorporate monolingual external resources.
Cross-lingual Annotation Projection for Relation Extraction
Although some noise reduction strategies for projecting semantic relations were proposed (Kim et al., 2010), the direct projection approach is still vulnerable to erroneous inputs generated by submodules.
Graph Construction
Graph construction for projecting semantic relationships is more complicated than for part-of-speech tagging, because the unit of projection is a pair of entities rather than a single word or morpheme, which is the unit of alignment.
Graph Construction
The larger the y+ value, the more likely the instance has a semantic relationship.
Graph Construction
The other type of vertices, context vertices, is used for identifying relation descriptors, i.e., contextual subtexts that represent the semantic relationships of the positive instances.
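The two excerpts above describe a graph over instance vertices (entity pairs scored by y+) and context vertices (relation descriptors). Below is a minimal sketch of that idea, assuming a simple averaging propagation over a toy bipartite graph; the edges, seed scores, and update rule are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import defaultdict

# Toy bipartite graph: instance vertices (entity pairs) connected to
# context vertices (contextual subtexts they occur with).
edges = {
    ("Acme", "Seoul"): ["headquartered in", "based in"],
    ("Foo Corp", "Tokyo"): ["based in"],
    ("Kim", "Lee"): ["met with"],
}

# Seed y+ scores projected from the source language (1.0 = positive instance).
y = {("Acme", "Seoul"): 1.0, ("Kim", "Lee"): 0.0}

def propagate(edges, y, iterations=10):
    scores = dict(y)
    # Invert the graph: context vertex -> instances it connects to.
    by_context = defaultdict(list)
    for inst, ctxs in edges.items():
        for c in ctxs:
            by_context[c].append(inst)
    for _ in range(iterations):
        # Context score = mean of its instances' current scores.
        c_scores = {c: sum(scores.get(i, 0.5) for i in insts) / len(insts)
                    for c, insts in by_context.items()}
        # Unlabeled instance score = mean of its contexts' scores.
        for inst, ctxs in edges.items():
            if inst not in y:  # keep seed labels fixed
                scores[inst] = sum(c_scores[c] for c in ctxs) / len(ctxs)
    return scores

print(propagate(edges, y))  # ("Foo Corp", "Tokyo") inherits a high score via "based in"
```

In this toy run the unlabeled pair picks up a high y+ score because it shares a context vertex with a positive seed, which is the intuition behind propagating labels through context vertices rather than projecting each instance directly.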
Implementation
Table 1: Comparison of direct and graph-based projection approaches to extracting semantic relationships for four relation types
Introduction
Relation extraction aims to identify semantic relations of entities in a document.
Introduction
Several datasets that provide manual annotations of semantic relationships are available from the MUC (Grishman and Sundheim, 1996) and ACE (Doddington et al., 2004) projects, but these datasets contain labeled training examples in only a few major languages, including English, Chinese, and Arabic.
Introduction
Because manual annotation of semantic relations for such resource-poor languages is very expensive, we instead consider weakly supervised learning techniques (Riloff and Jones, 1999; Agichtein and Gravano, 2000; Zhang, 2004; Chen et al., 2006) to learn the relation extractors without significant annotation efforts.
semantic relations is mentioned in 11 sentences in this paper.
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
Abstract
We merge these sense clusters into semantic relations using hierarchical agglomerative clustering.
Evaluations
Pairwise metrics measure how often two tuples that are clustered into the same semantic relation are labeled with the same Freebase label.
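A minimal sketch of such a pairwise evaluation, assuming toy cluster assignments and made-up Freebase-style labels; the real evaluation uses the paper's induced relations and Freebase annotations.

```python
from itertools import combinations

# Toy data: tuple id -> predicted relation cluster, and tuple id -> gold label.
predicted = {"t1": "c1", "t2": "c1", "t3": "c2", "t4": "c2"}
gold      = {"t1": "founded", "t2": "founded", "t3": "founded", "t4": "born_in"}

def pairwise_scores(predicted, gold):
    items = sorted(predicted)
    # Pairs placed in the same predicted cluster / sharing the same gold label.
    same_pred = {(a, b) for a, b in combinations(items, 2) if predicted[a] == predicted[b]}
    same_gold = {(a, b) for a, b in combinations(items, 2) if gold[a] == gold[b]}
    tp = len(same_pred & same_gold)
    precision = tp / len(same_pred) if same_pred else 0.0
    recall = tp / len(same_gold) if same_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(pairwise_scores(predicted, gold))  # (0.5, 0.333..., 0.4)
```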
Experiments
DIRT calculates distributional similarities between different paths to find paths which bear the same semantic relation .
Introduction
Relation extraction (RE) is the task of determining semantic relations between entities mentioned in text.
Our Approach
We induce pattern senses by clustering the entity pairs associated with a pattern, and discover semantic relations by clustering these sense clusters.
Our Approach
We take each sense cluster of a pattern as an atomic cluster, and use hierarchical agglomerative clustering to organize them into semantic relations.
Our Approach
Therefore, a semantic relation comprises a set of sense clusters of patterns.
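The excerpts above describe merging pattern-sense clusters into relations with hierarchical agglomerative clustering. A minimal sketch with average-link clustering over toy feature vectors; the features, linkage criterion, and distance cutoff are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# One toy feature vector per pattern-sense cluster (e.g. entity-pair counts).
sense_clusters = ["X acquired Y (sense 1)",
                  "X bought Y (sense 1)",
                  "X was born in Y (sense 1)"]
features = np.array([[5.0, 1.0, 0.0],
                     [4.0, 2.0, 0.0],
                     [0.0, 0.0, 6.0]])

# Average-link agglomerative clustering with cosine distance,
# cut at an assumed distance threshold of 0.5.
Z = linkage(features, method="average", metric="cosine")
relation_ids = fcluster(Z, t=0.5, criterion="distance")

for name, rid in zip(sense_clusters, relation_ids):
    print(rid, name)  # the two acquisition senses end up in one semantic relation
```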
Related Work
Both use distributional similarity to find patterns representing similar semantic relations .
semantic relations is mentioned in 13 sentences in this paper.
Ozbal, Gozde and Strapparava, Carlo
Related Work
HAHAcronym is mainly based on lexical substitution via semantic field opposition, rhyme, rhythm and semantic relations such as antonyms retrieved from WordNet (Stark and Riesenfeld, 1998) for adjectives.
System Description
The task that we deal with requires: 1) reasoning about relations between entities and concepts; 2) understanding the desired properties of entities determined by users; 3) identifying semantically related terms which are also consistent with the objectives of the advertisement; 4) finding terms which are suitable metaphors for the properties that need to be emphasized; 5) reasoning …
System Description
4.3 Adding semantically related words
System Description
It should be noted that we do not consider any other statistical or knowledge-based techniques for semantic relatedness.
semantic relations is mentioned in 4 sentences in this paper.
Zhong, Zhi and Ng, Hwee Tou
Incorporating Senses into Language Modeling Approaches
Words usually have some semantic relations with others.
Incorporating Senses into Language Modeling Approaches
The synonym relation is one of the semantic relations commonly used to improve IR performance.
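A minimal sketch of how a synonym relation can be folded into a query-likelihood language model for IR; the synonym table, interpolation weight alpha, and Dirichlet smoothing constant mu are illustrative assumptions, not the paper's actual model.

```python
import math
from collections import Counter

# Hypothetical synonym table; a real system might draw this from WordNet or senses.
synonyms = {"car": ["automobile"]}

def term_prob(term, doc_counts, doc_len, collection_prob, mu=2000):
    # Dirichlet-smoothed unigram probability of a term under the document model.
    return (doc_counts[term] + mu * collection_prob.get(term, 1e-6)) / (doc_len + mu)

def query_log_likelihood(query, doc_tokens, collection_prob, alpha=0.3):
    doc_counts = Counter(doc_tokens)
    doc_len = len(doc_tokens)
    score = 0.0
    for q in query:
        p = term_prob(q, doc_counts, doc_len, collection_prob)
        # Let synonyms of the query term contribute part of the probability mass.
        for s in synonyms.get(q, []):
            p = (1 - alpha) * p + alpha * term_prob(s, doc_counts, doc_len, collection_prob)
        score += math.log(p)
    return score

doc = "the automobile was parked outside".split()
print(query_log_likelihood(["car"], doc, collection_prob={"car": 0.001, "automobile": 0.001}))
```

Here the document never contains "car", yet it is still partially rewarded because its synonym "automobile" does appear, which is the basic benefit of exploiting semantic relations in retrieval.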
Related Work
The utilization of semantic relations has proved to be helpful for IR.
Related Work
…ing to investigate the utilization of semantic relations among senses in IR.
semantic relations is mentioned in 4 sentences in this paper.
Bruni, Elia and Boleda, Gemma and Baroni, Marco and Tran, Nam Khanh
Abstract
Our results show that, while visual models with state-of-the-art computer vision techniques perform worse than textual models in general tasks (accounting for semantic relatedness), they are as good as or better models of the meaning of words with visual correlates such as color terms, even in a nontrivial task that involves nonliteral uses of such words.
Introduction
(2) We evaluate the models on general semantic relatedness tasks and on two specific tasks where visual information is highly relevant, as they focus on color terms.
Textual and visual models as general semantic models
Each pair is scored on a [0, 1]-normalized semantic relatedness scale via ratings obtained by crowdsourcing on the Amazon Mechanical Turk (refer to the online MEN documentation for more details).
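A minimal sketch of evaluating a semantic model against such crowdsourced relatedness ratings via rank correlation; the toy vectors and ratings below are invented, whereas a real evaluation would use the MEN word pairs and their [0, 1]-normalized scores.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy word vectors standing in for a textual or visual semantic model.
vectors = {
    "sun":    np.array([0.9, 0.1, 0.0]),
    "moon":   np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.1, 0.9, 0.3]),
}
pairs   = [("sun", "moon"), ("sun", "banana"), ("moon", "banana")]
ratings = [0.85, 0.10, 0.15]   # hypothetical [0, 1] human relatedness scores

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(vectors[a], vectors[b]) for a, b in pairs]
rho, _ = spearmanr(model_scores, ratings)
print(f"Spearman correlation with human ratings: {rho:.2f}")
```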
semantic relations is mentioned in 3 sentences in this paper.