Index of papers in Proc. ACL that mention
  • relation extraction
Nguyen, Thien Huu and Grishman, Ralph
Abstract
Relation extraction suffers from a performance loss when a model is applied to out-of-domain data.
Abstract
This has fostered the development of domain adaptation techniques for relation extraction.
Abstract
This paper evaluates word embeddings and word clustering for adapting feature-based relation extraction systems.
Experiments
Our relation extraction system is hierarchical (Bunescu and Mooney, 2005b; Sun et al., 2011) and applies maximum entropy (MaxEnt) in the MALLET toolkit as the machine learning tool.
Introduction
The goal of Relation Extraction (RE) is to detect and classify relation mentions between entity pairs into predefined relation types such as Employment or Citizenship relationships.
Introduction
The only study explicitly targeting this problem so far is by Plank and Moschitti (2013) who find that the out-of-domain performance of kernel-based relation extractors can be improved by embedding semantic similarity information generated from word clustering and latent semantic analysis (LSA) into syntactic tree kernels.
Introduction
We will demonstrate later that the adaptability of relation extractors can benefit significantly from the addition of word cluster
Regularization
Given the more general representations provided by word representations above, how can we learn a relation extractor from the labeled source domain data that generalizes well to new domains?
Regularization
Exploiting the shared interest in generalization performance with traditional machine learning, in domain adaptation for RE we would prefer a relation extractor that fits the source domain data but also circumvents the overfitting problem.
relation extraction is mentioned in 13 sentences in this paper.
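The snippets above describe augmenting a feature-based relation extractor with word representations. A minimal sketch of that idea, with a toy embedding table and invented feature names (not the authors' actual feature templates): discrete lexical features of a mention are extended with real-valued embedding dimensions, and the resulting feature dict could then feed a MaxEnt learner.

```python
# Sketch: add word-embedding dimensions to the discrete lexical features of a
# relation mention. Embeddings and feature names are illustrative only.

TOY_EMBEDDINGS = {
    "soldier": [0.21, -0.40, 0.05],
    "employee": [0.19, -0.38, 0.07],
}

def mention_features(head1, head2, embeddings):
    """Build a sparse feature dict for an entity-pair mention."""
    feats = {f"HM1={head1}": 1.0, f"HM2={head2}": 1.0}    # discrete head-word features
    for head, prefix in ((head1, "EMB1"), (head2, "EMB2")):
        for i, v in enumerate(embeddings.get(head, [])):  # real-valued embedding features
            feats[f"{prefix}_{i}"] = v
    return feats

feats = mention_features("soldier", "employee", TOY_EMBEDDINGS)
```

Because the embedding entries generalize across domains (e.g. "soldier" and "employee" have similar vectors), a classifier over such mixed features can transfer better than one over discrete lexical features alone.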
Takamatsu, Shingo and Sato, Issei and Nakagawa, Hiroshi
Abstract
In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision.
Abstract
In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.
Experiments
Experiment 2 aimed to evaluate how much our wrong label reduction in Section 4 improved the performance of relation extraction.
Introduction
Machine learning approaches have been developed to address relation extraction , which is the task of extracting semantic relations between entities expressed in text.
Introduction
0 We applied our method to Wikipedia articles using Freebase as a knowledge base and found that (i) our model identified patterns expressing a given relation more accurately than baseline methods and (ii) our method led to better extraction performance than the original DS (Mintz et al., 2009) and MultiR (Hoffmann et al., 2011), which is a state-of-the-art multi-instance learning system for relation extraction (see Section 7).
Knowledge-based Distant Supervision
In this section, we describe DS for relation extraction.
Knowledge-based Distant Supervision
Relation extraction seeks to extract relation instances from text.
Knowledge-based Distant Supervision
DS uses a knowledge base to create labeled data for relation extraction by heuristically matching entity pairs.
Related Work
(2009) who used Freebase as a knowledge base by making the DS assumption and trained relation extractors on Wikipedia.
Related Work
Bootstrapping for relation extraction (Riloff and Jones, 1999; Pantel and Pennacchiotti, 2006; Carlson et al., 2010) is related to our method.
Wrong Label Reduction
For relation extraction, we train a classifier for entity pairs using the resultant labeled data.
relation extraction is mentioned in 14 sentences in this paper.
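The distant-supervision (DS) heuristic these snippets describe can be sketched as follows: any sentence containing both entities of a knowledge-base relation instance is labeled with that relation. The toy KB and sentences are illustrative, not the paper's data.

```python
# Sketch of DS labeling: heuristically match KB entity pairs against sentences.
KB = {("Barack Obama", "Hawaii"): "place_of_birth"}

def ds_label(sentences, kb):
    labeled = []
    for sent in sentences:
        for (e1, e2), rel in kb.items():
            if e1 in sent and e2 in sent:   # the DS assumption: co-occurrence => relation
                labeled.append((sent, e1, e2, rel))
    return labeled

sents = ["Barack Obama was born in Hawaii.",
         "Barack Obama visited Hawaii in 2008.",   # co-occurrence, but not a birth
         "Hawaii is an island state."]
data = ds_label(sents, KB)
```

The second sentence is labeled place_of_birth even though it expresses a visit, which is exactly the wrong-label problem the paper's reduction method targets.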
Kim, Seokhwan and Lee, Gary Geunbae
Abstract
Although researchers have conducted extensive studies on relation extraction in the last decade, supervised approaches are still limited because they require large amounts of training data to achieve high performance.
Abstract
To build a relation extractor without significant annotation effort, we can exploit cross-lingual annotation projection, which leverages parallel corpora as external resources for supervision.
Abstract
This paper proposes a novel graph-based projection approach and demonstrates its merits with a Korean relation extraction system based on a dataset projected from an English-Korean parallel corpus.
Introduction
Relation extraction aims to identify semantic relations of entities in a document.
Introduction
Although many supervised machine learning approaches have been successfully applied to relation extraction tasks (Zelenko et al., 2003; Kambhatla, 2004; Bunescu and Mooney, 2005; Zhang et al., 2006), applications of these approaches are still limited because they require a sufficient number of training examples to obtain good extraction results.
Introduction
Although these datasets encourage the development of relation extractors for these major languages, there are few labeled training samples for learning new systems in
relation extraction is mentioned in 23 sentences in this paper.
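The cross-lingual annotation projection this entry describes can be sketched in its simplest form: labels on source-language tokens are carried to target-language tokens through word alignments from a parallel corpus (the graph-based approach in the paper refines this basic direct projection). Labels and alignments below are illustrative.

```python
# Sketch: direct annotation projection across word alignments.
def project_labels(src_labels, alignment, tgt_len):
    """src_labels: per-source-token tags; alignment: src index -> tgt index."""
    tgt_labels = ["O"] * tgt_len
    for s, t in alignment.items():
        if src_labels[s] != "O":        # carry non-O tags along alignment links
            tgt_labels[t] = src_labels[s]
    return tgt_labels

src = ["B-PER", "O", "O", "B-LOC"]      # e.g. "John lives in Seoul"
align = {0: 0, 3: 2}                    # John -> tgt[0], Seoul -> tgt[2]
tgt = project_labels(src, align, tgt_len=4)
```

Noisy alignments make such directly projected labels unreliable, which is the motivation for graph-based smoothing over the projections.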
Garrido, Guillermo and Peñas, Anselmo and Cabaleiro, Bernardo and Rodrigo, Álvaro
Abstract
Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period.
Abstract
This paper proposes a methodological approach to temporally anchored relation extraction.
Abstract
Results show that our implementation for temporal anchoring achieves 69% of the upper-bound performance imposed by the relation extraction step.
Distant Supervised Relation Extraction
To perform relation extraction , our proposal follows a distant supervision approach (Mintz et al., 2009), which has also inspired other slot filling systems (Agirre et al., 2009; Surdeanu et al., 2010).
Evaluation
Our system was one of the five that took part in the task. We have evaluated the overall system and the two main components of the architecture: Relation Extraction and Temporal Anchoring of the relations.
Evaluation
6.1 Evaluation of Relation Extraction
Introduction
As pointed out in (Ling and Weld, 2010), while much research in automatic relation extraction has focused on distilling static facts from text, many of the target relations are in fact fluents, dynamic relations whose truth value is dependent on time (Russell and Norvig, 2010).
Introduction
The Temporally anchored relation extraction problem consists in, given a natural language text document corpus, C, a target entity, e, and a target
Introduction
Temporally Anchored Relation Extraction
relation extraction is mentioned in 24 sentences in this paper.
Li, Qi and Ji, Heng
Background
The entity mention extraction and relation extraction tasks we are addressing are those of the Automatic Content Extraction (ACE) program.
Background
Most previous research on relation extraction assumed that entity mentions were given. In this work we aim to address the problem of end-to-end entity mention and relation extraction from raw texts.
Background
In order to develop a baseline system representing state-of-the-art pipelined approaches, we trained a linear-chain Conditional Random Fields model (Lafferty et al., 2001) for entity mention extraction and a Maximum Entropy model for relation extraction.
Experiments
Most previous work on ACE relation extraction has reported results on ACE’04 data set.
Experiments
We use the standard F1 measure to evaluate the performance of entity mention extraction and relation extraction .
Experiments
Furthermore, we combine these two criteria to evaluate the performance of end-to-end entity mention and relation extraction .
Introduction
The goal of end-to-end entity mention and relation extraction is to discover relational structures of entity mentions from unstructured texts.
Introduction
This problem has been artificially broken down into several components such as entity mention boundary identification, entity type classification and relation extraction .
relation extraction is mentioned in 16 sentences in this paper.
Sun, Ang and Grishman, Ralph and Sekine, Satoshi
Abstract
We present a simple semi-supervised relation extraction system with large-scale word clustering.
Background
3.1 Relation Extraction
Background
One of the well defined relation extraction tasks is the Automatic Content Extraction (ACE) program sponsored by the U.S. government.
Introduction
Relation extraction is an important information extraction task in natural language processing (NLP), with many practical applications.
Introduction
The goal of relation extraction is to detect and characterize semantic relations between pairs of entities in text.
Introduction
For example, a relation extraction system needs to be able to extract an Employment relation between the entities US soldier and US in the phrase US soldier.
Related Work
A second difference between this work and the above ones is that we utilize word clusters in the task of relation extraction which is very different from sequence labeling tasks such as name tagging and chunking.
Related Work
(2005) and Chan and Roth (2010) used word clusters in relation extraction, they shared the same limitation as the above approaches in choosing clusters.
relation extraction is mentioned in 22 sentences in this paper.
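The word-cluster features this entry is built around can be sketched concretely. Brown clustering assigns each word a bit-string path in a binary merge tree, and prefixes of the path give clusters at different granularities; the paths and prefix lengths below are illustrative, not from an actual clustering.

```python
# Sketch: derive cluster features from Brown-cluster bit-string paths.
BROWN_PATHS = {"soldier": "0110101", "sergeant": "0110100", "apple": "1110"}

def cluster_features(word, paths, prefix_lengths=(4, 6)):
    """One indicator feature per path prefix; unseen words get no features."""
    path = paths.get(word)
    if path is None:
        return {}
    return {f"C{p}={path[:p]}": 1.0 for p in prefix_lengths}

f1 = cluster_features("soldier", BROWN_PATHS)
f2 = cluster_features("sergeant", BROWN_PATHS)
```

Here "soldier" and "sergeant" share every cluster prefix, so a classifier that saw only one of them at training time can still generalize to the other, which is the semi-supervised benefit the paper exploits.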
Chen, Yanping and Zheng, Qinghua and Zhang, Wei
Abstract
In this paper, we propose an Omni-word feature and a soft constraint method for Chinese relation extraction.
Abstract
The results show a significant improvement in Chinese relation extraction , outperforming other methods in F-score by 10% in 6 relation types and 15% in 18 relation subtypes.
Introduction
The performance of relation extraction is still unsatisfactory, with an F-score of 67.5% for English (23 subtypes) (Zhou et al., 2010).
Introduction
Chinese relation extraction also performs weakly, with an F-score of about 66.6% on 18 subtypes (Dandan et al., 2012).
Introduction
Therefore, Chinese relation extraction is more difficult.
Related Work
There are two paradigms for extracting the relationship between two entities: Open Relation Extraction (ORE) and Traditional Relation Extraction (TRE) (Banko et al., 2008).
Related Work
In the field of Chinese relation extraction, Liu et al.
Related Work
(2008) experimented with different kernel methods and inferred that simply migrating kernel methods from English can result in poor performance in Chinese relation extraction.
relation extraction is mentioned in 21 sentences in this paper.
Nguyen, Minh Luan and Tsang, Ivor W. and Chai, Kian Ming A. and Chieu, Hai Leong
Abstract
We propose a two-phase framework to adapt existing relation extraction classifiers to extract relations for new target domains.
Abstract
Our method outperforms numerous baselines and a weakly-supervised relation extraction method on ACE 2004 and YAGO.
Introduction
Recent work on relation extraction has demonstrated that supervised machine learning coupled with intelligent feature engineering can provide state-of-the-art performance (Jiang and Zhai, 2007b).
Introduction
Instead, it can be more cost-effective to adapt an existing relation extraction system to the new domain using a small set of labeled data.
Introduction
This paper considers relation adaptation, where a relation extraction system trained on many source domains is adapted to a new target domain.
Related Work
Relation extraction is usually considered a classification problem: determine if two given entities in a sentence have a given relation.
Related Work
However, purely supervised relation extraction methods assume the availability of sufficient labeled data, which may be costly to obtain for new domains.
Related Work
Bootstrapping methods for relation extraction (Zhu et al., 2009; Agichtein and Gravano, 2000; Xu et al., 2010; Pasca et al., 2006; Riloff and Jones, 1999) are attractive because they require fewer training instances than supervised approaches.
relation extraction is mentioned in 26 sentences in this paper.
Chen, Liwei and Feng, Yansong and Huang, Songfang and Qin, Yong and Zhao, Dongyan
Abstract
Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs.
Abstract
Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-the-art relation extraction models when such clues are applicable to the datasets.
Introduction
In the literature, relation extraction (RE) is usually investigated in a classification style, where relations are simply treated as isolated class labels, while their definitions or background information are sometimes ignored.
Introduction
On the other hand, most previous relation extractors process each entity pair (we will use entity pair and entity tuple interchangeably in the rest of the paper) locally and individually, i.e., the extractor makes decisions solely based on the sentences containing the current entity pair and ignores other related pairs, and therefore has difficulty capturing possible disagreements among different entity pairs.
Introduction
In this paper, we will address how to derive and exploit two categories of these clues: the expected types and the cardinality requirements of a relation’s arguments, in the scenario of relation extraction.
Related Work
Since traditional supervised relation extraction methods (Soderland et al., 1995; Zhao and Grishman, 2005) require manual annotations and are often domain-specific, nowadays many efforts focus on semi-supervised or unsupervised methods (Banko et al., 2007; Fader et al., 2011).
Related Work
To bridge the gaps between the relations extracted from open information extraction and the canonicalized relations in KBs, Yao et al.
The Framework
Since we will focus on open-domain relation extraction, we still follow the distant supervision paradigm to collect our training data guided by a KB, and train the local extractor accordingly.
The Framework
Traditionally, both lexical features and syntactic features are used in relation extraction .
The Framework
In addition to lexical and syntactic features, we also use n-gram features to train our preliminary relation extraction model.
relation extraction is mentioned in 16 sentences in this paper.
Qian, Longhua and Hui, Haotian and Hu, Ya'nan and Zhou, Guodong and Zhu, Qiaoming
Abstract
In the literature, the mainstream research on relation extraction adopts statistical machine learning methods, which can be grouped into supervised learning (Zelenko et al., 2003; Culotta and Soresen, 2004; Zhou et al., 2005; Zhang et al., 2006; Qian et al., 2008; Chan and Roth, 2011), semi-supervised learning (Zhang et al., 2004; Chen et al., 2006; Zhou et al., 2008; Qian et al., 2010) and unsupervised learning (Hasegawa et al., 2004; Zhang et al., 2005) in terms of the amount of labeled training data they need.
Abstract
It is trivial to validate, as we will do later in this paper, that active learning can also alleviate the annotation burden for relation extraction in one language while retaining the extraction performance.
Abstract
However, there are cases when we may exploit relation extraction in multiple languages and there are corpora with relation instances annotated for more than one language, such as the ACE RDC 2005 English and Chinese corpora.
relation extraction is mentioned in 16 sentences in this paper.
Plank, Barbara and Moschitti, Alessandro
Abstract
Relation Extraction (RE) is the task of extracting semantic relationships between entities in text.
Abstract
Recent studies on relation extraction are mostly supervised.
Abstract
In this paper, we propose to combine (i) term generalization approaches such as word clustering and latent semantic analysis (LSA) and (ii) structured kernels to improve the adaptability of relation extractors to new text genres/domains.
Computational Structures for RE
(2006), including gold-standard information on entity and mention type substantially improves relation extraction performance.
Experimental Setup
We treat relation extraction as a multi-class classification problem and use SVM-light-TK to train the binary classifiers.
Introduction
Relation extraction is the task of extracting semantic relationships between entities in text, e.g.
Introduction
Recent studies on relation extraction have shown that supervised approaches based on either feature or kernel methods achieve state-of-the-art accuracy (Zelenko et al., 2002; Culotta and Sorensen, 2004;
Introduction
However, to the best of our knowledge, there is almost no work on adapting relation extraction (RE) systems to new domains. There are some prior studies on the related tasks of multi-task transfer learning (Xu et al., 2008; Jiang, 2009) and distant supervision (Mintz et al., 2009), which are clearly related but different: the former is the problem of how to transfer knowledge from old to new relation types, while distant supervision tries to learn new relations from unlabeled text by exploiting weak supervision in the form of a knowledge resource (e.g.
Related Work
Thus, we present a novel application of semantic syntactic tree kernels and Brown clusters for domain adaptation of tree-kernel based relation extraction .
relation extraction is mentioned in 12 sentences in this paper.
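The multi-class setup mentioned in this entry (one binary classifier per relation type, as trained with SVM-light-TK) reduces to a one-vs-rest decision rule at prediction time. A minimal sketch, where the scoring functions are toy stand-ins for trained kernel SVMs:

```python
# Sketch: one-vs-rest prediction from per-relation binary scorers.
def one_vs_rest(scorers, x):
    """scorers: relation label -> scoring function; returns the argmax label."""
    return max(scorers, key=lambda label: scorers[label](x))

scorers = {
    "Employment": lambda x: 1.0 if "works for" in x else -1.0,
    "Citizenship": lambda x: 1.0 if "citizen of" in x else -1.0,
    "NONE": lambda x: 0.0,            # fallback when no relation scorer fires
}
label = one_vs_rest(scorers, "John works for IBM")
```

In the actual system each scorer would be a tree-kernel SVM over the mention's syntactic structure rather than a keyword test.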
Yan, Yulan and Okazaki, Naoaki and Matsuo, Yutaka and Yang, Zhenglu and Ishizuka, Mitsuru
Abstract
This paper presents an unsupervised relation extraction method for discovering and enhancing relations in which a specified concept in Wikipedia participates.
Introduction
Machine learning approaches for relation extraction tasks require substantial human effort, particularly when applied to the broad range of documents, entities, and relations existing on the Web.
Introduction
Linguistic analysis is another effective technology for semantic relation extraction, as described in many reports such as (Kambhatla, 2004); (Bunescu and Mooney, 2005); (Harabagiu et al., 2005); (Nguyen et al., 2007).
Introduction
Currently, linguistic approaches for semantic relation extraction are mostly supervised, relying on pre-specification of the desired relation or initial seed words or patterns from hand-coding.
Related Work
(Rosenfeld and Feldman, 2006) showed that the clusters discovered by URI are useful for seeding a semi-supervised relation extraction system.
Related Work
In this paper, we propose an unsupervised relation extraction method that combines patterns of two types: surface patterns and dependency patterns.
Related Work
Surface patterns are generated from the Web corpus to provide redundancy information for relation extraction .
relation extraction is mentioned in 16 sentences in this paper.
Wang, Chang and Fan, James
Abstract
In this paper, we present a manifold model for medical relation extraction .
Background
2.2 Relation Extraction
Introduction
Relation extraction plays a key role in information extraction.
Introduction
To construct a medical relation extraction system, several challenges have to be addressed:
Introduction
The medical corpus underlying our relation extraction system contains 80M sentences (11 gigabytes of pure text).
relation extraction is mentioned in 22 sentences in this paper.
Jiang, Jing
Abstract
Creating labeled training data for relation extraction is expensive.
Abstract
In this paper, we study relation extraction in a special weakly-supervised setting when we have only a few seed instances of the target relation type we want to extract but we also have a large amount of labeled instances of other relation types.
Abstract
Observing that different relation types can share certain common structures, we propose to use a multitask learning method coupled with human guidance to address this weakly-supervised relation extraction problem.
Introduction
Relation extraction is the task of detecting and characterizing semantic relations between entities from free text.
Introduction
Recent work on relation extraction has shown that supervised machine learning coupled with intelligent feature engineering or kernel design provides state-of-the-art solutions to the problem (Culotta and Sorensen, 2004; Zhou et al., 2005; Bunescu and Mooney, 2005; Qian et al., 2008).
Introduction
While transfer learning was proposed more than a decade ago (Thrun, 1996; Caruana, 1997), its application in natural language processing is still a relatively new territory (Blitzer et al., 2006; Daume III, 2007; Jiang and Zhai, 2007a; Arnold et al., 2008; Dredze and Crammer, 2008), and its application in relation extraction is still unexplored.
Related work
Recent work on relation extraction has been dominated by feature-based and kernel-based supervised learning methods.
Related work
(2005) and Zhao and Grishman (2005) studied various features and feature combinations for relation extraction.
Related work
We systematically explored the feature space for relation extraction (Jiang and Zhai, 2007b).
relation extraction is mentioned in 25 sentences in this paper.
Yang, Bishan and Cardie, Claire
Experiments
We adopted the evaluation metrics for entity and relation extraction from Choi et al.
Experiments
We trained the classifiers for relation extraction using L1-regularized logistic regression with default parameters using the LIBLINEAR (Fan et al., 2008) package.
Experiments
Three relation extraction techniques were used in the baselines:
Introduction
2007; Yang and Cardie, 2012)) and relation extraction techniques have been proposed to extract opinion holders and targets based on their linking relations to the opinion expressions (e.g., Kim and Hovy (2006), Kobayashi et al.
Introduction
We model entity identification as a sequence tagging problem and relation extraction as binary classification.
Model
In this section, we will describe how we model opinion entity identification and opinion relation extraction, and how we combine them in a joint inference model.
Model
3.2 Opinion Relation Extraction
Model
In the following we will not distinguish these two relations, since they can both be characterized as relations between opinion expressions and opinion arguments, and the methods for relation extraction are the same.
relation extraction is mentioned in 20 sentences in this paper.
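The binary relation classification described above (L1-regularized logistic regression via LIBLINEAR) can be sketched with scikit-learn's liblinear solver as a stand-in for the LIBLINEAR package the paper uses. The features and training pairs below are toy examples, not the paper's opinion-relation feature set.

```python
# Sketch: L1-regularized logistic regression for binary opinion-relation
# classification, using scikit-learn's liblinear solver. Toy data only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

train = [({"path=nsubj": 1, "dist=1": 1}, 1),   # linked opinion-argument pair
         ({"path=none": 1, "dist=9": 1}, 0),    # unrelated pair
         ({"path=nsubj": 1, "dist=2": 1}, 1),
         ({"path=none": 1, "dist=7": 1}, 0)]

vec = DictVectorizer()
X = vec.fit_transform([f for f, _ in train])
y = [label for _, label in train]

clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0)
clf.fit(X, y)
pred = clf.predict(vec.transform([{"path=nsubj": 1, "dist=1": 1}]))
```

The L1 penalty drives weights of uninformative features to zero, which suits the sparse lexical/syntactic feature spaces typical of relation extraction.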
Fan, Miao and Zhao, Deli and Zhou, Qiang and Liu, Zhiyuan and Zheng, Thomas Fang and Chang, Edward Y.
Abstract
The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features.
Conclusion and Future Work
In this paper, we contributed two noise-tolerant optimization models, DRMC-b and DRMC-l, for the distantly supervised relation extraction task from a novel perspective.
Conclusion and Future Work
Our proposed models also leave open questions for the distantly supervised relation extraction task.
Introduction
Relation Extraction (RE) is the process of generating structured relation knowledge from unstructured natural language texts.
Introduction
Figure 1: Training corpus generated by the basic alignment assumption of distantly supervised relation extraction.
Introduction
In essence, distantly supervised relation extraction is an incomplete multi-label classification task with sparse and noisy features.
Model
Our models for relation extraction are based on the theoretic framework proposed by Goldberg et al.
Related Work
It is the abbreviation for Distant supervision for Relation extraction with Matrix Completion
Related Work
(2012) proposed a novel approach to multi-instance multi-label learning for relation extraction , which jointly modeled all the sentences in texts and all labels in knowledge bases for a given entity pair.
relation extraction is mentioned in 10 sentences in this paper.
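The incomplete multi-label classification view above is solved in this line of work with low-rank matrix completion. A generic sketch of the underlying machinery (iterative singular-value soft-thresholding in the style of Soft-Impute, not the paper's exact DRMC optimization): unobserved entries of a feature-label matrix are filled in so that the completed matrix has low rank.

```python
# Sketch: low-rank matrix completion by iterative singular-value
# soft-thresholding. Generic machinery, not the DRMC models themselves.
import numpy as np

def complete(M_obs, mask, tau=0.5, iters=200):
    """M_obs: observed entries (0 elsewhere); mask: 1 where observed."""
    X = M_obs.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values -> low rank
        X = mask * M_obs + (1 - mask) * X         # re-impose the observed entries
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank-3 ground truth
mask = (rng.random((20, 20)) < 0.6).astype(float)        # ~60% of entries observed
X = complete(A * mask, mask)
rel_err = np.linalg.norm((1 - mask) * (X - A)) / np.linalg.norm((1 - mask) * A)
```

With enough observed entries of a genuinely low-rank matrix, the unobserved entries are recovered to small relative error, which is the property that makes the formulation tolerant to sparse and noisy features.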
Li, Zhifei and Yarowsky, David
Conclusions
Our method exploits the data co-occurrence phenomenon, which is very useful for relation extraction.
Experimental Results
Table 4: Full-abbreviation Relation Extraction Precision
Experimental Results
To further show the advantage of our relation extraction algorithm (see Section 3.3), in the third column of Table 4 we report the results on a simple baseline.
Experimental Results
As shown in Table 4, the baseline performs significantly worse than our relation extraction algorithm.
Unsupervised Translation Induction for Chinese Abbreviations
3.3 Full-abbreviation Relation Extraction from Chinese Monolingual Corpora
Unsupervised Translation Induction for Chinese Abbreviations
3.3.2 Full-abbreviation Relation Extraction Algorithm
Unsupervised Translation Induction for Chinese Abbreviations
Figure 2 presents the pseudocode of the full-abbreviation relation extraction algorithm.
relation extraction is mentioned in 10 sentences in this paper.
Banko, Michele and Etzioni, Oren
Abstract
Traditional Relation Extraction
Conclusions and Future Work
We also plan to explore the capacity of Open IE to automatically provide labeled training data, when traditional relation extraction is a more appropriate choice.
Hybrid Relation Extraction
4.2 Stacked Relation Extraction
Introduction
Relation Extraction (RE) is the task of recognizing the assertion of a particular relationship between two or more entities in text.
Relation Extraction
Given a relation name, labeled examples of the relation, and a corpus, traditional Relation Extraction (RE) systems output instances of the given relation found in the corpus.
Relation Extraction
Figure 1: Relation Extraction as Sequence Labeling: A CRF is used to identify the relationship, born in, between Kafka and Prague
Relation Extraction
Linear-chain CRFs have been applied to a variety of sequential text processing tasks including named-entity recognition, part-of-speech tagging, word segmentation, semantic role identification, and recently relation extraction (Culotta et al., 2006).
The Nature of Relations in English
In this section, we show that many relationships are consistently expressed using a compact set of relation-independent lexico-syntactic patterns, and quantify their frequency based on a sample of 500 sentences selected at random from an IE training corpus developed by (Bunescu and Mooney, 2007). This observation helps to explain the success of open relation extraction, which learns a relation-independent extraction model as described in Section 3.1.
relation extraction is mentioned in 9 sentences in this paper.
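The compact relation-independent patterns the paper quantifies (e.g. "E1 Verb E2", as in the born-in example above) can be approximated with a shallow matcher over text in which entities are pre-bracketed. A minimal sketch; the bracketing convention and sentences are illustrative, not the paper's representation:

```python
# Sketch: extract (E1, relation phrase, E2) triples with a relation-independent
# surface pattern, assuming entities are already bracketed in the input.
import re

PATTERN = re.compile(r"\[([^\]]+)\]\s+([^\[\]]+?)\s+\[([^\]]+)\]")

def extract(sentence):
    m = PATTERN.search(sentence)
    return (m.group(1), m.group(2), m.group(3)) if m else None

triple = extract("[Kafka] was born in [Prague] .")
```

A real open extractor would constrain the middle span (e.g. to verb-centered phrases via a CRF, as in Figure 1) rather than accept arbitrary text between the entities.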
Branavan, S.R.K. and Kushman, Nate and Lei, Tao and Barzilay, Regina
Abstract
In this paper, we express the semantics of precondition relations extracted from text in terms of planning operations.
Abstract
When applied to a complex virtual world and text describing that world, our relation extraction technique performs on par with a supervised baseline, yielding an F-measure of 66% compared to the baseline’s 65%.
Conclusions
While using planning feedback as its only source of supervision, our method for relation extraction achieves a performance on par with that of a supervised baseline.
Experimental Setup
Evaluation Metrics We use our manual annotations to evaluate the type-level accuracy of relation extraction .
Experimental Setup
Baselines To evaluate the performance of our relation extraction, we compare against an SVM classifier trained on the Gold Relations.
Introduction
The central idea of our work is to express the semantics of precondition relations extracted from text in terms of planning operations.
Introduction
We build on the intuition that the validity of precondition relations extracted from text can be informed by the execution of a low-level planner. This feedback can enable us to learn these relations without annotations.
Introduction
Our results demonstrate the strength of our relation extraction technique — while using planning feedback as its only source of supervision, it achieves a precondition relation extraction accuracy on par with that of a supervised SVM baseline.
Results
Relation Extraction Figure 5 shows the performance of our method on identifying preconditions in text.
relation extraction is mentioned in 9 sentences in this paper.
Pershina, Maria and Min, Bonan and Xu, Wei and Grishman, Ralph
Abstract
Distant supervision usually utilizes only unlabeled data and existing knowledge bases to learn relation extraction models.
Available at http://nlp.stanford.edu/software/mimlre.shtml.
Thus, our approach outperforms the state-of-the-art model for relation extraction using much less labeled data than was used by Zhang et al. (2012) to outper-
Conclusions and Future Work
We show that relation extractors trained with distant supervision can benefit significantly from a small number of human labeled examples.
Conclusions and Future Work
We show how to incorporate these guidelines into an existing state-of-the-art model for relation extraction.
Introduction
Relation extraction is the task of tagging semantic relations between pairs of entities from free text.
Introduction
Recently, distant supervision has emerged as an important technique for relation extraction and has attracted increasing attention because of its effective use of readily available databases (Mintz et al., 2009; Bunescu and Mooney, 2007; Snyder and Barzilay, 2007; Wu and Weld, 2007).
Introduction
Distant Supervision for Relation Extraction
The Challenge
Conflicts cannot be limited to those cases where all the features in two examples are the same; this would almost never occur, because of the dozens of features used by a typical relation extractor (Zhou et al., 2005).
relation extraction is mentioned in 9 sentences in this paper.
Sun, Le and Han, Xianpei
Abstract
Tree kernel is an effective technique for relation extraction .
Introduction
Relation Extraction (RE) aims to identify a set of predefined relations between pairs of entities in text.
Introduction
In recent years, relation extraction has received considerable research attention.
Introduction
An effective technique is the tree kernel (Zelenko et al., 2003; Zhou et al., 2007; Zhang et al., 2006; Qian et al., 2008), which can exploit syntactic parse tree information for relation extraction .
relation extraction is mentioned in 9 sentences in this paper.
Lin, Chen and Miller, Timothy and Kho, Alvin and Bethard, Steven and Dligach, Dmitriy and Pradhan, Sameer and Savova, Guergana
Abstract
This method is evaluated on two temporal relation extraction tasks and demonstrates its advantage over rich syntactic representations.
Background
2.2 Temporal Relation Extraction
Background
Among NLP tasks that use syntactic information, temporal relation extraction has been drawing growing attention because of its wide applications in multiple domains.
Background
Many methods exist for synthesizing syntactic information for temporal relation extraction, and most use traditional tree kernels with various feature representations.
Conclusion
Future work will explore 1) a composite kernel that uses DPK for PET trees, SST for BT and PT, and a feature kernel for flat features, so that different tree kernels can work with their ideal syntactic representations; 2) incorporating dependency structures into tree kernel analysis; and 3) applying DPK to other relation extraction tasks on various corpora.
Evaluation
We applied DPK to two published temporal relation extraction systems: (Miller et al., 2013) in the clinical domain and ClearTK-TimeML (Bethard, 2013) in the general domain, respectively.
Evaluation
Table 2: Comparison of tree kernel performance for temporal relation extraction on THYME and TempEval-2013 data.
relation extraction is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
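The syntactic tree kernels these entries refer to can be illustrated with a minimal sketch of a subset-tree kernel in the style of Collins and Duffy (2001). The tuple-based tree encoding and the toy parse tree below are assumptions made for the example, not code from any of the papers indexed here.

```python
# Sketch of a Collins & Duffy-style subset-tree kernel. Trees are nested
# tuples (label, child1, child2, ...); a leaf is a 1-tuple (label,).
# Toy encoding chosen for this illustration only.

def production(node):
    # A node's grammar production: its label plus its children's labels.
    return (node[0], tuple(child[0] for child in node[1:]))

def delta(n1, n2):
    # Number of common subset trees rooted at n1 and n2.
    if production(n1) != production(n2):
        return 0
    if len(n1) == 1:  # matching leaves contribute exactly one fragment
        return 1
    prod = 1
    for c1, c2 in zip(n1[1:], n2[1:]):
        prod *= 1 + delta(c1, c2)
    return prod

def nodes(tree):
    # Enumerate all nodes of a tree, root first.
    yield tree
    for child in tree[1:]:
        yield from nodes(child)

def tree_kernel(t1, t2):
    # K(t1, t2) = sum of delta over all node pairs.
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))

# Toy parse tree: (S (NP (DT) (NN)) (VP (VB)))
t = ("S", ("NP", ("DT",), ("NN",)), ("VP", ("VB",)))
```

The recursion multiplies child contributions, so the kernel implicitly counts every shared tree fragment without enumerating them; this is the property that lets tree kernels exploit parse structure for relation extraction efficiently.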
Bhagat, Rahul and Ravichandran, Deepak
Abstract
We further show that we can use these paraphrases to generate surface patterns for relation extraction.
Conclusion
While we believe that more work needs to be done to improve the system recall (some of which we are investigating), this seems to be a good first step towards developing a minimally supervised, easy to implement, and scalable relation extraction system.
Experimental Methodology
5.3 Relation Extraction
Experimental Results
Relation Extraction
Experimental Results
Relation Extraction
Experimental Results
Moving to the task of relation extraction, we see from table 5 that our system has a much lower relative recall compared to the baseline.
Introduction
Claim 2: These paraphrases can then be used for generating high precision surface patterns for relation extraction .
Related Work
Another task related to our work is relation extraction.
relation extraction is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Krishnamurthy, Jayant and Mitchell, Tom M.
Discussion
The parser is trained by jointly optimizing performance on a syntactic parsing task and a distantly-supervised relation extraction task.
Experiments
Using the relation instances and Wikipedia sentences, we constructed a data set for distantly-supervised relation extraction .
Experiments
Comparing against this parser lets us measure the effect of the relation extraction task on syntactic parsing.
Introduction
Our parser is trained by combining a syntactic parsing task with a distantly-supervised relation extraction task.
Parameter Estimation
Training is performed by minimizing a joint objective function combining a syntactic parsing task and a distantly-supervised relation extraction task.
Parameter Estimation
The syntactic component Osyn is a standard syntactic parsing objective constructed using the syntactic resource L. The semantic component Osem is a distantly-supervised relation extraction task based on the semantic constraint from Krishnamurthy and Mitchell (2012).
Parameter Estimation
The semantic objective corresponds to a distantly-supervised relation extraction task that constrains the logical forms produced by the semantic parser.
relation extraction is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Alfonseca, Enrique and Filippova, Katja and Delort, Jean-Yves and Garrido, Guillermo
Conclusions
We have described a new distant supervision model with which to learn patterns for relation extraction with no manual intervention.
Experiments and results
In the case of nationality, however, even though the extracted sentences do not support the relation (P@50 = 0.34 for intertext), the new relations extracted are mostly correct (P@50 = 0.86) as most presidents and ministers in the real world have the nationality of the country where they govern.
Introduction
Open Information Extraction (Sekine, 2006; Banko et al., 2007; Bollegala et al., 2010) started as an effort to approach relation extraction in
Introduction
A different family of unsupervised methods for relation extraction is unsupervised semantic parsing, which aims at clustering entity mentions and relation surface forms, thus generating a semantic representation of the texts on which inference may be used.
Introduction
The main contribution of this work is presenting a variant of distant supervision for relation extraction where we do not use heuristics in the selection of the training data.
Unsupervised relational pattern learning
Figure 1: Example of a generated set of document collections from a news corpus for relation extraction.
relation extraction is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Mintz, Mike and Bills, Steven and Snow, Rion and Jurafsky, Daniel
Abstract
Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora.
Introduction
Supervised relation extraction suffers from a number of problems, however.
Introduction
Our algorithm uses Freebase (Bollacker et al., 2008), a large semantic database, to provide distant supervision for relation extraction.
Introduction
lexical (word sequence) features in relation extraction.
Previous work
Except for the unsupervised algorithms discussed above, previous supervised or bootstrapping approaches to relation extraction have typically relied on relatively small datasets, or on only a small number of distinct relations.
Previous work
Many early algorithms for relation extraction used little or no syntactic information.
relation extraction is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
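The Freebase-based distant supervision this entry describes can be sketched as follows. This is a minimal illustration of the labeling heuristic only; the toy knowledge base, sentences, and function name are invented for the example, not taken from the paper.

```python
# Minimal sketch of the distant-supervision labeling heuristic: any
# sentence containing both entities of a known KB relation is treated
# as a (noisy) positive training example for that relation.
# All data below is illustrative, not real Freebase content.

KB = {
    ("Barack Obama", "Hawaii"): "born_in",
    ("Google", "Larry Page"): "founded_by",
}

sentences = [
    "Barack Obama was born in Hawaii .",
    "Larry Page founded Google in 1998 .",
    "Barack Obama visited Hawaii last week .",
]

def distant_label(sentences, kb):
    examples = []
    for sent in sentences:
        for (e1, e2), rel in kb.items():
            if e1 in sent and e2 in sent:
                examples.append((sent, e1, e2, rel))
    return examples

labeled = distant_label(sentences, KB)
# The third sentence gets labeled born_in even though it does not
# express the relation -- exactly the label noise that distant
# supervision accepts in exchange for scale.
```

In the actual approach, features are then aggregated over all sentences mentioning an entity pair, which mitigates the per-sentence noise shown above.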
Wu, Fei and Weld, Daniel S.
Related Work
(Mintz et al., 2009) uses Freebase to provide distant supervision for relation extraction.
Related Work
They applied a similar heuristic by matching Freebase tuples with unstructured sentences (Wikipedia articles in their experiments) to create features for learning relation extractors.
Related Work
(Akbik and Broß, 2009) annotated 10,000 sentences parsed with LinkGrammar and selected 46 general linkpaths as patterns for relation extraction.
Wikipedia-based Open IE
As noted in (de Marneffe and Manning, 2008), this collapsed format often yields simplified patterns which are useful for relation extraction.
relation extraction is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Hoffmann, Raphael and Zhang, Congle and Weld, Daniel S.
Introduction
This paper presents LUCHS, an autonomous, self-supervised system, which learns 5025 relational extractors — an order of magnitude greater than any previous effort.
Introduction
In order to handle sparsity in its heuristically-generated training data, LUCHS generates custom lexicon features when learning each relational extractor.
Introduction
Our experiments demonstrate a high F1 score, 61%, across the 5025 relational extractors learned.
Learning Extractors
We therefore choose a hierarchical approach that combines both article classifiers and relation extractors.
Learning Extractors
is likely to contain a schema, does LUCHS run that schema’s relation extractors.
relation extraction is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Rokhlenko, Oleg and Szpektor, Idan
Comparable Question Mining
3.1 Comparable Relation Extraction
Comparable Question Mining
An important observation for the task of comparable relation extraction is that many relations are complex multiword expressions, and thus their automatic detection is not trivial.
Comparable Question Mining
ger (Lafferty et al., 2001) to the task, since CRF was shown to be state-of-the-art for sequential relation extraction (Mooney and Bunescu, 2005; Culotta et al., 2006; Jindal and Liu, 2006).
Related Work
Our extraction of comparable relations falls within the field of Relation Extraction, in which CRF is a state-of-the-art method (Mooney and Bunescu, 2005; Culotta et al., 2006).
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
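The CRF-based formulation this entry refers to casts relation-phrase detection as sequence labeling over BIO tags. The sketch below shows only the labeling scheme; the dictionary "tagger" stands in for a trained CRF, and all phrases and names are invented for illustration.

```python
# Sketch of BIO-tagged relation-phrase detection. A trained CRF would
# predict these tags from features; here a toy phrase dictionary
# produces them so the output format is concrete. Illustrative only.

RELATION_PHRASES = {("is", "larger", "than"), ("runs", "faster", "than")}

def bio_tag(tokens):
    # B-REL marks the first token of a relation phrase,
    # I-REL its continuation, O everything else.
    tags = ["O"] * len(tokens)
    for phrase in RELATION_PHRASES:
        n = len(phrase)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == phrase:
                tags[i] = "B-REL"
                for j in range(i + 1, i + n):
                    tags[j] = "I-REL"
    return tags

tokens = "a whale is larger than a dolphin".split()
bio_tag(tokens)
# -> ['O', 'O', 'B-REL', 'I-REL', 'I-REL', 'O', 'O']
```

A real CRF replaces the dictionary lookup with per-token feature scoring plus tag-transition scores, which is what makes the formulation robust to multiword relation expressions it has not seen verbatim.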
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
Experiments
Rel-LDA: Generative models have been successfully applied to unsupervised relation extraction (Rink and Harabagiu, 2011; Yao et al., 2011).
Introduction
Relation extraction (RE) is the task of determining semantic relations between entities mentioned in text.
Introduction
Here, the relation extractor simultaneously discovers facts expressed in natural language, and the ontology into which they are assigned.
Related Work
Many generative probabilistic models have been applied to relation extraction.
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Li, Jiwei and Ritter, Alan and Hovy, Eduard
Introduction
Concretely, we cast user profile prediction as binary relation extraction (Brin, 1999), e.g., SPOUSE(User_i, User_j), EDUCATION(User_i, Entity_j) and EMPLOYER(User_i, Entity_j).
Related Work
Rather than relying on mention-level annotations, which are expensive and time consuming to generate, distant supervision leverages readily available structured data sources as a weak source of supervision for relation extraction from related text corpora (Craven et al., 1999).
Related Work
In addition to the wide use in text entity relation extraction (Mintz et al., 2009; Ritter et al., 2013; Hoffmann et al., 2011; Surdeanu et al., 2012; Takamatsu et al., 2012), distant supervision has been applied to multiple
Related Work
fields such as protein relation extraction (Craven et al., 1999; Ravikumar et al., 2012), event extraction from Twitter (Benson et al., 2011), sentiment analysis (Go et al., 2009) and Wikipedia infobox generation (Wu and Weld, 2007).
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Hoffmann, Raphael and Zhang, Congle and Ling, Xiao and Zettlemoyer, Luke and Weld, Daniel S.
Abstract
Knowledge-based weak supervision, using structured data to heuristically label a training corpus, works towards this goal by enabling the automated learning of a potentially unbounded number of relation extractors.
Conclusion
automatically learn a nearly unbounded number of relational extractors.
Related Work
(2009) used Freebase facts to train 100 relational extractors on Wikipedia.
Related Work
Bunescu and Mooney (2007) connect weak supervision with multi-instance learning and extend their relational extraction kernel to this context.
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Chan, Yee Seng and Roth, Dan
Abstract
In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension.
Introduction
Relation extraction (RE) has been defined as the task of identifying a given set of semantic binary relations in text.
Introduction
In this paper we build on the observation that there exists a second dimension to the relation extraction problem that is orthogonal to the relation type dimension: all relation types are expressed in one of several constrained syntactico-semantic structures.
Introduction
In the next section, we describe our relation extraction framework that leverages the syntactico-semantic structures.
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Wang, WenTing and Su, Jian and Tan, Chew Lim
Conclusions and Future Works
Dependency Tree Kernel for Relation Extraction.
Conclusions and Future Works
Kernel Methods for Relation Extraction.
Conclusions and Future Works
Exploring Syntactic Features for Relation Extraction using a Convolution Tree Kernel.
Related Work
Indeed, using kernel methods to mine structural knowledge has shown success in some NLP applications like parsing (Collins and Duffy, 2001; Moschitti, 2004) and relation extraction (Zelenko et al., 2003; Zhang et al., 2006).
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
He, Hua and Barbosa, Denilson and Kondrak, Grzegorz
Conclusion and Future Work
In order to deduce the speaker of the utterance, we need to combine the three pieces of information: (a) the utterance is addressed to Lizzy (vocative prediction), (b) the utterance is produced by Lizzy’s father (pronoun resolution), and (c) Mr. Bennet is the father of Lizzy (relationship extraction).
Conclusion and Future Work
A joint approach to resolving speaker attribution, relationship extraction, co-reference resolution, and alias-to-character mapping would not only improve the accuracy on all these tasks, but also represent a step towards deeper understanding of complex plots and stories.
Extracting Family Relationships
A preliminary manual inspection of the set of relations extracted by this method (Makazhanov et al., 2012) indicates that all of them are correct, and include about 40% of all personal relations that can be inferred by a human reader from the text of the novel.
relation extraction is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Cai, Qingqing and Yates, Alexander
Previous Work
We also leverage synonym-matching techniques for comparing relations extracted from text with Freebase relations.
Previous Work
Our techniques for comparing relations fit into this line of work, but they are novel in their application of these techniques to the task of comparing database relations and relations extracted from text.
Previous Work
Schema matching in the database sense often considers complex matches between relations (Dhamankar et al., 2004), whereas our techniques are currently restricted to matches involving one database relation and one relation extracted from text.
relation extraction is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hovy, Dirk
Introduction
The results indicate that the learned types can be used in relation extraction tasks.
Model
Our goal is to find semantic type candidates in the data, and apply them in relation extraction to see which ones are best suited.
Related Work
In relation extraction , we have to identify the relation elements, and then map the arguments to types.
relation extraction is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Krishnamurthy, Jayant and Mitchell, Tom
Abstract
ConceptResolver performs both word sense induction and synonym resolution on relations extracted from text using an ontology and a small amount of labeled data.
Introduction
The relations extracted by systems like NELL actually apply to concepts, not to noun phrases.
Prior Work
Synonym resolution on relations extracted from web text has been previously studied by Resolver (Yates and Etzioni, 2007), which finds synonyms in relation triples extracted by TextRunner (Banko et al., 2007).
relation extraction is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: