Index of papers in Proc. ACL 2014 that mention
  • relation extraction
Chen, Liwei and Feng, Yansong and Huang, Songfang and Qin, Yong and Zhao, Dongyan
Abstract
Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs.
Abstract
Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-the-art relation extraction models when such clues are applicable to the datasets.
Introduction
In the literature, relation extraction (RE) is usually investigated in a classification style, where relations are simply treated as isolated class labels, while their definitions or background information are sometimes ignored.
Introduction
On the other hand, most previous relation extractors process each entity pair (we will use entity pair and entity tuple interchangeably in the rest of the paper) locally and individually, i.e., the extractor makes decisions solely based on the sentences containing the current entity pair and ignores other related pairs, and therefore has difficulty capturing possible disagreements among different entity pairs.
Introduction
In this paper, we will address how to derive and exploit two categories of these clues: the expected types and the cardinality requirements of a relation’s arguments, in the scenario of relation extraction.
Related Work
Since traditional supervised relation extraction methods (Soderland et al., 1995; Zhao and Grishman, 2005) require manual annotations and are often domain-specific, nowadays many efforts focus on semi-supervised or unsupervised methods (Banko et al., 2007; Fader et al., 2011).
Related Work
To bridge the gaps between the relations extracted from open information extraction and the canonicalized relations in KBs, Yao et al.
The Framework
Since we will focus on open-domain relation extraction, we still follow the distant supervision paradigm to collect our training data guided by a KB, and train the local extractor accordingly.
The Framework
Traditionally, both lexical features and syntactic features are used in relation extraction.
The Framework
In addition to lexical and syntactic features, we also use n-gram features to train our preliminary relation extraction model.
relation extraction is mentioned in 16 sentences in this paper.
Topics mentioned in this paper:
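The type clues described in the excerpts above can be illustrated with a minimal sketch: local predictions are filtered against the expected argument types of each relation. The relation names, type inventory, entities, and scores below are all invented for illustration, not taken from the paper.

```python
# A minimal sketch of exploiting expected-argument-type clues to prune
# local relation predictions, in the spirit of the framework described
# above. All names and scores here are hypothetical.

EXPECTED_TYPES = {
    "born_in": ("PERSON", "LOCATION"),
    "employed_by": ("PERSON", "ORGANIZATION"),
}

def filter_by_type(predictions, entity_types):
    """Keep only (head, relation, tail, score) tuples whose arguments
    satisfy the relation's expected types; unknown relations pass through."""
    kept = []
    for head, rel, tail, score in predictions:
        want = EXPECTED_TYPES.get(rel)
        if want is None or (entity_types[head], entity_types[tail]) == want:
            kept.append((head, rel, tail, score))
    return kept

types = {"Turing": "PERSON", "London": "LOCATION", "IBM": "ORGANIZATION"}
preds = [
    ("Turing", "born_in", "London", 0.9),
    ("IBM", "born_in", "London", 0.4),   # violates the PERSON constraint
]
result = filter_by_type(preds, types)    # keeps only the Turing tuple
```

The paper's actual framework resolves such conflicts jointly (e.g., with cardinality constraints as well); this sketch shows only the simplest type-filtering step.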
Chen, Yanping and Zheng, Qinghua and Zhang, Wei
Abstract
In this paper, we propose an Omni-word feature and a soft constraint method for Chinese relation extraction.
Abstract
The results show a significant improvement in Chinese relation extraction, outperforming other methods in F-score by 10% in 6 relation types and 15% in 18 relation subtypes.
Introduction
The performance of relation extraction is still unsatisfactory, with an F-score of 67.5% for English (23 subtypes) (Zhou et al., 2010).
Introduction
Chinese relation extraction also shows weak performance, with an F-score of about 66.6% in 18 subtypes (Dandan et al., 2012).
Introduction
Therefore, Chinese relation extraction is more difficult.
Related Work
There are two paradigms for extracting the relationship between two entities: Open Relation Extraction (ORE) and Traditional Relation Extraction (TRE) (Banko et al., 2008).
Related Work
In the field of Chinese relation extraction, Liu et al.
Related Work
(2008) experimented with different kernel methods and inferred that simply migrating English kernel methods can result in poor performance in Chinese relation extraction.
relation extraction is mentioned in 21 sentences in this paper.
Topics mentioned in this paper:
Li, Qi and Ji, Heng
Background
The entity mention extraction and relation extraction tasks we are addressing are those of the Automatic Content Extraction (ACE) program.
Background
Most previous research on relation extraction assumed that entity mentions were given. In this work we aim to address the problem of end-to-end entity mention and relation extraction from raw texts.
Background
In order to develop a baseline system representing state-of-the-art pipelined approaches, we trained a linear-chain Conditional Random Fields model (Lafferty et al., 2001) for entity mention extraction and a Maximum Entropy model for relation extraction .
Experiments
Most previous work on ACE relation extraction has reported results on ACE’04 data set.
Experiments
We use the standard F1 measure to evaluate the performance of entity mention extraction and relation extraction .
Experiments
Furthermore, we combine these two criteria to evaluate the performance of end-to-end entity mention and relation extraction .
Introduction
The goal of end-to-end entity mention and relation extraction is to discover relational structures of entity mentions from unstructured texts.
Introduction
This problem has been artificially broken down into several components such as entity mention boundary identification, entity type classification and relation extraction .
relation extraction is mentioned in 16 sentences in this paper.
Topics mentioned in this paper:
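The excerpts above mention evaluating end-to-end extraction with the standard F1 measure. A minimal sketch of that computation over sets of predicted relation triples follows; the example triples are invented, not from the paper's ACE data.

```python
# A minimal sketch of standard precision/recall/F1 over relation
# triples, as used to evaluate end-to-end extraction. The gold and
# predicted triples below are hypothetical examples.

def prf1(gold, predicted):
    """Precision, recall, and F1 over sets of (arg1, relation, arg2) triples."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                      # exact-match true positives
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("Jobs", "founder_of", "Apple"), ("Jobs", "ceo_of", "Apple"),
        ("Gates", "founder_of", "Microsoft"), ("Gates", "born_in", "Seattle")}
pred = {("Jobs", "founder_of", "Apple"), ("Gates", "founder_of", "Microsoft"),
        ("Gates", "founder_of", "Seattle")}
p, r, f1 = prf1(gold, pred)   # tp=2, so P=2/3, R=1/2, F1=4/7
```

End-to-end evaluation in the paper additionally requires the entity mention boundaries and types to be correct; this sketch scores only the relation triples themselves.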
Nguyen, Minh Luan and Tsang, Ivor W. and Chai, Kian Ming A. and Chieu, Hai Leong
Abstract
We propose a two-phase framework to adapt existing relation extraction classifiers to extract relations for new target domains.
Abstract
Our method outperforms numerous baselines and a weakly-supervised relation extraction method on ACE 2004 and YAGO.
Introduction
Recent work on relation extraction has demonstrated that supervised machine learning coupled with intelligent feature engineering can provide state-of-the-art performance (Jiang and Zhai, 2007b).
Introduction
Instead, it can be more cost-effective to adapt an existing relation extraction system to the new domain using a small set of labeled data.
Introduction
This paper considers relation adaptation, where a relation extraction system trained on many source domains is adapted to a new target domain.
Related Work
Relation extraction is usually considered a classification problem: determine if two given entities in a sentence have a given relation.
Related Work
However, purely supervised relation extraction methods assume the availability of sufficient labeled data, which may be costly to obtain for new domains.
Related Work
Bootstrapping approaches to relation extraction (Zhu et al., 2009; Agichtein and Gravano, 2000; Xu et al., 2010; Pasca et al., 2006; Riloff and Jones, 1999) are attractive because they require fewer training instances than supervised approaches.
relation extraction is mentioned in 26 sentences in this paper.
Topics mentioned in this paper:
Nguyen, Thien Huu and Grishman, Ralph
Abstract
Relation extraction suffers from a performance loss when a model is applied to out-of-domain data.
Abstract
This has fostered the development of domain adaptation techniques for relation extraction.
Abstract
This paper evaluates word embeddings and clustering on adapting feature-based relation extraction systems.
Experiments
Our relation extraction system is hierarchical (Bunescu and Mooney, 2005b; Sun et al., 2011) and applies maximum entropy (MaxEnt) in the MALLET toolkit as the machine learning tool.
Introduction
The goal of Relation Extraction (RE) is to detect and classify relation mentions between entity pairs into predefined relation types such as Employment or Citizenship relationships.
Introduction
The only study explicitly targeting this problem so far is by Plank and Moschitti (2013), who find that the out-of-domain performance of kernel-based relation extractors can be improved by embedding semantic similarity information generated from word clustering and latent semantic analysis (LSA) into syntactic tree kernels.
Introduction
We will demonstrate later that the adaptability of relation extractors can benefit significantly from the addition of word cluster
Regularization
Given the more general representations provided by word representations above, how can we learn a relation extractor from the labeled source domain data that generalizes well to new domains?
Regularization
Exploiting the shared interest in generalization performance with traditional machine learning, in domain adaptation for RE, we would prefer the relation extractor that fits the source domain data, but also circumvents the overfitting problem.
relation extraction is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Qian, Longhua and Hui, Haotian and Hu, Ya'nan and Zhou, Guodong and Zhu, Qiaoming
Abstract
In the literature, the mainstream research on relation extraction adopts statistical machine learning methods, which can be grouped into supervised learning (Zelenko et al., 2003; Culotta and Sorensen, 2004; Zhou et al., 2005; Zhang et al., 2006; Qian et al., 2008; Chan and Roth, 2011), semi-supervised learning (Zhang et al., 2004; Chen et al., 2006; Zhou et al., 2008; Qian et al., 2010) and unsupervised learning (Hasegawa et al., 2004; Zhang et al., 2005) in terms of the amount of labeled training data they need.
Abstract
It is trivial to validate, as we will do later in this paper, that active learning can also alleviate the annotation burden for relation extraction in one language while retaining the extraction performance.
Abstract
However, there are cases when we may exploit relation extraction in multiple languages and there are corpora with relation instances annotated for more than one language, such as the ACE RDC 2005 English and Chinese corpora.
relation extraction is mentioned in 16 sentences in this paper.
Topics mentioned in this paper:
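The active-learning setting the excerpts above describe can be sketched with the common uncertainty-sampling heuristic: query the annotator on the unlabeled instance the current model is least confident about. The pool, scores, and pair names below are all illustrative; the paper's actual bilingual strategies are more involved.

```python
# A minimal sketch of pool-based active learning with uncertainty
# sampling, the kind of annotation-saving loop the paper builds on.
# The toy probabilities stand in for a real classifier's outputs.

def most_uncertain(pool, prob):
    """Pick the unlabeled instance whose positive-class probability
    is closest to 0.5, i.e., where the model is least confident."""
    return min(pool, key=lambda x: abs(prob(x) - 0.5))

pool = ["pair_a", "pair_b", "pair_c"]
scores = {"pair_a": 0.95, "pair_b": 0.55, "pair_c": 0.10}
query = most_uncertain(pool, scores.get)   # pair_b is nearest to 0.5
```

In a full loop, the queried instance would be labeled, added to the training set, and the model retrained before the next query.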
Wang, Chang and Fan, James
Abstract
In this paper, we present a manifold model for medical relation extraction.
Background
2.2 Relation Extraction
Introduction
Relation extraction plays a key role in information extraction.
Introduction
To construct a medical relation extraction system, several challenges have to be addressed:
Introduction
The medical corpus underlying our relation extraction system contains 80M sentences (11 gigabytes of pure text).
relation extraction is mentioned in 22 sentences in this paper.
Topics mentioned in this paper:
Fan, Miao and Zhao, Deli and Zhou, Qiang and Liu, Zhiyuan and Zheng, Thomas Fang and Chang, Edward Y.
Abstract
The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features.
Conclusion and Future Work
In this paper, we contributed two noise-tolerant optimization models, DRMC-b and DRMC-l, for the distantly supervised relation extraction task from a novel perspective.
Conclusion and Future Work
Our proposed models also leave open questions for the distantly supervised relation extraction task.
Introduction
Relation Extraction (RE) is the process of generating structured relation knowledge from unstructured natural language texts.
Introduction
Figure 1: Training corpus generated by the basic alignment assumption of distantly supervised relation extraction.
Introduction
In essence, distantly supervised relation extraction is an incomplete multi-label classification task with sparse and noisy features.
Model
Our models for relation extraction are based on the theoretic framework proposed by Goldberg et al.
Related Work
It is the abbreviation for Distant supervision for Relation extraction with Matrix Completion.
Related Work
(2012) proposed a novel approach to multi-instance multi-label learning for relation extraction, which jointly modeled all the sentences in texts and all labels in knowledge bases for a given entity pair.
relation extraction is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
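The "basic alignment assumption" referenced in the Figure 1 caption above can be sketched directly: every sentence mentioning both arguments of a knowledge-base fact inherits that fact's relation labels, which is exactly what makes the resulting multi-label annotations noisy. The KB facts and entities below are invented for illustration.

```python
# A minimal sketch of the basic alignment assumption of distant
# supervision: sentences are labeled with every KB relation that
# holds between the entity pairs they mention. The toy KB is
# hypothetical, not the paper's data.

KB = {
    ("Obama", "Hawaii"): {"born_in"},
    ("Obama", "USA"): {"president_of"},
}

def distant_labels(sentence_entity_pairs):
    """Return the (possibly empty) multi-label annotation for a
    sentence, given the ordered entity pairs it mentions. Sentences
    matching no KB fact get no labels, hence sparse supervision."""
    labels = set()
    for pair in sentence_entity_pairs:
        labels |= KB.get(pair, set())
    return labels

labels = distant_labels([("Obama", "Hawaii"), ("Obama", "USA")])
```

The paper treats the resulting incomplete label matrix as a matrix-completion problem; this sketch covers only the label-generation step that produces that matrix.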
Pershina, Maria and Min, Bonan and Xu, Wei and Grishman, Ralph
Abstract
Distant supervision usually utilizes only unlabeled data and existing knowledge bases to learn relation extraction models.
Available at http://nlp.stanford.edu/software/mimlre.shtml.
Thus, our approach outperforms the state-of-the-art model for relation extraction using much less labeled data than was used by Zhang et al. (2012) to outper-
Conclusions and Future Work
We show that relation extractors trained with distant supervision can benefit significantly from a small number of human labeled examples.
Conclusions and Future Work
We show how to incorporate these guidelines into an existing state-of-the-art model for relation extraction.
Introduction
Relation extraction is the task of tagging semantic relations between pairs of entities from free text.
Introduction
Recently, distant supervision has emerged as an important technique for relation extraction and has attracted increasing attention because of its effective use of readily available databases (Mintz et al., 2009; Bunescu and Mooney, 2007; Snyder and Barzilay, 2007; Wu and Weld, 2007).
Introduction
Distant Supervision for Relation Extraction
The Challenge
Conflicts cannot be limited to those cases where all the features in two examples are the same; this would almost never occur, because of the dozens of features used by a typical relation extractor (Zhou et al., 2005).
relation extraction is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Sun, Le and Han, Xianpei
Abstract
Tree kernel is an effective technique for relation extraction .
Introduction
Relation Extraction (RE) aims to identify a set of predefined relations between pairs of entities in text.
Introduction
In recent years, relation extraction has received considerable research attention.
Introduction
An effective technique is the tree kernel (Zelenko et al., 2003; Zhou et al., 2007; Zhang et al., 2006; Qian et al., 2008), which can exploit syntactic parse tree information for relation extraction.
relation extraction is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
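The tree-kernel idea mentioned in the excerpts above can be sketched as a convolution kernel that counts matching tree fragments, in the style of Collins and Duffy. The tuple encoding, decay factor, and example trees below are illustrative simplifications, not the paper's actual kernel.

```python
# A rough sketch of a subtree-style convolution kernel over parse
# trees, of the kind tree-kernel relation extractors build on.
# Trees are nested tuples: (label, child, child, ...), where a leaf
# child is a plain string.

def nodes(t):
    """Yield every internal node of the tree."""
    yield t
    for c in t[1:]:
        if isinstance(c, tuple):
            yield from nodes(c)

def production(t):
    """A node's grammar production: its label plus its children's labels."""
    return (t[0], tuple(c[0] if isinstance(c, tuple) else c for c in t[1:]))

def C(n1, n2, lam=0.5):
    """Recursive count of common fragments rooted at n1 and n2,
    with decay factor lam down-weighting larger fragments."""
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1:], n2[1:]):   # same arity, since productions match
        if isinstance(c1, tuple) and isinstance(c2, tuple):
            score *= 1.0 + C(c1, c2, lam)
    return score

def tree_kernel(t1, t2, lam=0.5):
    """Sum fragment matches over all node pairs of the two trees."""
    return sum(C(a, b, lam) for a in nodes(t1) for b in nodes(t2))

t = ("NP", ("DT", "the"), ("NN", "dog"))
k = tree_kernel(t, t)   # 2.125 with lam=0.5
```

The systems cited above use richer variants (e.g., path-enclosed trees and feature-enriched nodes); this sketch only shows the core recursive matching.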
Lin, Chen and Miller, Timothy and Kho, Alvin and Bethard, Steven and Dligach, Dmitriy and Pradhan, Sameer and Savova, Guergana
Abstract
This method is evaluated on two temporal relation extraction tasks and demonstrates its advantage over rich syntactic representations.
Background
2.2 Temporal Relation Extraction
Background
Among NLP tasks that use syntactic information, temporal relation extraction has been drawing growing attention because of its wide applications in multiple domains.
Background
Many methods exist for synthesizing syntactic information for temporal relation extraction, and most use traditional tree kernels with various feature representations.
Conclusion
Future work will explore 1) a composite kernel which uses DPK for PET trees, SST for BT and PT, and a feature kernel for flat features, so that different tree kernels can work with their ideal syntactic representations; 2) incorporating dependency structures into tree kernel analysis; and 3) applying DPK to other relation extraction tasks on various corpora.
Evaluation
We applied DPK to two published temporal relation extraction systems: (Miller et al., 2013) in the clinical domain and ClearTK-TimeML (Bethard, 2013) in the general domain, respectively.
Evaluation
Table 2: Comparison of tree kernel performance for temporal relation extraction on THYME and TempEval-2013 data.
relation extraction is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Krishnamurthy, Jayant and Mitchell, Tom M.
Discussion
The parser is trained by jointly optimizing performance on a syntactic parsing task and a distantly-supervised relation extraction task.
Experiments
Using the relation instances and Wikipedia sentences, we constructed a data set for distantly-supervised relation extraction .
Experiments
Comparing against this parser lets us measure the effect of the relation extraction task on syntactic parsing.
Introduction
Our parser is trained by combining a syntactic parsing task with a distantly-supervised relation extraction task.
Parameter Estimation
Training is performed by minimizing a joint objective function combining a syntactic parsing task and a distantly-supervised relation extraction task.
Parameter Estimation
The syntactic component Osyn is a standard syntactic parsing objective constructed using the syntactic resource L. The semantic component Osem is a distantly-supervised relation extraction task based on the semantic constraint from Krishnamurthy and Mitchell (2012).
Parameter Estimation
The semantic objective corresponds to a distantly-supervised relation extraction task that constrains the logical forms produced by the semantic parser.
relation extraction is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Li, Jiwei and Ritter, Alan and Hovy, Eduard
Introduction
Concretely, we cast user profile prediction as binary relation extraction (Brin, 1999), e.g., SPOUSE(User_i, User_j), EDUCATION(User_i, Entity_j) and EMPLOYER(User_i, Entity_j).
Related Work
Rather than relying on mention-level annotations, which are expensive and time consuming to generate, distant supervision leverages readily available structured data sources as a weak source of supervision for relation extraction from related text corpora (Craven et al., 1999).
Related Work
In addition to the wide use in text entity relation extraction (Mintz et al., 2009; Ritter et al., 2013; Hoffmann et al., 2011; Surdeanu et al., 2012; Takamatsu et al., 2012), distant supervision has been applied to multiple fields such as protein relation extraction (Craven et al., 1999; Ravikumar et al., 2012), event extraction from Twitter (Benson et al., 2011), sentiment analysis (Go et al., 2009) and Wikipedia infobox generation (Wu and Weld, 2007).
relation extraction is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Hovy, Dirk
Introduction
The results indicate that the learned types can be used in relation extraction tasks.
Model
Our goal is to find semantic type candidates in the data, and apply them in relation extraction to see which ones are best suited.
Related Work
In relation extraction, we have to identify the relation elements, and then map the arguments to types.
relation extraction is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: