Temporally Anchored Relation Extraction
Garrido, Guillermo and Peñas, Anselmo and Cabaleiro, Bernardo and Rodrigo, Álvaro

Article Structure

Abstract

Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period.

Introduction

A question that arises when extracting a relation is how to capture its temporal validity: Can we assign a period of time when the obtained relation held?

Temporal Anchors

We will denominate a triple (entity, relation name, value) a relation instance.
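
As a concrete illustration, a relation instance can be modelled as a small immutable record; this is a sketch only, and the field names and example values below are hypothetical rather than taken from the paper. Temporal anchoring, discussed below, attaches a validity interval to such a triple.

    from typing import NamedTuple

    class RelationInstance(NamedTuple):
        """A relation instance: (entity, relation name, value)."""
        entity: str    # query entity, e.g. a person or organization name
        relation: str  # target relation name
        value: str     # slot filler found in the text

    # Hypothetical example, for illustration only:
    instance = RelationInstance("John Smith", "per:employee_of", "Acme Corp.")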

Document Representation

We use a rich document representation that employs a graph structure obtained by augmenting the syntactic dependency analysis of the document with semantic information.
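
A rough sketch of such a graph is given below; the class layout and method names are illustrative assumptions, not the paper's code. Nodes carry a type (Event, Time Expression, Named Entity) and annotations, and are linked by dependency and coreference edges. A breadth-first path check is included because the extraction step later relies on the query entity and the candidate value being connected in this graph.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class Node:
        node_id: int
        text: str      # surface chunk, e.g. a multi-word named entity
        ntype: str     # "Event", "TimeExpression", "NamedEntity", ...
        attrs: Dict[str, str] = field(default_factory=dict)  # tense, polarity, gender, ...

    @dataclass
    class DocumentGraph:
        nodes: Dict[int, Node] = field(default_factory=dict)
        # Edges as (source, target, label); label is a dependency relation or "coref".
        edges: List[Tuple[int, int, str]] = field(default_factory=list)

        def neighbors(self, n: int) -> List[int]:
            out = [t for s, t, _ in self.edges if s == n]
            out += [s for s, t, _ in self.edges if t == n]
            return out

        def connected(self, a: int, b: int) -> bool:
            """Breadth-first search for a path between two nodes."""
            seen, frontier = {a}, [a]
            while frontier:
                nxt = []
                for n in frontier:
                    for m in self.neighbors(n):
                        if m == b:
                            return True
                        if m not in seen:
                            seen.add(m)
                            nxt.append(m)
                frontier = nxt
            return a == b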

Distant Supervised Relation Extraction

To perform relation extraction, our proposal follows a distant supervision approach (Mintz et al., 2009), which has also inspired other slot filling systems (Agirre et al., 2009; Surdeanu et al., 2010).
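
A minimal sketch of the distant supervision labelling step follows; the helper name and the plain string-matching shortcut are assumptions for illustration (the paper's own assumption additionally requires a path connecting entity and value in the document graph of section 3).

    from typing import Iterable, List, Set, Tuple

    Seed = Tuple[str, str, str]  # (entity, relation, value) taken from the reference KB

    def label_examples(documents: Iterable[str],
                       seeds: Set[Seed]) -> List[Tuple[str, str, int]]:
        """Distant supervision labelling: a document that mentions both the entity
        and the value of a seed is taken as a positive training example for that
        seed's relation. This assumption is noisy and can be violated
        (Riedel et al., 2010), as the paper's evaluation discusses."""
        examples = []
        for doc in documents:
            for entity, relation, value in seeds:
                if entity in doc and value in doc:
                    examples.append((doc, relation, 1))  # 1 = positive label
        return examples

The labelled examples then feed one binary classifier (extractor) per target relation, which decides whether a candidate pair expresses a valid relation instance.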

Temporal Anchoring of Relations

In this section, we propose and discuss a unified methodological approach for temporal anchoring of relations.
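
One way to make the aggregation concrete is with the 4-tuple anchors used in the evaluation, (t1, t2, t3, t4), where the relation is taken to start within [t1, t2] and end within [t3, t4]. The intersection rule sketched below, which simply tightens each bound across documents, is an illustrative assumption rather than the paper's exact procedure.

    from typing import Iterable, Optional, Tuple

    # (t1, t2, t3, t4) as years; None stands for an unbounded constraint.
    Constraint = Tuple[Optional[int], Optional[int], Optional[int], Optional[int]]

    def aggregate(constraints: Iterable[Constraint]) -> Constraint:
        """Combine per-document constraints by tightening every bound
        (an assumed intersection rule, for illustration only)."""
        t1 = t2 = t3 = t4 = None
        for c1, c2, c3, c4 in constraints:
            t1 = c1 if t1 is None else (t1 if c1 is None else max(t1, c1))
            t2 = c2 if t2 is None else (t2 if c2 is None else min(t2, c2))
            t3 = c3 if t3 is None else (t3 if c3 is None else max(t3, c3))
            t4 = c4 if t4 is None else (t4 if c4 is None else min(t4, c4))
        return (t1, t2, t3, t4)

    # aggregate([(1999, 2001, None, None), (None, 2000, 2005, 2007)])
    # returns (1999, 2000, 2005, 2007)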

Evaluation

We have used for our evaluation the dataset compiled within the TAC-KBP 2011 Temporal Slot Filling Task (Ji et al., 2011).

Related Work

Compiling a Knowledge Base of temporally anchored facts is an open research challenge (Weikum et al., 2011).

Conclusions

This paper introduces the problem of extracting, from unrestricted natural language text, relational knowledge anchored to a temporal span, aggregating temporal evidence from a collection of documents.

Topics

relation extraction

Appears in 24 sentences as: Relation Extraction (5) relation extraction (19)
In Temporally Anchored Relation Extraction
  1. Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period.
    Page 1, “Abstract”
  2. This paper proposes a methodological approach to temporally anchored relation extraction.
    Page 1, “Abstract”
  3. Results show that our implementation for temporal anchoring is able to achieve 69% of the upper bound performance imposed by the relation extraction step.
    Page 1, “Abstract”
  4. As pointed out in (Ling and Weld, 2010), while much research in automatic relation extraction has focused on distilling static facts from text, many of the target relations are in fact fluents, dynamic relations whose truth value is dependent on time (Russell and Norvig, 2010).
    Page 1, “Introduction”
  5. The Temporally anchored relation extraction problem consists in, given a natural language text document corpus, C, a target entity, e, and a target ...
    Page 1, “Introduction”
  6. [fragment of the heading “Temporally Anchored Relation Extraction”]
    Page 1, “Introduction”
  7. [Figure 1 residue: the relation extraction component takes a query entity and target relation as input and produces unlabelled candidate instances]
    Page 2, “Introduction”
  8. For a query entity and target relation, the system first performs relation extraction (section 4); then, we find and aggregate time constraint evidence for the same relation across different documents, to establish a temporal validity anchor interval (section 5).
    Page 2, “Introduction”
  9. To perform relation extraction, our proposal follows a distant supervision approach (Mintz et al., 2009), which has also inspired other slot filling systems (Agirre et al., 2009; Surdeanu et al., 2010).
    Page 3, “Distant Supervised Relation Extraction”
  10. Our system was one of the five that took part in the task. We have evaluated the overall system and the two main components of the architecture: Relation Extraction, and Temporal Anchoring of the relations.
    Page 5, “Evaluation”
  11. 6.1 Evaluation of Relation Extraction
    Page 5, “Evaluation”

relation instance

Appears in 11 sentences as: Relation instance (1) relation instance (8) relation instances (1) relational instance (1)
In Temporally Anchored Relation Extraction
  1. We will denominate a triple (entity, relation name, value) a relation instance.
    Page 2, “Temporal Anchors”
  2. We aim at anchoring relation instances to their temporal validity.
    Page 2, “Temporal Anchors”
  3. Let us assume that each relation instance is valid during a certain temporal interval, I = [t0, tf].
    Page 2, “Temporal Anchors”
  4. Relation instance extraction.
    Page 3, “Distant Supervised Relation Extraction”
  5. Given an input entity and a target relation, we aim at finding a filler value for a relation instance.
    Page 3, “Distant Supervised Relation Extraction”
  6. For each of the relations to extract, a binary classifier (extractor) decides whether the example is a valid relation instance.
    Page 4, “Distant Supervised Relation Extraction”
  7. We assume the input is a relation instance and a set of supporting documents.
    Page 4, “Temporal Anchoring of Relations”
  8. For each document and relational instance, we have to select those temporal expressions that are relevant.
    Page 4, “Temporal Anchoring of Relations”
  9. Now, the mapping of temporal constraints depends on the temporal link to the time expression identified; also, the semantics of the event have to be considered in order to decide the time period associated to a relation instance.
    Page 5, “Temporal Anchoring of Relations”
  10. Second, the distant supervision assumption underlying our approach is that for a seed relation instance (entity, relation, value), any textual mention of entity and value expresses the relation.
    Page 6, “Evaluation”
  11. Under the evaluation metrics proposed by TAC-KBP 2011, if the value of the relation instance is judged as correct, the score for temporal anchoring depends on how well the returned interval matches the one provided in the key.
    Page 6, “Evaluation”

knowledge base

Appears in 7 sentences as: Knowledge Base (2) knowledge base (3) knowledge bases (2)
In Temporally Anchored Relation Extraction
  1. From a reference Knowledge Base (KB), we extract a set of relation triples or seeds: (entity, relation, value), where the relation is one of the target relations.
    Page 3, “Distant Supervised Relation Extraction”
  2. It has been shown that this assumption is more often violated when the training knowledge base and the document collection are of different types, e.g., Wikipedia and newswire (Riedel et al., 2010).
    Page 6, “Evaluation”
  3. Compiling a Knowledge Base of temporally anchored facts is an open research challenge (Weikum et al., 2011).
    Page 7, “Related Work”
  4. There have been attempts to extend an existing knowledge base.
    Page 7, “Related Work”
  5. While ACE required only identifying time expressions and classifying their relation to events, KBP requires explicitly inferring the start/end time of relations, which is a realistic approach in the context of building time-aware knowledge bases.
    Page 8, “Related Work”
  6. Although compiling time-aware knowledge bases is an important open challenge (Weikum et al., 2011), it has remained unexplored until very recently (Wang et al., 2011; Talukdar et al., 2012).
    Page 8, “Conclusions”
  7. We have also studied the limits of the distant supervision approach to relation extraction, showing empirically that its performance depends not only on the nature of reference knowledge base and document corpus (Riedel et al., 2010), but also on the relation to be extracted.
    Page 8, “Conclusions”

distant supervision

Appears in 6 sentences as: distant supervision (6)
In Temporally Anchored Relation Extraction
  1. Our system (see Figure 1) extracts relational facts from text using distant supervision (Mintz et al., 2009) and then anchors the relation to an interval of temporal validity.
    Page 1, “Introduction”
  2. To perform relation extraction, our proposal follows a distant supervision approach (Mintz et al., 2009), which has also inspired other slot filling systems (Agirre et al., 2009; Surdeanu et al., 2010).
    Page 3, “Distant Supervised Relation Extraction”
  3. Our document-level distant supervision assumption is that if entity and value are found in a document graph (see section 3), and there is a path connecting them, then the document expresses the relation.
    Page 3, “Distant Supervised Relation Extraction”
  4. Second, the distant supervision assumption underlying our approach is that for a seed relation instance (entity, relation, value), any textual mention of entity and value expresses the relation.
    Page 6, “Evaluation”
  5. We have also studied the limits of the distant supervision approach to relation extraction, showing empirically that its performance depends not only on the nature of reference knowledge base and document corpus (Riedel et al., 2010), but also on the relation to be extracted.
    Page 8, “Conclusions”
  6. Given a relation between two arguments, if it is not dominant among textual expressions of those arguments, the distant supervision assumption will be more often violated.
    Page 8, “Conclusions”

natural language

Appears in 5 sentences as: natural language (5)
In Temporally Anchored Relation Extraction
  1. Our proposal performs distant supervised learning to extract a set of relations from a natural language corpus, and anchors each of them to an interval of temporal validity, aggregating evidence from documents supporting the relation.
    Page 1, “Abstract”
  2. The Temporally anchored relation extraction problem consists in, given a natural language text document corpus, C, a target entity, e, and a target ...
    Page 1, “Introduction”
  3. This sharp temporal interval fails to capture the imprecision of temporal boundaries conveyed in natural language text.
    Page 2, “Temporal Anchors”
  4. ... their relation to events in natural language, the complete problem of temporally anchored relation extraction remains relatively unexplored.
    Page 7, “Related Work”
  5. This paper introduces the problem of extracting, from unrestricted natural language text, relational knowledge anchored to a temporal span, aggregating temporal evidence from a collection of documents.
    Page 8, “Conclusions”

coreference

Appears in 4 sentences as: Coreference (1) coreference (2) coreferent (1)
In Temporally Anchored Relation Extraction
  1. • Coreference: indicates that two chunks refer to ...
    Page 2, “Document Representation”
  2. The processing includes dependency parsing, named entity recognition and coreference resolution, done with the Stanford CoreNLP software (Klein and Manning, 2003); and events and temporal information extraction, via the TARSQI Toolkit (Verhagen et al., 2005).
    Page 3, “Document Representation”
  3. Each node of GO clusters together coreferent nodes, representing a discourse referent.
    Page 3, “Document Representation”
  4. The coreference edges do not appear in this representation.
    Page 3, “Document Representation”

named entity

Appears in 4 sentences as: Named Entities (1) Named Entity (1) named entity (2)
In Temporally Anchored Relation Extraction
  1. There are three families of types: Events (verbs that describe an action, annotated with tense, polarity and aspect); standardized Time Expressions; and Named Entities, with additional annotations such as gender or age.
    Page 2, “Document Representation”
  2. Most chunks consist of one word; we join words into a chunk (and a node) in two cases: a multi-word named entity, and a verb and its auxiliaries.
    Page 2, “Document Representation”
  3. The processing includes dependency parsing, named entity recognition and coreference resolution, done with the Stanford CoreNLP software (Klein and Manning, 2003); and events and temporal information extraction, via the TARSQI Toolkit (Verhagen et al., 2005).
    Page 3, “Document Representation”
  4. We enforce the Named Entity type of entity and value to match an expected type, predefined for the relation.
    Page 3, “Distant Supervised Relation Extraction”

gold standard

Appears in 3 sentences as: gold standard (3)
In Temporally Anchored Relation Extraction
  1. This gold standard contains the correct responses pooled from the participant systems plus a set of responses manually found by annotators.
    Page 5, “Evaluation”
  2. More precisely, let the correct imprecise anchor interval in the gold standard key be Sk = (k1, k2, k3, k4) and the system response be S = (r1, r2, r3, r4).
    Page 6, “Evaluation”
  3. If the gold standard contains N responses, and the system output M responses, then precision is P = Q(r)/M and recall is R = Q(r)/N (a sketch of this computation follows the list).
    Page 6, “Evaluation”
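
A short sketch of this scoring follows. The per-boundary formula 1 / (1 + |ki - ri|), with differences in years, and the summation of Q over responses are stated here as assumptions about the TAC-KBP 2011 metric, not quoted from the paper.

    from typing import List, Tuple

    Anchor = Tuple[int, int, int, int]  # (t1, t2, t3, t4), in years

    def quality(key: Anchor, response: Anchor) -> float:
        """Assumed per-response quality Q: average over the four boundaries of
        1 / (1 + |k_i - r_i|); this sketch assumes all boundaries are concrete years."""
        return sum(1.0 / (1.0 + abs(k - r)) for k, r in zip(key, response)) / 4.0

    def precision_recall(q_scores: List[float],
                         n_gold: int, m_system: int) -> Tuple[float, float]:
        """P = sum of Q(r) over system responses / M; R = the same sum / N."""
        total = sum(q_scores)
        return (total / m_system if m_system else 0.0,
                total / n_gold if n_gold else 0.0)

    # An exact match gives Q = 1.0; one year off on every boundary gives Q = 0.5.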
