Jointly Identifying Temporal Relations with Markov Logic
Yoshikawa, Katsumasa and Riedel, Sebastian and Asahara, Masayuki and Matsumoto, Yuji

Article Structure

Abstract

Recent work on temporal relation identification has focused on three types of relations: temporal relations between an event and a time expression, between a pair of events, and between an event and the document creation time.

Introduction

Temporal relation identification (or temporal ordering) involves the prediction of temporal order between events and/or time expressions mentioned in text, as well as the relation between events in a document and the time at which the document was created.

Temporal Relation Identification

Temporal relation identification aims to predict the temporal order of events and/or time expressions in documents, as well as their relations to the document creation time (DCT).
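
For concreteness, the following is a minimal sketch of the three kinds of links the task asks for, using the core TempEval labels BEFORE, OVERLAP, and AFTER; the event and time-expression identifiers below are hypothetical and purely illustrative.

    # Minimal illustration of the three relation types (all identifiers are made up).
    # Task A: event <-> time expression, Task B: event <-> document creation time (DCT),
    # Task C: event <-> event.
    DCT = "1998-02-06"  # hypothetical document creation time

    events = {"e1": "announced", "e2": "resigned"}
    timexes = {"t1": "last week"}

    relations = [
        ("e1", "t1", "OVERLAP"),   # Task A: event-time relation
        ("e1", "DCT", "BEFORE"),   # Task B: event-DCT relation
        ("e2", "e1", "BEFORE"),    # Task C: event-event relation
    ]

    for source, target, label in relations:
        print(f"{source} {label} {target}")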

Markov Logic

It has long been clear that local classification alone cannot adequately solve all prediction problems we encounter in practice. This observation motivated a field within machine learning, often referred to as Statistical Relational Learning (SRL), which focuses on the incorporation of global correlations that hold between statistical variables (Getoor and Taskar, 2007).
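
As background, a Markov Logic Network defines a log-linear distribution over possible worlds via weighted first-order formulae; this is the standard formulation of Richardson and Domingos (2006), not anything specific to this paper:

    P(y \mid x) = \frac{1}{Z(x)} \exp\left( \sum_i w_i \, n_i(x, y) \right)

Here w_i is the weight attached to formula i, n_i(x, y) counts the true groundings of that formula in the world (x, y), and Z(x) normalizes over all candidate worlds. Hard constraints correspond to formulae with infinite weight.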

Proposed Markov Logic Network

As stated before, our aim is to jointly tackle Tasks A, B and C of the TempEval challenge.
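
To make the joint formulation concrete, the kind of global rule such a network can contain is a weighted consistency formula that couples the tasks; the predicate names and the generic weight w below are purely illustrative and are not taken from the paper:

    w : \mathit{relDCT}(e_1, \textsc{before}) \wedge \mathit{relDCT}(e_2, \textsc{after}) \Rightarrow \mathit{relE2E}(e_1, e_2, \textsc{before})

Read informally: if event e1 happened before the document creation time and event e2 happens after it, then e1 should be ordered before e2. Because the formula is weighted rather than hard, the model can trade it off against the local evidence for each individual relation.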

Experimental Setup

With our experiments we want to answer two questions: (1) does jointly tackling Tasks A, B, and C help to increase overall accuracy of temporal relation identification?

Results

In the following we will first present our comparison of the local and global model.

Conclusion

In this paper we presented a novel approach to temporal relation identification.

Topics

machine learning

Appears in 8 sentences as: “machine learning” (7), “Machine learning” (1)
In Jointly Identifying Temporal Relations with Markov Logic
  1. By evaluating our model on the TempEval data we show that this approach leads to about 2% higher accuracy for all three types of relations, and to the best results for the task when compared to those of other machine learning based systems.
    Page 1, “Abstract”
  2. With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), a set of documents annotated with temporal information, it became possible to apply machine learning to temporal ordering (Boguraev and Ando, 2005; Mani et al., 2006). These tasks have been regarded as essential for complete document understanding and are useful for a wide range of NLP applications such as question answering and machine translation.
    Page 1, “Introduction”
  3. First, it allows us to use off-the-shelf machine learning software that, up until now, has been mostly focused on the case of local classifiers.
    Page 1, “Introduction”
  4. Hence, in our future work we can focus entirely on temporal relations, as opposed to inference or learning techniques for machine learning.
    Page 2, “Introduction”
  5. With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), machine learning approaches to temporal ordering became possible.
    Page 2, “Temporal Relation Identification”
  6. Here one could argue that “the introduction of the TimeBank” may OVERLAP with “Machine learning becoming possible” because “introduction” can be understood as a process that is not finished with the release of the data but also includes later advertisements and announcements.
    Page 3, “Temporal Relation Identification”
  7. It has long been clear that local classification alone cannot adequately solve all prediction problems we encounter in practice. This observation motivated a field within machine learning, often referred to as Statistical Relational Learning (SRL), which focuses on the incorporation of global correlations that hold between statistical variables (Getoor and Taskar, 2007).
    Page 3, “Markov Logic”
  8. Note that all but the strict scores of Task C are achieved by WVALI (Puscasu, 2007), a hybrid system that combines machine learning and hand-coded rules.
    Page 7, “Results”


ILPs

Appears in 7 sentences as: “ILP” (3), “ILPs” (4)
In Jointly Identifying Temporal Relations with Markov Logic
  1. In order to repair the contradictions that the local classifier predicts, Chambers and Jurafsky (2008) proposed a global framework based on Integer Linear Programming (ILP).
    Page 1, “Introduction”
  2. Instead of combining the output of a set of local classifiers using ILP, we approach the problem of joint temporal relation identification using Markov Logic (Richardson and Domingos, 2006).
    Page 1, “Introduction”
  3. In particular, we do not need to manually construct ILPs for each document we encounter.
    Page 2, “Introduction”
  4. It is clearly possible to incorporate weighted constraints into ILPs, but how to learn the corresponding weights is not obvious.
    Page 2, “Introduction”
  5. Surely it is possible to incorporate weighted constraints into ILPs, but how to learn the corresponding weights is not obvious.
    Page 5, “Proposed Markov Logic Network”
  6. POS tagging is performed with TnT ver. 2.2; for our dependency-based features we use MaltParser 1.0.0. For inference in our models we use Cutting Plane Inference (Riedel, 2008) with ILP as a base solver (see the sketch after this list).
    Page 6, “Experimental Setup”
  7. Second, there is less engineering overhead for us to perform, because we do not need to generate ILPs for each document.
    Page 8, “Conclusion”
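
Item 6 mentions Cutting Plane Inference (Riedel, 2008) with ILP as a base solver. The following is a generic sketch of that inference scheme, assuming hypothetical callbacks solve_ilp and find_violations; it is not the authors' implementation.

    # Generic cutting-plane inference loop (sketch, not the paper's code).
    # solve_ilp(objective, constraints) -> solution: assumed ILP solver callback.
    # find_violations(solution) -> list of constraints the current solution breaks
    # (e.g. violated transitivity rules between predicted temporal relations).
    def cutting_plane_inference(objective, solve_ilp, find_violations, max_rounds=50):
        active = []                              # constraints added so far
        solution = solve_ilp(objective, active)  # first solve without global constraints
        for _ in range(max_rounds):
            violated = find_violations(solution)
            if not violated:                     # all implicit constraints satisfied
                break
            active.extend(violated)              # add only the violated constraints
            solution = solve_ilp(objective, active)
        return solution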


best results

Appears in 4 sentences as: “best results” (4)
In Jointly Identifying Temporal Relations with Markov Logic
  1. By evaluating our model on the TempEval data we show that this approach leads to about 2% higher accuracy for all three types of relations, and to the best results for the task when compared to those of other machine learning based systems.
    Page 1, “Abstract”
  2. In comparison to other participants of the “TempEval” challenge our approach is very competitive: for two out of the three tasks we achieve the best results reported so far, by a margin of at least 2%.
    Page 2, “Introduction”
  3. We see that for Task A, our global model improves an already strong local model to reach the best results both for strict scores (with a margin of 3 percentage points) and relaxed scores (with a margin of 5 percentage points).
    Page 7, “Results”
  4. We also achieve competitive relaxed scores, which are in close range to the TempEval best results.
    Page 7, “Results”
