Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
Hashimoto, Chikara and Torisawa, Kentaro and Kloetzer, Julien and Sano, Motoki and Varga, István and Oh, Jong-Hoon and Kidawara, Yutaka

Article Structure

Abstract

We propose a supervised method of extracting event causalities like conduct slash-and-burn agriculture → exacerbate desertification from the web using semantic relation (between nouns), context, and association features.

Introduction

The world can be seen as a network of causality where people, organizations, and other kinds of entities causally depend on each other.

Related Work

For event causality extraction, clues used by previous methods can roughly be categorized as lexico-syntactic patterns (Abe et al., 2008; Radinsky et al., 2012), words in context (Oh et al., 2013), associations among words (Torisawa, 2006; Riaz and Girju, 2010; Do et al., 2011), and predicate semantics (Hashimoto et al., 2012).

Event Causality Extraction Method

This section describes our event causality extraction method.

Future Scenario Generation Method

Our future scenario generation method creates scenarios by chaining event causalities.

Experiments

5.1 Event Causality Extraction

Conclusion

We proposed a supervised method for event causality extraction that exploits semantic relation, context, and association features.

Topics

semantic relations

Appears in 25 sentences as: Semantic Relation (1) Semantic relation (4) semantic relation (6) Semantic relations (1) semantic relations (14)
In Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
  1. We propose a supervised method of extracting event causalities like conduct slash-and-burn agriculture → exacerbate desertification from the web using semantic relation (between nouns), context, and association features.
    Page 1, “Abstract”
  2. slash-and-burn agriculture and desertification) that take some specific binary semantic relations (e.g.
    Page 1, “Introduction”
  3. Note that semantic relations are not restricted to those directly relevant to causality like A CAUSES B but can be those that might seem irrelevant to causality like A IS AN INGREDIENT FOR B (e.g.
    Page 1, “Introduction”
  4. Our underlying intuition is the observation that event causality tends to hold between two entities linked by semantic relations which roughly entail that one entity strongly affects the other.
    Page 1, “Introduction”
  5. Such semantic relations can be expressed by (otherwise unintuitive) patterns like A IS AN INGREDIENT FOR B.
    Page 1, “Introduction”
  6. As such, semantic relations like the MATERIAL relation can also be useful.
    Page 1, “Introduction”
  7. Besides features similar to those described above, we propose semantic relation features that include those that are not obviously related to causality.
    Page 2, “Related Work”
  8. (2012) used semantic relations to generalize acquired causality instances.
    Page 2, “Related Work”
  9. 3.2.1 Semantic Relation Features
    Page 3, “Event Causality Extraction Method”
  10. We hypothesize that two nouns with some particular semantic relations are more likely to constitute event causality.
    Page 3, “Event Causality Extraction Method”
  11. Below we describe the semantic relations that we believe are likely to constitute event causality (a sketch of how such relations can serve as classifier features follows this list).
    Page 3, “Event Causality Extraction Method”
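
The semantic relation features in items 7–11 can be pictured as binary indicators over the relation patterns that the cause/effect noun pair is known to instantiate. Below is a minimal sketch of that idea; the pattern dictionary, lookup function, and feature names are illustrative assumptions, not the paper's exact feature design.

```python
# Sketch of semantic relation features for the causality classifier: binary
# indicators over the relation patterns that the cause/effect noun pair is
# known to instantiate. The pattern dictionary, lookup function, and feature
# names are illustrative assumptions, not the paper's exact feature design.

def semantic_relation_features(cause_noun, effect_noun, patterns_by_noun_pair):
    """Return one binary feature per semantic-relation pattern (e.g.
    'A CAUSES B', 'A IS AN INGREDIENT FOR B') linking the two nouns."""
    patterns = patterns_by_noun_pair.get((cause_noun, effect_noun), ())
    return {f"semrel={p}": 1 for p in patterns}

# Usage with a toy pattern dictionary of binary semantic relations.
patterns_by_noun_pair = {
    ("slash-and-burn agriculture", "desertification"): ["A CAUSES B"],
    ("yeast", "bread"): ["A IS AN INGREDIENT FOR B"],
}
print(semantic_relation_features("yeast", "bread", patterns_by_noun_pair))
# -> {'semrel=A IS AN INGREDIENT FOR B': 1}
```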

phrase pairs

Appears in 11 sentences as: phrase pair (1) phrase pairs (13)
In Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
  1. Annotators regarded as event causality only phrase pairs that were interpretable as event causality without contexts (i.e., self-contained).
    Page 2, “Introduction”
  2. An event causality candidate is given a causality score CScore, which is the SVM score (distance from the hyperplane) that is normalized to [0,1] by the sigmoid function. Each event causality candidate may be given multiple original sentences, since a phrase pair can appear in multiple sentences, in which case it is given more than one SVM score.
    Page 5, “Event Causality Extraction Method”
  3. A naive approach chains two phrase pairs by exact matching (a sketch of such chaining, together with scenario scoring, follows this list).
    Page 5, “Future Scenario Generation Method”
  4. Scenarios (scs) generated by chaining causally-compatible phrase pairs are scored by Score(sc), which embodies our assumption that an acceptable scenario consists of plausible event causality pairs:
    Page 6, “Future Scenario Generation Method”
  5. These three datasets have no overlap in terms of phrase pairs.
    Page 6, “Experiments”
  6. We observed that CEAsup and CEAuns performed poorly and tended to favor event causality candidates whose phrase pairs were highly relevant to each other but described the contrasts of events rather than event causality (e.g. build a slow muscle and build a fast muscle) probably because their
    Page 7, “Experiments”
  7. phrase pairs described two events that often happen in parallel but are not event causality (e.g. reduce the intake of energy and increase the energy consumption) in the highly ranked event causality candidates of Csuns and Cssup.
    Page 8, “Experiments”
  8. However, as described in Section 1, our event causality criteria are different; since they regarded phrase pairs that were not self-contained as event causality (their annotators checked the original sentences of phrase pairs to see if they were event causality), their judgments tended to be more lenient than ours, which explains the performance difference.
    Page 8, “Experiments”
  9. Event causality: We applied our event causality extraction method to 2,451,254 candidates (Section 3.1) and culled the top 1,200,000 phrase pairs from them (see Section F in the supplementary notes for examples).
    Page 8, “Experiments”
  10. Some phrase pairs have the same noun pairs and the same template polarity pairs (e.g.
    Page 8, “Experiments”
  11. We removed such phrase pairs except those with the highest CScore, and 960,561 phrase pairs remained, from which we generated two- or three-step scenarios that consisted of two or three phrase pairs.
    Page 8, “Experiments”
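
Items 3 and 4 above describe chaining phrase pairs into scenarios and scoring them with Score(sc). The sketch below illustrates this under two assumptions that are not taken from the paper: chaining matches an effect phrase to a cause phrase by exact string equality, and Score(sc) is aggregated as the product of the constituent pairs' CScores.

```python
# Minimal sketch of scenario generation by chaining event causality
# phrase pairs (cause, effect, cscore). Assumptions not taken from the
# paper: chaining uses exact string matching of an effect phrase against
# a cause phrase, and Score(sc) is the product of the pairs' CScores.
# (No cycle handling in this sketch.)

def chain_scenarios(pairs, max_pairs=3):
    """Chain causally-compatible phrase pairs into scenarios of 2..max_pairs pairs."""
    by_cause = {}
    for cause, effect, cscore in pairs:
        by_cause.setdefault(cause, []).append((effect, cscore))

    scenarios = []

    def extend(path, score):
        if len(path) >= 2:                      # a scenario needs at least two pairs
            scenarios.append((list(path), score))
        if len(path) == max_pairs:
            return
        last_effect = path[-1][1]
        for effect, cscore in by_cause.get(last_effect, []):  # naive exact match
            extend(path + [(last_effect, effect)], score * cscore)

    for cause, effect, cscore in pairs:
        extend([(cause, effect)], cscore)
    return scenarios

# Usage: each triple is (cause phrase, effect phrase, CScore in [0, 1]).
pairs = [
    ("conduct slash-and-burn agriculture", "exacerbate desertification", 0.9),
    ("exacerbate desertification", "increase dust storms", 0.7),
]
for path, score in chain_scenarios(pairs):
    print(" -> ".join([path[0][0]] + [effect for _, effect in path]), round(score, 3))
```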

Randomly sample

Appears in 4 sentences as: Randomly sample (2) randomly sample (1) randomly sampled (1)
In Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
  1. For the test data, we randomly sampled 23,650 examples of (event causality candidate, original sentence), among which 3,645 were positive, from 2,451,254 event causality candidates extracted from our web corpus (Section 3.1).
    Page 6, “Experiments”
  2. Note that, for the diversity of the sampled scenarios, our sampling proceeded as follows: (i) Randomly sample a beginning event phrase from the generated scenarios.
    Page 8, “Experiments”
  3. (ii) Randomly sample an effect phrase for the beginning event phrase from the scenarios.
    Page 8, “Experiments”
  4. (iii) Regarding the effect phrase as a cause phrase, randomly sample an effect phrase for it, and repeat (iii) up to the specified number of steps (2 or 3); a sketch of this procedure follows this list.
    Page 8, “Experiments”
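
A minimal sketch of the sampling procedure in items 2–4, assuming the generated scenarios have been indexed as a mapping from a cause phrase to the effect phrases observed for it; the index layout and the function name are assumptions for illustration only.

```python
import random

# Sketch of the diversity-oriented sampling in steps (i)-(iii), assuming the
# generated scenarios have been indexed as {cause phrase: [effect phrases]}.
# The index layout and the function name are illustrative assumptions.

def sample_scenario(effects_by_cause, steps=3, rng=random):
    # (i) Randomly sample a beginning event phrase from the generated scenarios.
    phrase = rng.choice(sorted(effects_by_cause))
    chain = [phrase]
    # (ii) Randomly sample an effect phrase for the beginning event phrase,
    # then (iii) treat each sampled effect as the next cause phrase and repeat
    # up to the specified number of steps (2 or 3).
    for _ in range(steps):
        candidates = effects_by_cause.get(phrase)
        if not candidates:
            break
        phrase = rng.choice(candidates)
        chain.append(phrase)
    return chain

# Usage with a toy index of generated scenarios.
effects_by_cause = {
    "conduct slash-and-burn agriculture": ["exacerbate desertification"],
    "exacerbate desertification": ["increase dust storms"],
}
print(" -> ".join(sample_scenario(effects_by_cause, steps=2)))
```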

manual annotation

Appears in 3 sentences as: manual annotation (2) manually annotating (1)
In Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
  1. To make event causality self-contained, we wrote guidelines for manually annotating training/development/test data.
    Page 2, “Introduction”
  2. We acquired 43,697 excitation templates by Hashimoto et al.’s method and the manual annotation of excitation template candidates. We applied the excitation filter to all 272,025,401 event causality candidates from the web, and 132,528,706 remained.
    Page 3, “Event Causality Extraction Method”
  3. Note that some event causality candidates were not given excitation values for their templates, since some templates were acquired by manual annotation without Hashimoto et al.’s method.
    Page 7, “Experiments”

SVM

Appears in 3 sentences as: SVM (6)
In Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
  1. An event causality candidate is given a causality score CScore, which is the SVM score (distance from the hyperplane) that is normalized to [0,1] by the sigmoid function. Each event causality candidate may be given multiple original sentences, since a phrase pair can appear in multiple sentences, in which case it is given more than one SVM score (a sketch of this normalization follows this list).
    Page 5, “Event Causality Extraction Method”
  2. (2011): CEAuns is an unsupervised method that uses CEA to rank event causality candidates, and CEAsup is a supervised method using SVM and the CEA features, whose ranking is based on the SVM scores.
    Page 7, “Experiments”
  3. The baselines are as follows: Csuns is an unsupervised method that uses Cs for ranking, and Cssup is a supervised method using SVM with Cs as the only feature, ranking by the SVM scores.
    Page 7, “Experiments”
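
Item 1 describes normalizing the SVM decision value (the distance from the hyperplane) to a [0,1] CScore with the sigmoid function. The sketch below shows that normalization; combining the multiple per-sentence scores of one candidate by taking the maximum is an assumption here, not necessarily the paper's choice.

```python
import math

def cscore(svm_score: float) -> float:
    """Normalize an SVM decision value (distance from the hyperplane)
    to [0, 1] with the sigmoid function."""
    return 1.0 / (1.0 + math.exp(-svm_score))

def candidate_cscore(svm_scores):
    """A candidate that appears in several original sentences gets several
    SVM scores; keeping the best one is an assumption made for illustration."""
    return max(cscore(s) for s in svm_scores)

# Usage: three sentences contain the same event causality candidate.
print(candidate_cscore([-0.4, 1.2, 0.3]))  # ~0.77
```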
