Efficient Search for Transformation-based Inference
Asher Stern, Roni Stern, Ido Dagan and Ariel Felner

Article Structure

Abstract

This paper addresses the search problem in textual inference, where systems need to infer one piece of text from another.

Introduction

In many NLP settings it is necessary to identify that a certain semantic inference relation holds between two pieces of text.

Background

Applying sequences of transformations to recognize textual inference was suggested by several works.

Search for Textual Inference

In this section we formalize our search problem and specify a unifying search scheme by which we test several search algorithms in a systematic manner.
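The search problem outlined here — finding a lowest-cost sequence of transformations from a text tree to a hypothesis tree — can be pictured, very roughly, as uniform-cost search over transformation sequences. The sketch below is an illustration under that reading, not the paper's implementation; `lowest_cost_proof` and the toy string transformations in the usage example are invented stand-ins for transformations over parse trees.

```python
import heapq

def lowest_cost_proof(t_T, t_H, transformations):
    """Uniform-cost search for the cheapest transformation sequence
    turning state t_T into state t_H.

    transformations: list of (name, cost, apply) triples, where
    apply(state) returns the new state, or None if not applicable.
    """
    frontier = [(0.0, t_T, [])]              # (cost so far, state, proof so far)
    best_cost = {t_T: 0.0}
    while frontier:
        cost, state, proof = heapq.heappop(frontier)
        if state == t_H:
            return cost, proof               # cheapest proof reached first
        if cost > best_cost.get(state, float("inf")):
            continue                         # stale queue entry
        for name, step_cost, apply in transformations:
            new_state = apply(state)
            if new_state is None:
                continue                     # transformation not applicable
            new_cost = cost + step_cost
            if new_cost < best_cost.get(new_state, float("inf")):
                best_cost[new_state] = new_cost
                heapq.heappush(frontier, (new_cost, new_state, proof + [name]))
    return None                              # no proof found
```

For instance, with string states and two toy moves — append "a" at cost 1.0, or double the string at cost 1.5 — the cheapest "proof" of "" into "aa" is two appends at total cost 2.0.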

Evaluation

In this section we first evaluate the search performance in terms of efficiency (run time) and the quality of the proofs found.

Conclusion

In this paper we investigated the efficiency and proof quality obtained by various search algorithms.

Topics

parse tree

Appears in 6 sentences as: parse tree (6) parse trees (1)
In Efficient Search for Transformation-based Inference
  1. We focus on methods that perform transformations over parse trees , and highlight the search challenge with which they are faced.
    Page 2, “Background”
  2. In our domain, each state is a parse tree , which is expanded by performing all applicable transformations.
    Page 3, “Background”
  3. Let t be a parse tree, and let o be a transformation.
    Page 4, “Search for Textual Inference”
  4. Denoting by tT and tH the text parse tree and the hypothesis parse tree, a proof system has to find a sequence of transformations O with minimal cost such that applying O to tT yields tH. This forms a search problem of finding the lowest-cost proof among all possible proofs.
    Page 4, “Search for Textual Inference”
  5. Next, for a transformation o, applied on a parse tree t, we define required(t, o) as the subset of t’s nodes required for applying o (i.e., in the absence of these nodes, o could not be applied).
    Page 6, “Search for Textual Inference”
  6. The recursive procedure described in Algorithm 2 generates all coherent subsequences of lengths up to d. It should be initially invoked with t, the current state (parse tree) being expanded; the set of all its nodes; d, the maximal required length; and an empty initial sequence.
    Page 6, “Search for Textual Inference”
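Sentence 6 describes a recursive procedure that enumerates all coherent transformation subsequences up to length d. The Python sketch below is a loose illustration of that recursion, not the paper's Algorithm 2: the coherence condition over required nodes is abstracted into each transformation's own applicability check, and `coherent_sequences` is an invented name.

```python
def coherent_sequences(t, transformations, d, prefix=()):
    """Yield every sequence of up to d transformations that is
    applicable, step by step, starting from state t.

    transformations: list of (name, apply) pairs, where apply(state)
    returns the new state, or None if not applicable.
    """
    if d == 0:
        return
    for name, apply in transformations:
        new_t = apply(t)
        if new_t is None:          # transformation not applicable here
            continue
        seq = prefix + (name,)
        yield seq                  # emit the subsequence itself
        # Recurse on the resulting state with a shorter length budget.
        yield from coherent_sequences(new_t, transformations, d - 1, seq)
```

For example, with integer states and a single move that increments while the state is below 2, the generator yields exactly the two applicable sequences of lengths 1 and 2.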


Beam search

Appears in 5 sentences as: Beam search (3) beam search (2)
In Efficient Search for Transformation-based Inference
  1. Beam search (Furcy and Koenig, 2005; Zhou and Hansen, 2005) limits the number of states stored in OPEN, while Greedy search limits OPEN to contain only the single best state generated in the current iteration.
    Page 3, “Background”
  2. In the earlier version of BIUTEE (Stern and Dagan, 2011), a version of beam search was incorporated, named hereafter BIUTEE-orig.
    Page 4, “Background”
  3. For beam search we used k = 150, since higher values
    Page 7, “Evaluation”
  4. Results table (values given as RTE-5 / RTE-6):
     Algorithm        Run time       Generated states   Proof cost
     WA*              7.85 / 8.53    9797 / 9979        1.05 / 10.28
     Greedy           0.20 / 0.10    468 / 158          1.10 / 10.55
     Pure heuristic   0.09 / 0.10    123 / 167          1.35 / 12.51
     Beam search      20.53 / 9.48   43925 / 18992      1.08 / 10.52
     BIUTEE-orig      7.86 / 14.61   14749 / 22795      1.03 / 10.28
     LLGS             2.76 / 1.72    1722 / 842         0.95 / 10.14
    Page 8, “Evaluation”
  5. Algorithm         RTE-5 accuracy %   RTE-6 F1 %
     Weighted A*       59.50              48.20
     Dovetailing WA*   60.83              49.01
     Greedy            60.50              48.56
     Pure heuristic    60.83              45.70
     Beam search       61.33              48.58
     BIUTEE-orig       60.67              49.25
     LLGS              64.00              49.09
    Page 8, “Evaluation”
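Sentence 1 above contrasts beam search, which caps the number of states kept in OPEN, with greedy search, which keeps only the single best one. The sketch below is a minimal illustration under generic successor and scoring functions, not the paper's transformation-based implementation; with k = 1 it degenerates into the greedy search described there.

```python
def beam_search(start, expand, score, is_goal, k, max_iters=100):
    """expand(state) -> list of successor states; score(state): lower is better."""
    open_list = [start]
    for _ in range(max_iters):
        # Expand every state currently in OPEN.
        successors = [s for state in open_list for s in expand(state)]
        if not successors:
            return None                      # search space exhausted
        for s in successors:
            if is_goal(s):
                return s
        # The beam: keep only the k best successors in OPEN.
        open_list = sorted(successors, key=score)[:k]
    return None
```

For example, searching for the integer 5 from 0 with successors x+1 and x+2 and score |5 - x|, a beam of width 2 reaches the goal in three expansion rounds.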
