Learning Word-Class Lattices for Definition and Hypernym Extraction
Navigli, Roberto and Velardi, Paola

Article Structure

Abstract

Definition extraction is the task of automatically identifying definitional sentences within texts.

Introduction

Textual definitions constitute a fundamental source to look up when the meaning of a term is sought.

Related Work

Definition Extraction.

Word-Class Lattices

3.1 Preliminaries

Example

As an example, consider the definitions in Table 1.

Experiments

5.1 Experimental Setup

Conclusions

In this paper, we have presented a lattice-based approach to definition and hypernym extraction.

Topics

hypernym

Appears in 35 sentences as: Hypernym (2) hypernym (26) hypernyms (13) “hypernyms” (1)
In Learning Word-Class Lattices for Definition and Hypernym Extraction
  1. Our method is applied to the task of definition and hypernym extraction and compares favorably to other pattern generalization methods proposed in the literature.
    Page 1, “Abstract”
  2. A key feature of our approach is its inherent ability to both identify definitions and extract hypernyms.
    Page 2, “Introduction”
  3. WCLs are shown to generalize over lexico-syntactic patterns, and outperform well-known approaches to definition and hypernym extraction.
    Page 2, “Introduction”
  4. Hypernym Extraction.
    Page 3, “Related Work”
  5. The literature on hypernym extraction offers a higher variability of methods, from simple lexical patterns (Hearst, 1992; Oakes, 2005) to statistical and machine learning techniques (Agirre et al., 2000; Caraballo, 1999; Dolan et al., 1993; Sanfilippo and Poznanski, 1992; Ritter et al., 2009).
    Page 3, “Related Work”
  6. Finally, they train a hypernym classifier based on these features.
    Page 3, “Related Work”
  7. Lexico-syntactic patterns are generated for each sentence relating a term to its hypernym, and a dependency parser is used to represent them.
    Page 3, “Related Work”
  8. The DEFINIENS field (GF): it includes the genus phrase (usually including the hypernym, e.g., “a first-class function”);
    Page 3, “Word-Class Lattices”
  9. For each sentence, the definiendum (that is, the word being defined) and its hypernym are marked in bold and italic, respectively.
    Page 3, “Word-Class Lattices”
  10. Furthermore, in the final lattice, nodes associated with the hypernym words in the learning sentences are marked as hypernyms in order to be able to determine the hypernym of a test sentence at classification time.
    Page 5, “Word-Class Lattices”
  11. Finally, when a sentence is classified as a definition, its hypernym is extracted by selecting the words in the input sentence that are marked as “hypernyms” in the WCL-1 lattice (or in the WCL-3 GF lattice).
    Page 5, “Word-Class Lattices”
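The snippets above describe the core mechanism: certain lattice nodes are marked as hypernym positions during learning, and at classification time the words of a matching test sentence that align with those nodes are returned as the hypernym. A minimal sketch of this idea, with a tiny hand-written pattern set standing in for learned lattices (the patterns, names, and matching scheme here are illustrative assumptions, not the authors' code):

```python
# Illustrative sketch of WCL-style definition classification and
# hypernym tagging. A learned lattice is approximated by a flat list
# of patterns; each slot is (word_class, is_hypernym), where "*" is a
# wildcard slot and is_hypernym marks a hypernym node.

PATTERNS = [
    [("TARGET", False), ("is", False), ("a", False), ("*", True)],
    [("TARGET", False), ("is", False), ("an", False), ("*", True)],
]

def classify(tokens):
    """Return (is_definition, hypernym_words) for a token list."""
    for pattern in PATTERNS:
        if len(tokens) < len(pattern):
            continue
        hypernym, matched = [], True
        for (slot, is_hyp), tok in zip(pattern, tokens):
            if slot != "*" and slot.lower() != tok.lower():
                matched = False
                break
            if is_hyp:
                hypernym.append(tok)  # word aligned with a hypernym node
        if matched:
            return True, hypernym
    return False, []

print(classify(["TARGET", "is", "a", "function"]))  # (True, ['function'])
```

A real WCL generalizes far beyond this: lattices are directed acyclic graphs built by aligning clusters of training sentences over word classes (frequent words kept lexically, the rest abstracted to part-of-speech tags), rather than a flat pattern list.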

See all papers in Proc. ACL 2010 that mention hypernym.

See all papers in Proc. ACL that mention hypernym.


Bigrams

Appears in 7 sentences as: bigram (3) Bigrams (4) bigrams (1)
In Learning Word-Class Lattices for Definition and Hypernym Extraction
  1. Bigrams: an implementation of the bigram classifier for soft pattern matching proposed by Cui et al.
    Page 6, “Experiments”
  2. The probability is calculated as a mixture of bigram and
    Page 6, “Experiments”
  3.                   P      R      F1     Acc
     WCL-1          99.88  42.09  59.22  76.06
     WCL-3          98.81  60.74  75.23  83.48
     Star patterns  86.74  66.14  75.05  81.84
     Bigrams        66.70  82.70  73.84  75.80
    Page 7, “Experiments”
  4. Section 2), their results do not show significant improvements over the bigram language model.
    Page 7, “Experiments”
  5.                   P      R
     WCL-1          98.33  39.39
     WCL-3          94.87  56.57
     Star patterns  44.01  63.63
     Bigrams        46.60  45.45
    Page 7, “Experiments”
  6. As expected, bigrams and star patterns exhibit a higher recall (82% and 66%, respectively).
    Page 7, “Experiments”
  7. Bigrams achieve even lower performance, namely 46.60% precision, 45.45% recall.
    Page 8, “Experiments”
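Snippets 1–2 above refer to the soft-pattern baseline of Cui et al., which scores a candidate sentence with a probability mixed from bigram and lower-order estimates trained on definitional sentences. A hedged sketch of such an interpolated score (the training data, smoothing, and mixture weight are illustrative assumptions, not the baseline's exact settings):

```python
# Sketch of a soft-pattern-style score: interpolate a bigram
# probability with a unigram fallback, both estimated from a small
# corpus of definitional sentences.
from collections import Counter

def train(sentences):
    """Collect unigram and bigram counts from tokenized sentences."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        uni.update(s)
        bi.update(zip(s, s[1:]))
    return uni, bi

def score(tokens, uni, bi, lam=0.7):
    """Product over positions of lam*P(w2|w1) + (1-lam)*P(w2)."""
    total = sum(uni.values())
    p = 1.0
    for w1, w2 in zip(tokens, tokens[1:]):
        p_bi = bi[(w1, w2)] / uni[w1] if uni[w1] else 0.0
        p_uni = uni[w2] / total
        p *= lam * p_bi + (1 - lam) * p_uni
    return p

uni, bi = train([["X", "is", "a", "device"], ["Y", "is", "a", "tool"]])
# A definition-shaped sentence scores higher than a shuffled one.
print(score(["X", "is", "a", "tool"], uni, bi) >
      score(["tool", "a", "is", "X"], uni, bi))
```

The unigram fallback keeps the product from collapsing to zero on unseen bigrams, which is the point of "soft" rather than exact pattern matching.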


question answering

Appears in 4 sentences as: Question Answering (1) question answering (3)
In Learning Word-Class Lattices for Definition and Hypernym Extraction
  1. The task has proven useful in many research areas including ontology learning, relation extraction and question answering.
    Page 1, “Abstract”
  2. Definitions are also harvested in Question Answering to deal with “what is” questions (Cui et al., 2007; Saggion, 2004).
    Page 1, “Introduction”
  3. (2007) propose the use of probabilistic lexico-semantic patterns, called soft patterns, for definitional question answering in the TREC contest.
    Page 2, “Related Work”
  4. Thanks to its generalization power, this method is the most closely related to our work; however, the task of definitional question answering to which it is applied is slightly different from that of definition extraction, so a direct performance comparison is not possible.
    Page 2, “Related Work”
