Abstract | Our method is applied to the task of definition and hypernym extraction and compares favorably to other pattern generalization methods proposed in the literature. |
Introduction | A key feature of our approach is its inherent ability to both identify definitions and extract hypernyms.
Introduction | WCLs are shown to generalize over lexico-syntactic patterns, and outperform well-known approaches to definition and hypernym extraction. |
Related Work | Hypernym Extraction. |
Related Work | The literature on hypernym extraction offers a higher variability of methods, from simple lexical patterns (Hearst, 1992; Oakes, 2005) to statistical and machine learning techniques (Agirre et al., 2000; Caraballo, 1999; Dolan et al., 1993; Sanfilippo and Poznanski, 1992; Ritter et al., 2009).
Related Work | Finally, they train a hypernym classifier based on these features.
Word-Class Lattices | • The DEFINIENS field (GF): it includes the genus phrase (usually including the hypernym, e.g., “a first-class function”);
Word-Class Lattices | For each sentence, the definiendum (that is, the word being defined) and its hypernym are marked in bold and italic, respectively. |
Word-Class Lattices | Furthermore, in the final lattice, nodes associated with the hypernym words in the learning sentences are marked as hypernyms, so that the hypernym of a test sentence can be determined at classification time.
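Word-Class Lattices | The following is a deliberately simplified, hypothetical sketch of this idea in Python: the paper's lattices are general DAGs over word classes, whereas here the lattice is a flat sequence of admissible-class sets with one position flagged as the hypernym slot.

```python
STAR = "*"  # wildcard word class standing in for POS-generalized tokens

# Toy linear "lattice" learned from definitional sentences such as
# "A closure is a first-class function ...": each position holds the set
# of admissible word classes (frequent words or the wildcard).
lattice = [
    {"a", "an", "the"},   # determiner of the definiendum
    {STAR},               # the definiendum itself
    {"is", "are"},        # definitor verb
    {"a", "an"},          # determiner of the genus phrase
    {STAR},               # genus head: a hypernym-marked node
]
hypernym_slots = {4}      # positions whose aligned token is returned as hypernym

def classify(tokens):
    """Return the extracted hypernym tokens if the sentence matches, else None."""
    if len(tokens) < len(lattice):
        return None
    for i, classes in enumerate(lattice):
        if STAR not in classes and tokens[i].lower() not in classes:
            return None
    return [tokens[i] for i in sorted(hypernym_slots)]

print(classify("A closure is a function with captured context".split()))
# -> ['function']
```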
Experiments | The spin model approach uses word glosses, WordNet synonym, hypernym, and antonym relations, in addition to co-occurrence statistics extracted from a corpus.
Experiments | The proposed method achieves better performance by using only WordNet synonym, hypernym, and similar-to relations.
Experiments | We build a network using only WordNet synonyms and hypernyms.
Word Polarity | For example, we can use other WordNet relations: hypernyms, similar-to, etc.
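Word Polarity | As a rough illustration (not the paper's implementation), this relation expansion could be realized with NLTK's WordNet interface; the function `related_words` and its defaults are hypothetical:

```python
# Hypothetical sketch: collect words reachable from `word` via the WordNet
# relations mentioned above (synonymy, hypernymy, similar-to) using NLTK.
from nltk.corpus import wordnet as wn

def related_words(word, pos=wn.ADJ):
    related = set()
    for synset in wn.synsets(word, pos=pos):
        related.update(l.name() for l in synset.lemmas())       # synonyms
        for hyper in synset.hypernyms():                        # hypernyms
            related.update(l.name() for l in hyper.lemmas())
        for sim in synset.similar_tos():                        # similar-to
            related.update(l.name() for l in sim.lemmas())
    related.discard(word)
    return related

print(sorted(related_words("good"))[:10])
```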
Experimental Evaluation | Another local resource was WordNet, where we inserted an edge (u, v) when v was a direct hypernym or synonym of u.
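Experimental Evaluation | A minimal sketch of this edge-insertion rule, assuming networkx for the graph and NLTK for the WordNet lookups (both are my choices, not the paper's):

```python
# Insert an undirected edge (u, v) whenever v is a direct hypernym or
# synonym of u, restricted to a given vocabulary.
import networkx as nx
from nltk.corpus import wordnet as wn

def wordnet_graph(vocabulary):
    G = nx.Graph()
    G.add_nodes_from(vocabulary)
    for u in vocabulary:
        for synset in wn.synsets(u):
            neighbors = {l.name() for l in synset.lemmas()}      # synonyms of u
            for hyper in synset.hypernyms():                     # direct hypernyms
                neighbors.update(l.name() for l in hyper.lemmas())
            for v in neighbors & vocabulary:
                if v != u:
                    G.add_edge(u, v)
    return G

G = wordnet_graph({"dog", "canine", "animal", "cat"})
print(sorted(G.edges()))
```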
Learning Entailment Graph Edges | For each template t ∈ T with two variables and a single predicate word w, we extract from WordNet the set H of direct hypernyms and synonyms of w.
Learning Entailment Graph Edges | Negative examples are generated analogously, by looking at direct co-hyponyms of w instead of hypernyms and synonyms.
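Learning Entailment Graph Edges | A hedged sketch of this example-generation step, reduced to a single predicate word and NLTK's WordNet (the function below is illustrative, not the authors' code):

```python
# Positive examples pair a predicate word with its direct hypernyms and
# synonyms; negative examples pair it with direct co-hyponyms, i.e., other
# hyponyms of a shared direct hypernym.
from nltk.corpus import wordnet as wn

def generate_examples(word, pos=wn.VERB):
    positives, negatives = set(), set()
    for synset in wn.synsets(word, pos=pos):
        positives.update(l.name() for l in synset.lemmas())      # synonyms
        for hyper in synset.hypernyms():
            positives.update(l.name() for l in hyper.lemmas())   # direct hypernyms
            for sibling in hyper.hyponyms():                     # co-hyponyms
                if sibling != synset:
                    negatives.update(l.name() for l in sibling.lemmas())
    positives.discard(word)
    negatives -= positives | {word}
    return positives, negatives

positives, negatives = generate_examples("buy")
print(sorted(positives)[:5], sorted(negatives)[:5])
```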
Learning Entailment Graph Edges | Combined with the constraint of transitivity, this implies that there must be no path from u to v. This is done in the following two scenarios: (1) when two nodes u and v are identical except for a pair of words w_u and w_v, and w_u is an antonym of w_v, or a hypernym of w_v at distance ≥ 2.
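Learning Entailment Graph Edges | The two lexical tests in scenario (1) could be checked as follows; this is a sketch over NLTK's WordNet, with a bounded upward walk standing in for the paper's exact distance computation:

```python
from nltk.corpus import wordnet as wn

def is_antonym(w_u, w_v):
    """True if w_u appears among the antonyms of some sense of w_v."""
    return any(w_u == ant.name()
               for synset in wn.synsets(w_v)
               for lemma in synset.lemmas()
               for ant in lemma.antonyms())

def is_distant_hypernym(w_u, w_v, min_distance=2, max_depth=10):
    """True if some sense of w_u is a hypernym of a sense of w_v at a
    distance of at least min_distance along direct-hypernym links."""
    targets = set(wn.synsets(w_u))
    frontier = set(wn.synsets(w_v))
    for depth in range(1, max_depth + 1):
        frontier = {h for s in frontier for h in s.hypernyms()}
        if depth >= min_distance and frontier & targets:
            return True
    return False

print(is_antonym("hot", "cold"))             # True
print(is_distant_hypernym("animal", "dog"))  # True: dog -> domestic_animal -> animal
```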
Conclusions | We also intend to link missing concepts in WordNet, by establishing their most likely hypernyms, e.g., à la Snow et al.
Methodology | • Hypernymy/Hyponymy: all synonyms in the synsets H such that H is either a hypernym (i.e., a generalization) or a hyponym (i.e., a specialization) of S. For example, given {balloon}, we include the words from its hypernym {lighter-than-air craft} and all its hyponyms (e.g., …).
Methodology | • Sisterhood: words from the sisters of S. A sister synset S′ is such that S and S′ have a common direct hypernym.
Methodology | To do so, we include words from their synsets, hypernyms, hyponyms, sisters, and glosses.
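Methodology | A minimal sketch of this expansion with NLTK's WordNet (the function name and POS default are mine, not the paper's):

```python
# For each sense of `word`, gather its synset words, hypernym and hyponym
# words, sister words (other hyponyms of a shared direct hypernym), and
# the tokens of its gloss.
from nltk.corpus import wordnet as wn

def expand(word, pos=wn.NOUN):
    bag = set()
    for s in wn.synsets(word, pos=pos):
        bag.update(l.name() for l in s.lemmas())                 # synset words
        for rel in s.hypernyms() + s.hyponyms():                 # hyper-/hyponyms
            bag.update(l.name() for l in rel.lemmas())
        for hyper in s.hypernyms():                              # sisters
            for sister in hyper.hyponyms():
                if sister != s:
                    bag.update(l.name() for l in sister.lemmas())
        bag.update(s.definition().split())                       # gloss tokens
    return bag

print(sorted(expand("balloon"))[:10])
```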
Automated Classification | • {Synonyms, Hypernyms} for all NN and VB entries for each word
Automated Classification | • Intersection of the words’ hypernyms
Automated Classification | In fact, by themselves they proved roughly as useful as the hypernym features, and their removal had the single strongest negative impact on accuracy for our dataset. |
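Automated Classification | The hypernym-intersection feature above can be made concrete with a small sketch over NLTK's WordNet; the helper names below are hypothetical:

```python
# Intersect the full hypernym ancestries of two words (over their noun
# senses); a non-empty intersection indicates a shared semantic class.
from nltk.corpus import wordnet as wn

def hypernym_ancestors(word, pos=wn.NOUN):
    seen, frontier = set(), set(wn.synsets(word, pos=pos))
    while frontier:
        frontier = {h for s in frontier for h in s.hypernyms()} - seen
        seen |= frontier
    return seen

def shared_hypernyms(word1, word2):
    return hypernym_ancestors(word1) & hypernym_ancestors(word2)

print(sorted(s.name() for s in shared_hypernyms("dog", "cat"))[:5])
```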