Index of papers in Proc. ACL 2014 that mention
  • hypernym
Flati, Tiziano and Vannella, Daniele and Pasini, Tommaso and Navigli, Roberto
Introduction
However, unlike the case with smaller manually-curated resources such as WordNet (Fellbaum, 1998), in many large automatically-created resources the taxonomical information is either missing, mixed across resources, e.g., linking Wikipedia categories to WordNet synsets as in YAGO, or coarse-grained, as in DBpedia whose hypernyms link to a small upper taxonomy.
Phase 1: Inducing the Page Taxonomy
For each p ∈ P our aim is to identify the most suitable generalization p_h ∈ P so that we can create the edge (p, p_h) and add it to E. For instance, given the page APPLE, which represents the fruit meaning of apple, we want to determine that its hypernym is FRUIT and add the hypernym edge connecting the two pages (i.e., E := E ∪ {(APPLE, FRUIT)}).
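A minimal sketch of this edge-insertion step, assuming pages are plain strings and the edge set E is a Python set of (page, hypernym) pairs (names and representation are illustrative, not WiBi's actual code):

# Sketch: a page taxonomy as a set of directed hypernym edges.
# P is the page inventory; E collects (page, hypernym) pairs.
P = {"APPLE", "FRUIT", "ROME", "CITY"}
E = set()

def add_hypernym_edge(page, hypernym):
    # Only link nodes that exist in the page inventory P.
    if page in P and hypernym in P:
        E.add((page, hypernym))

add_hypernym_edge("APPLE", "FRUIT")  # E := E ∪ {(APPLE, FRUIT)}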
Phase 1: Inducing the Page Taxonomy
3.1 Syntactic step: hypernym extraction
Phase 1: Inducing the Page Taxonomy
In the syntactic step, for each page p ∈ P, we extract zero, one or more hypernym lemmas, that is, we output potentially ambiguous hypernyms for the page.
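As a rough illustration of such an extraction step (not the paper's actual extractor), one can pull the attribute of a copular definition using spaCy's dependency labels:

import spacy

nlp = spacy.load("en_core_web_sm")

def extract_hypernym_lemmas(definition):
    # In a copular definition ("An apple is a fruit ..."), the noun
    # attached to "be" as its attribute is a hypernym candidate.
    doc = nlp(definition)
    return [t.lemma_ for t in doc
            if t.dep_ == "attr" and t.head.lemma_ == "be"]

extract_hypernym_lemmas("An apple is a fruit produced by an apple tree.")
# -> ["fruit"]  (zero or several lemmas are possible for other definitions)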
WiBi: A Wikipedia Bitaxonomy
Creation of the initial page taxonomy: we first create a taxonomy for the Wikipedia pages by parsing textual definitions, extracting the hypernym(s) and disambiguating them according to the page inventory.
WiBi: A Wikipedia Bitaxonomy
At each iteration, the links in the page taxonomy are used to identify category hypernyms and, conversely, the new category hypernyms are used to identify more page hypernyms.
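A schematic of this alternating loop, with the two inference steps left as placeholder functions (the actual WiBi heuristics are much richer):

def build_bitaxonomy(page_edges, category_edges,
                     infer_category_edges, infer_page_edges):
    # Alternate until neither taxonomy gains a new hypernym edge.
    while True:
        new_cat = infer_category_edges(page_edges) - category_edges
        new_page = infer_page_edges(category_edges | new_cat) - page_edges
        if not new_cat and not new_page:
            return page_edges, category_edges
        category_edges |= new_cat
        page_edges |= new_page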
hypernym is mentioned in 79 sentences in this paper.
Fu, Ruiji and Guo, Jiang and Qin, Bing and Che, Wanxiang and Wang, Haifeng and Liu, Ting
Abstract
We identify whether a candidate word pair has a hypernym-hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms.
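Fu et al. actually learn piecewise linear projections; as a simplified sketch of the idea, a single projection matrix M mapping hyponym vectors x to hypernym vectors y can be fit by least squares:

import numpy as np

def learn_projection(X, Y):
    # Rows of X are hyponym embeddings, rows of Y their hypernyms'
    # embeddings; solve min ||X A - Y||_F and return M = A^T so M @ x ~ y.
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A.T

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # stand-in hyponym vectors
M_true = rng.normal(size=(50, 50))
Y = X @ M_true.T                  # stand-in hypernym vectors
M = learn_projection(X, Y)        # recovers M_true up to numerical error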
Background
The pioneering work of Hearst (1992) found that linking two noun phrases (NPs) via certain lexical constructions often implies hypernym relations.
Background
For example, NP1 is a hypernym of NP2 in the lexical pattern "such NP1 as NP2".
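A toy version of this pattern as a regular expression over plain text (real extractors match noun phrases from a parser, not a crude word regex):

import re

NP = r"[A-Za-z]+"                          # crude one-word NP stand-in
SUCH_AS = re.compile(rf"such ({NP}) as ({NP})")

m = SUCH_AS.search("works by such authors as Herrick")
hypernym, hyponym = m.group(1), m.group(2)  # ("authors", "Herrick")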
Introduction
Here, “canine” is called a hypernym of “dog.” Conversely, “dog” is a hyponym of “canine.” As key sources of knowledge, semantic thesauri and ontologies can support many natural language processing applications.
Introduction
(2013) propose a distant supervision method to extract hypernyms for entities from multiple sources.
Introduction
The output of their model is a list of hypernyms for a given entity (left panel, Figure 1).
hypernym is mentioned in 41 sentences in this paper.
Litkowski, Ken
Assessment of Lexical Resources
This includes the WordNet lexicographer’s file name (e.g., noun.time), synsets, and hypernyms.
Assessment of Lexical Resources
We make extensive use of the file name, but less so of the synsets and hypernyms.
Assessment of Lexical Resources
However, in general, we find that the file names are too coarse-grained and the synsets and hypernyms too fine-grained for generalizations on the selectors for the complements and the governors.
See http://clg.wlv.ac.uk/projects/DVC
The feature extraction rules are (1) word class (wc), (2) part of speech (pos), (3) lemma (l), (4) word (w), (5) WordNet lexical name (ln), (6) WordNet synonyms (s), (7) WordNet hypernyms (h), (8) whether the word is capitalized (c), and (9) affixes (af).
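A sketch of the three WordNet-backed features in this list, using NLTK's WordNet interface (crudely taking the first sense; the paper's own extractor may differ):

from nltk.corpus import wordnet as wn

def wordnet_features(lemma, pos=wn.NOUN):
    synsets = wn.synsets(lemma, pos=pos)
    if not synsets:
        return {}
    s = synsets[0]  # crude first-sense choice
    return {
        "ln": s.lexname(),                       # e.g. noun.animal
        "s": [l.name() for l in s.lemmas()],     # synonyms
        "h": [h.name() for h in s.hypernyms()],  # hypernym synsets
    }

wordnet_features("dog")
# {'ln': 'noun.animal', 's': [...], 'h': ['canine.n.02', 'domestic_animal.n.01']}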
See http://clg.wlv.ac.uk/projects/DVC
For features such as the WordNet lexical name, synonyms and hypernyms, the number of values may be much larger.
hypernym is mentioned in 5 sentences in this paper.
Bansal, Mohit and Burkett, David and de Melo, Gerard and Klein, Dan
Abstract
We present a structured learning approach to inducing hypernym taxonomies using a probabilistic graphical model formulation.
Experiments
Comparison setup: We also compare our method (as closely as possible) with related previous work by testing on the much larger animal subtree made available by Kozareva and Hovy (2010), who created this dataset by selecting a set of ‘harvested’ terms and retrieving all the WordNet hypernyms between each input term and the root (i.e., animal), resulting in ~700 terms and ~4,300 is-a ancestor-child links. Our training set for this animal test case was generated from WordNet using the following process: First, we strictly remove the full animal subtree from WordNet in order to avoid any possible overlap with the test data.
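A sketch of that ancestor-link harvesting with NLTK's WordNet (a simplification of the dataset construction described above; names and the root synset are illustrative):

from nltk.corpus import wordnet as wn

def isa_links_below(term, root="animal.n.01"):
    # Collect (child, parent) is-a links on every hypernym path from
    # the term's noun senses up to the chosen root synset.
    links = set()
    for synset in wn.synsets(term, pos=wn.NOUN):
        for path in synset.hypernym_paths():      # ordered root-first
            names = [s.name() for s in path]
            if root in names:
                chain = names[names.index(root):]
                links |= set(zip(chain[1:], chain))
    return links

isa_links_below("dog")  # includes ('canine.n.02', 'carnivore.n.01'), ...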
Experiments
(2011) and see a small gain in F1, but regardless, we should note that their results are incomparable (denoted by * in Table 2) because they have a different ground-truth data condition: their definition and hypernym extraction phase involves using the Google define keyword, which often returns WordNet glosses itself.
Introduction
Our model takes a log-linear form and is represented using a factor graph that includes both 1st-order scoring factors on directed hypernymy edges (a parent and child in the taxonomy) and 2nd-order scoring factors on sibling edge pairs (pairs of hypernym edges with a shared parent), as well as incorporating a global (directed spanning tree) structural constraint.
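A minimal sketch of that log-linear score over a candidate tree, with the factor functions left abstract and the spanning-tree inference omitted:

def log_score(edges, edge_factor, sibling_factor):
    # 1st-order factors: one per directed (parent, child) hypernym edge.
    score = sum(edge_factor(p, c) for p, c in edges)
    # 2nd-order factors: one per pair of edges sharing a parent.
    children = {}
    for p, c in edges:
        children.setdefault(p, []).append(c)
    for p, kids in children.items():
        for i in range(len(kids)):
            for j in range(i + 1, len(kids)):
                score += sibling_factor(p, kids[i], kids[j])
    return score  # unnormalized log-probability of this taxonomy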
hypernym is mentioned in 4 sentences in this paper.