Index of papers in Proc. ACL 2011 that mention
  • WordNet
Khapra, Mitesh M. and Joshi, Salil and Chatterjee, Arindam and Bhattacharyya, Pushpak
Experimental Setup
degree of Wordnet polysemy for polysemous words
Experimental Setup
Table 4: Average degree of Wordnet polysemy per category in the 2 domains for Hindi
Experimental Setup
degree of Wordnet polysemy for polysemous words
Introduction
This is achieved with the help of a novel synset-aligned multilingual dictionary which facilitates the projection of parameters learned from the Wordnet and annotated corpus of L1 to L2.
Parameter Projection
Wordnet-dependent parameters depend on the structure of the Wordnet whereas the Corpus-dependent parameters depend on various statistics learned from a sense marked corpora.
Parameter Projection
Both the tasks of (a) constructing a Wordnet from scratch and (b) collecting sense marked corpora for multiple languages are tedious and expensive.
Parameter Projection
(2009) observed that by projecting relations from the Wordnet of a language and by projecting corpus statistics from the sense marked corpora of the language to those of the target language, the effort required in constructing semantic graphs for multiple Wordnets and collecting sense marked corpora for multiple languages can be avoided or reduced.
Related Work
They showed that it is possible to project the parameters learned from the annotation work of one language to another language provided aligned Wordnets for the two languages are available.
Related Work
However, they do not address situations where two resource deprived languages have aligned Wordnets but neither has sufficient annotated data.
WordNet is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
Veale, Tony
Creative Text Retrieval
A generic, lightweight resource like WordNet can provide these relations, or a richer ontology can be used if one is available (e.g.
Creative Text Retrieval
But ad-hoc categories do not replace natural kinds; rather, they supplement an existing system of more-or-less rigid categories, such as the categories found in WordNet.
Creative Text Retrieval
member of the category named by C. @C can denote a fixed category in a resource like WordNet or even Wikipedia; thus, @fruit matches any member of {apple, orange, pear, lemon} and @animal any member of {dog, cat, mouse, deer, fox}.
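The extract above describes a category wildcard: @C matches any member of the fixed category C. A minimal sketch of that matching rule, using a small hand-made category table in place of WordNet or Wikipedia (the names `CATEGORIES` and `matches` are illustrative, not from the paper):

```python
# Hypothetical stand-in for a WordNet/Wikipedia category lookup.
CATEGORIES = {
    "fruit": {"apple", "orange", "pear", "lemon"},
    "animal": {"dog", "cat", "mouse", "deer", "fox"},
}

def matches(term: str, word: str) -> bool:
    """@fruit matches any member of the 'fruit' category;
    a plain term matches only itself."""
    if term.startswith("@"):
        return word in CATEGORIES.get(term[1:], set())
    return word == term
```

So `matches("@fruit", "pear")` is true while `matches("@animal", "apple")` is false, mirroring the set-membership reading of @C above.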
Empirical Evaluation
and @ as category builders to a handcrafted gold standard like WordNet.
Empirical Evaluation
Other researchers have likewise used WordNet as a gold standard for categorization experiments, and we replicate here the experimental setup of Almuhareb and Poesio (2004, 2005), which is designed to measure the effectiveness of web-acquired conceptual descriptions.
Empirical Evaluation
Almuhareb and Poesio choose 214 English nouns from 13 of WordNet’s upper-level semantic categories, and proceed to harvest property values for these concepts from the web using the Hearst-like pattern “a|an|the * C is|was”.
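The harvesting pattern in the extract above can be sketched as a regular expression, where the wildcard slot captures a candidate property value for the concept C (the function name and example sentence are illustrative, not from the paper):

```python
import re

def harvest_pattern(concept: str) -> re.Pattern:
    """Regex for the Hearst-like pattern 'a|an|the * C is|was';
    the wildcard captures one candidate property word for concept C."""
    return re.compile(
        r"\b(?:a|an|the)\s+(\w+)\s+" + re.escape(concept) + r"\s+(?:is|was)\b",
        re.IGNORECASE,
    )

pat = harvest_pattern("deer")
m = pat.search("I saw that the shy deer was gone.")
# m.group(1) == "shy"
```

A web-scale harvester would run such a pattern over snippet text and collect the captured values per concept.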
Related Work and Ideas
Techniques vary, from the use of stemmers and morphological analysis to the use of thesauri (such as WordNet; see Fellbaum, 1998; Voorhees, 1998) to pad a query with synonyms, to the use of statistical analysis to identify more appropriate context-sensitive associations and near-synonyms (e.g.
Related Work and Ideas
Hearst (1992) shows how a pattern like “Xs and other Ys” can be used to construct more fluid, context-specific taxonomies than those provided by WordNet (e.g.
WordNet is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Berant, Jonathan and Dagan, Ido and Goldberger, Jacob
Background
A widely-used resource is WordNet (Fellbaum, 1998), where relations such as synonymy and hyponymy can be used to generate rules.
Experimental Evaluation
single pair of words that are WordNet antonyms (2) Predicates differing by a single word of negation (3) Predicates p(t1, t2) and p(t2, t1) where p is a transitive verb (e.g., beat) in VerbNet (Kipper-Schuler et al., 2000).
Learning Typed Entailment Graphs
Given a lexicographic resource (WordNet) and a set of predicates with their instances, we perform the following three steps (see Table 1):
Learning Typed Entailment Graphs
1) Training set generation We use WordNet to generate positive and negative examples, where each example is a pair of predicates.
Learning Typed Entailment Graphs
For every predicate p(t1, t2) ∈ P such that p is a single word, we extract from WordNet the set S of synonyms and direct hypernyms of p. For every p′ ∈ S, if p′(t1, t2) ∈ P then p(t1, t2) → p′(t1, t2) is taken as a positive example.
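The positive-example generation step described in the extract above can be sketched as follows, with a small hand-made dict standing in for the WordNet synonym/direct-hypernym lookup (the names `SYN_HYPER`, `positive_examples`, and the sample predicates are illustrative, not from the paper):

```python
# Hypothetical stand-in for WordNet synonyms + direct hypernyms of a predicate.
SYN_HYPER = {
    "buy": {"purchase", "acquire"},
    "beat": {"defeat"},
}

def positive_examples(observed):
    """observed: set of (predicate, t1, t2) triples extracted from text.
    For each single-word predicate p with instance (t1, t2), take each
    synonym/hypernym p2; if p2(t1, t2) is also observed, emit the rule
    p(t1, t2) -> p2(t1, t2) as a positive training example."""
    out = []
    for p, t1, t2 in observed:
        for p2 in SYN_HYPER.get(p, ()):
            if (p2, t1, t2) in observed:
                out.append(((p, t1, t2), (p2, t1, t2)))
    return out

obs = {("buy", "company", "startup"), ("acquire", "company", "startup")}
# positive_examples(obs) == [(("buy", "company", "startup"),
#                             ("acquire", "company", "startup"))]
```

Negative examples would be generated analogously from the antonym/negation/transitive-verb conditions listed under Experimental Evaluation.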
WordNet is mentioned in 6 sentences in this paper.
Topics mentioned in this paper: