Index of papers in Proc. ACL 2009 that mention
  • distributional similarity
McIntosh, Tara and Curran, James R.
Abstract
We propose an integrated distributional similarity filter to identify and censor potential semantic drifts, ensuring over 10% higher precision when extracting large semantic lexicons.
Background
2.2 Distributional Similarity
Background
Distributional similarity has been used to extract semantic lexicons (Grefenstette, 1994), based on the distributional hypothesis that semantically similar words appear in similar contexts (Harris, 1954).
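To make the distributional hypothesis concrete, here is a minimal sketch (not from the paper; the toy corpus and window size are assumptions) that builds context-window count vectors and compares words by cosine similarity:

```python
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=2):
    """Map each word to a Counter of words seen within +/- window positions."""
    vectors = {}
    for tokens in sentences:
        for i, word in enumerate(tokens):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[f] * v[f] for f in set(u) & set(v))
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the doctor examined the patient".split(),
    "the nurse examined the patient".split(),
    "the doctor treated the illness".split(),
]
vecs = context_vectors(corpus)
print(cosine(vecs["doctor"], vecs["nurse"]))    # high: shared contexts
print(cosine(vecs["doctor"], vecs["illness"]))  # lower: fewer shared contexts
```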
Background
(2006) used 11 patterns and the distributional similarity score of each pair of terms to construct features for lexical entailment.
Conclusion
In this paper, we have proposed unsupervised bagging and integrated distributional similarity to minimise the problem of semantic drift in iterative bootstrapping algorithms, particularly when extracting large semantic lexicons.
Detecting semantic drift
In this section, we propose distributional similarity measurements over the extracted lexicon to detect semantic drift during the bootstrapping process.
Detecting semantic drift
We calculate the average distributional similarity (sim) of t with all terms in L_{1..n} and those in L_{(N-m)..N}, and call the ratio the drift for term t:
drift(t, n, m) = sim(L_{1..n}, t) / sim(L_{(N-m)..N}, t)
Detecting semantic drift
For calculating drift we use the distributional similarity approach described in Curran (2004).
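Putting the drift definition above into code: a hypothetical sketch assuming sim(a, b) is any pairwise distributional similarity function (the paper plugs in the measure of Curran (2004)); the n, m, and threshold settings here are illustrative, not the paper's.

```python
def avg_sim(terms, t, sim):
    """Average distributional similarity of candidate t with a list of lexicon terms."""
    return sum(sim(x, t) for x in terms) / len(terms)

def drift(lexicon, t, n, m, sim):
    """drift(t, n, m) = sim(L_1..n, t) / sim(L_(N-m)..N, t).
    A low value means t is closer to recently added terms than to the
    trusted early terms -- the signature of semantic drift."""
    head = lexicon[:n]    # seeds and early, high-precision additions
    tail = lexicon[-m:]   # the m terms added most recently
    denom = avg_sim(tail, t, sim)
    return float("inf") if denom == 0 else avg_sim(head, t, sim) / denom

def keep(lexicon, candidate, sim, n=100, m=20, threshold=0.2):
    """Censor a candidate whose drift falls below the threshold
    (n, m, and threshold are illustrative settings)."""
    return drift(lexicon, candidate, n, m, sim) >= threshold
```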
Introduction
We integrate a distributional similarity filter directly into WMEB (McIntosh and Curran, 2008).
Introduction
Our distributional similarity filter gives a similar performance improvement.
distributional similarity is mentioned in 12 sentences in this paper.
Zapirain, Beñat and Agirre, Eneko and Màrquez, Lluís
Abstract
The best results are obtained with a novel second-order distributional similarity measure, and the positive effect is especially relevant for out-of-domain data.
Related Work
Distributional similarity has also been used to tackle syntactic ambiguity.
Related Work
Pantel and Lin (2000) obtained very good results using the distributional similarity measure defined by Lin (1998).
Related Work
The results over 100 frame-specific roles showed that distributional similarities yield smaller error rates than Resnik and EM, with Lin’s formula having the smallest error rate.
Results and Discussion
Regarding the selectional preference variants, WordNet-based and first-order distributional similarity models attain similar levels of precision, but the former are clearly worse on recall and F1.
Results and Discussion
The second-order distributional similarity measures perform best overall, both in precision and recall.
Results and Discussion
Regarding the similarity metrics, the cosine seems to perform consistently better for first-order distributional similarity, while Jaccard provided slightly better results for second-order similarity.
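To illustrate the first-order vs. second-order distinction and the two metrics compared above: a minimal sketch in which first-order similarity compares context vectors directly, while second-order similarity compares vectors of first-order similarity scores over a shared vocabulary. The toy vectors and the weighted (min/max) Jaccard variant are assumptions for illustration, not the authors' exact setup.

```python
def cosine(u, v):
    """Cosine over sparse feature->weight dicts."""
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    nu = sum(w * w for w in u.values()) ** 0.5
    nv = sum(w * w for w in v.values()) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def jaccard(u, v):
    """Weighted Jaccard: sum of min weights over sum of max weights."""
    feats = set(u) | set(v)
    num = sum(min(u.get(f, 0.0), v.get(f, 0.0)) for f in feats)
    den = sum(max(u.get(f, 0.0), v.get(f, 0.0)) for f in feats)
    return num / den if den else 0.0

def second_order(word, vectors, vocabulary, first_order=cosine):
    """Re-represent a word by its first-order similarities to every vocabulary word."""
    return {w: first_order(vectors[word], vectors[w]) for w in vocabulary}

# Toy first-order context vectors (feature -> weight), purely illustrative:
vectors = {
    "knife":    {"cut": 3.0, "kitchen": 1.0},
    "scissors": {"cut": 2.0, "paper": 1.0},
    "sword":    {"cut": 1.0, "battle": 2.0},
}
vocab = list(vectors)
print(jaccard(vectors["knife"], vectors["sword"]))    # first-order, Jaccard
print(cosine(second_order("knife", vectors, vocab),
             second_order("sword", vectors, vocab)))  # second-order, cosine
```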
Selectional Preference Models
Distributional SP models: Given the availability of public resources for distributional similarity, we used 1) a ready-made thesaurus (Lin, 1998), and 2) software (Padó and Lapata, 2007), which we ran on the British National Corpus (BNC).
distributional similarity is mentioned in 13 sentences in this paper.
Kotlerman, Lili and Dagan, Ido and Szpektor, Idan and Zhitomirsky-Geffet, Maayan
Background
To date, most distributional similarity research has concentrated on symmetric measures, such as the widely cited and competitive (as shown by Weeds and Weir (2003)) LIN measure (Lin, 1998).
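The LIN measure is standard, so a direct code transcription is possible; the representation of feature vectors as weight maps (Lin used pointwise mutual information weights) is the only assumption here.

```python
def lin(wu, wv):
    """Lin (1998) similarity of terms u and v, given their feature-weight
    maps wu and wv: the weight mass of the shared features, counted for
    both terms, normalised by the total weight mass of both vectors."""
    shared = set(wu) & set(wv)
    num = sum(wu[f] + wv[f] for f in shared)
    den = sum(wu.values()) + sum(wv.values())
    return num / den if den else 0.0

# e.g. with (hypothetical) PMI-weighted grammatical-relation features:
u = {"obj_of:eat": 2.1, "mod:fresh": 1.3}
v = {"obj_of:eat": 1.8, "mod:ripe": 0.9}
print(lin(u, v))  # (2.1 + 1.8) / (2.1 + 1.3 + 1.8 + 0.9) ~= 0.64
```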
Evaluation and Results
In this setting, category names were taken as seeds and expanded by distributional similarity, further measuring cosine similarity with categorized documents, similarly to IR query expansion.
Introduction
Much work on automatic identification of semantically similar terms exploits Distributional Similarity, assuming that such terms appear in similar contexts.
Introduction
This paper is motivated by one of the prominent applications of distributional similarity , namely identifying lexical expansions.
Introduction
Often, distributional similarity measures are used to identify expanding terms.
distributional similarity is mentioned in 6 sentences in this paper.
Korkontzelos, Ioannis and Manandhar, Suresh
Introduction and related work
In this paper, we propose a novel unsupervised approach that compares the major senses of an MWE and its semantic head using distributional similarity measures to test the compositionality of the MWE.
Proposed approach
We used two techniques to measure the distributional similarity of major uses of the MWE and its semantic head, both based on the Jaccard coefficient (J).
Proposed approach
Given the major uses of an MWE and its semantic head, the MWE is considered compositional when the value of the corresponding distributional similarity measure (Jc or the second Jaccard-based variant) is above a parameter threshold, sim.
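A minimal sketch of the threshold test just described, assuming each major use is represented as a set of co-occurring context words; the example sets and the sim value are illustrative.

```python
def jaccard(a, b):
    """Jaccard coefficient J = |A & B| / |A | B| over sets of context words."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def is_compositional(mwe_use, head_use, sim=0.25):
    """An MWE counts as compositional when the similarity of the major use
    of the MWE and that of its semantic head clears the threshold sim."""
    return jaccard(mwe_use, head_use) >= sim

# e.g. context words of the induced major uses of "traffic light" vs. "light":
mwe_ctx = {"red", "green", "intersection", "stop", "signal"}
head_ctx = {"lamp", "bright", "signal", "green", "sun"}
print(is_compositional(mwe_ctx, head_ctx))  # 2 shared / 8 total = 0.25 -> True
```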
Unsupervised parameter tuning
The best performing distributional similarity measure is an.
distributional similarity is mentioned in 4 sentences in this paper.
Biemann, Chris and Choudhury, Monojit and Mukherjee, Animesh
Abstract
We study the global topology of the syntactic and semantic distributional similarity networks for English through the technique of spectral analysis.
Introduction
An alternative, but equally popular, visualization of distributional similarity is through graphs or networks, where words are represented as nodes and weighted edges indicate the extent of distributional similarity between them.
Introduction
intriguing question, whereby we construct the syntactic and semantic distributional similarity networks (DSNs) and analyze their spectra to understand their global topology.
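As a sketch of what spectral analysis of a DSN involves (a toy construction, not the authors' pipeline): build a weighted adjacency matrix by linking word pairs whose distributional similarity exceeds a cutoff, then examine the eigenvalues of that matrix.

```python
import numpy as np

def dsn_adjacency(words, sim, cutoff=0.3):
    """Adjacency matrix of the distributional similarity network (DSN):
    nodes are words, weighted edges join pairs with similarity > cutoff."""
    n = len(words)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = sim(words[i], words[j])
            if s > cutoff:
                A[i, j] = A[j, i] = s
    return A

def spectrum(A):
    """Eigenvalues of the symmetric adjacency matrix, sorted descending;
    the few largest ones reflect the network's dominant cluster structure."""
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# Toy usage with a made-up similarity table:
words = ["cat", "dog", "car", "truck"]
pairs = {("cat", "dog"): 0.8, ("car", "truck"): 0.7, ("dog", "car"): 0.1}
sim = lambda a, b: pairs.get((a, b), pairs.get((b, a), 0.0))
print(spectrum(dsn_adjacency(words, sim)))  # two clusters -> two large eigenvalues
```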
distributional similarity is mentioned in 3 sentences in this paper.