Index of papers in Proc. ACL 2010 that mention
  • vector space
Chen, Boxing and Foster, George and Kuhn, Roland
Abstract
The sense similarity scores are computed by using the vector space model.
Conclusions and Future Work
In this paper, we have proposed an approach that uses the vector space model to compute the sense similarity scores.
Introduction
Given two terms to be compared, one first extracts various features for each term from its contexts in a corpus and forms a vector space model (VSM); then, one computes their similarity using similarity functions.
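A minimal sketch of this two-step pipeline (illustrative only; the toy corpus, window size, and raw-count features are our assumptions, not the paper's setup):

```python
# Build context-window feature vectors for two terms, then compare
# them with a similarity function (cosine). All choices here (window
# size, raw counts, toy corpus) are illustrative assumptions.
from collections import Counter
import math

def context_vector(term, corpus, window=2):
    """Count words co-occurring with `term` within +/- `window` tokens."""
    vec = Counter()
    for sent in corpus:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            if tok == term:
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        vec[tokens[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
print(cosine(context_vector("cat", corpus), context_vector("dog", corpus)))
```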
Introduction
Use of the vector space model to compute sense similarity has also been adapted to the multilingual condition, based on the assumption that two terms with similar meanings often occur in comparable contexts across languages.
Similarity Functions
4.2 Vector Space Mapping
Similarity Functions
A common way to calculate semantic similarity is by vector space cosine distance; we will also use this similarity function.
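For reference, the cosine measure over feature vectors $\vec{u}$ and $\vec{v}$ is:

```latex
\cos(\vec{u},\vec{v}) \;=\; \frac{\vec{u}\cdot\vec{v}}{\lVert\vec{u}\rVert\,\lVert\vec{v}\rVert}
\;=\; \frac{\sum_i u_i v_i}{\sqrt{\sum_i u_i^2}\;\sqrt{\sum_i v_i^2}}
```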
Similarity Functions
Fung (1998) and Rapp (1999) map the vector one-dimension-to-one-dimension (a context word is a dimension in each vector space) from one language to another via an initial bilingual dictionary.
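A sketch of this dictionary-based mapping (the seed dictionary, vectors, and function are hypothetical illustrations, not Fung's or Rapp's actual resources):

```python
# Each context word is one dimension; a seed bilingual dictionary
# renames source-language dimensions into target-language dimensions
# so vectors from the two languages become directly comparable.
seed_dict = {"maison": "house", "chat": "cat"}  # hypothetical seed entries

def map_vector(src_vec, seed_dict):
    """Translate each dimension of a source-language context vector;
    dimensions without a dictionary entry are dropped."""
    return {seed_dict[w]: c for w, c in src_vec.items() if w in seed_dict}

fr_vec = {"maison": 3, "chat": 1, "velo": 2}
print(map_vector(fr_vec, seed_dict))  # {'house': 3, 'cat': 1}
```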
vector space is mentioned in 9 sentences in this paper.
Jurgens, David and Stevens, Keith
Benchmarks
w₂ is the k-th most-similar word to w₁ in the vector space.
The S-Space Framework
Document-based models divide a corpus into discrete documents and construct the vector space from word frequencies in the documents.
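A minimal sketch of a document-based construction, where each word's vector is its frequency across documents (toy corpus and raw counts are our assumptions):

```python
from collections import Counter

docs = ["the cat sat", "the dog barked", "the cat purred"]  # toy corpus

def doc_vectors(docs):
    """One dimension per document; entries are word frequencies."""
    vocab = {w for d in docs for w in d.split()}
    counts = [Counter(d.split()) for d in docs]
    return {w: [c[w] for c in counts] for w in vocab}

print(doc_vectors(docs)["cat"])  # [1, 0, 1]
```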
The S-Space Framework
Co-occurrence models build the vector space using the distribution of co-occurring words in a context, which is typically defined as a region around a word or paths rooted in a parse tree.
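And a corresponding sketch of a co-occurrence construction with a fixed window as the context region (window size and corpus are illustrative choices; parse-tree contexts would additionally require a parser):

```python
from collections import defaultdict

def cooccurrence_matrix(sents, window=2):
    """Count co-occurrences within +/- `window` tokens of each word."""
    mat = defaultdict(lambda: defaultdict(int))
    for sent in sents:
        toks = sent.split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    mat[w][toks[j]] += 1
    return mat

mat = cooccurrence_matrix(["the cat sat on the mat"])
print(dict(mat["cat"]))  # neighbours of "cat" within the window
```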
The S-Space Framework
WSI models also use co-occurrence, but additionally attempt to discover distinct word senses while building the vector space.
Word Space Models
Figure 1 illustrates the shared algorithmic structure of all the approaches, which is divided into four components: corpus processing, context selection, feature extraction and global vector space operations.
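A hedged skeleton of that four-component structure (the function names and signatures below are ours for illustration, not the S-Space API):

```python
def build_word_space(corpus,
                     process_corpus,     # e.g. tokenization, stemming
                     select_contexts,    # which contexts count for each word
                     extract_features,   # which tokens become dimensions
                     global_operation):  # e.g. SVD over the whole space
    """Chain the four components shared by all the approaches."""
    tokens = process_corpus(corpus)
    contexts = select_contexts(tokens)
    space = extract_features(contexts)
    return global_operation(space)
```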
Word Space Models
Feature extraction determines the dimensions of the vector space by selecting which tokens in the context will count as features.
Word Space Models
Global vector space operations are applied to the entire space once the initial word features have been computed.
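One widely used global operation is dimensionality reduction with a truncated SVD (as in LSA); a numpy sketch on toy data:

```python
import numpy as np

space = np.random.rand(100, 500)  # word-by-feature matrix (toy data)
U, s, Vt = np.linalg.svd(space, full_matrices=False)
k = 10                            # assumed target dimensionality
reduced = U[:, :k] * s[:k]        # words projected into the reduced space
print(reduced.shape)              # (100, 10)
```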
vector space is mentioned in 9 sentences in this paper.
Thater, Stefan and Fürstenau, Hagen and Pinkal, Manfred
Experiments: Ranking Paraphrases
To compute the vector space, we consider only a subset of the complete set of dependency triples extracted from the parsed Gigaword corpus.
Introduction
We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998).
Introduction
For the problem at hand, the use of second-order vectors alleviates the sparseness problem, and enables the definition of vector space transformations that make the distributional information attached to words in different syntactic positions compatible.
The model
Assuming a set W of words and a set R of dependency relation labels, we consider a Euclidean vector space $V_1$ spanned by the set of orthonormal basis vectors $\{\vec{e}_{r,w'} \mid r \in R,\, w' \in W\}$, i.e., a vector space whose dimensions correspond to pairs of a relation and a word.
The model
In this vector space we define the first-order vector [w] of a word w as follows:
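The extracted sentence ends before the definition itself; a plausible reconstruction from the setup above, assuming $\omega(w, r, w')$ is a weighting function (e.g., based on co-occurrence counts or PMI) for the dependency triple $(w, r, w')$:

```latex
[w] \;=\; \sum_{r \in R} \sum_{w' \in W} \omega(w, r, w')\, \vec{e}_{r,w'}
```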
The model
We further consider a similarly defined vector space $V_2$, spanned by an orthonormal basis $\{\vec{e}_{r,r',w'} \mid r, r' \in R,\, w' \in W\}$.
vector space is mentioned in 6 sentences in this paper.
Rudolph, Sebastian and Giesbrecht, Eugenie
Compositionality and Matrices
A great variety of linguistic models are subsumed by this general idea, ranging from purely symbolic approaches (like type systems and categorial grammars) to statistical models (like vector space and word space models).
Introduction
In computational linguistics and information retrieval, Vector Space Models (Salton et al., 1975) and their variations — such as Word Space Models (Schütze, 1993), Hyperspace Analogue to Language (Lund and Burgess, 1996), or Latent Semantic Analysis (Deerwester et al., 1990) — have become a mainstream paradigm for text representation.
Introduction
Vector Space Models (VSMs) have been empirically justified by results from cognitive science (Gärdenfors, 2000).
Related Work
Widdows (2008) proposes a number of more advanced vector operations well-known from quantum mechanics, such as tensor product and convolution, to model composition in vector spaces.
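A numpy sketch of the two composition operations mentioned, applied to toy vectors (circular convolution is computed via FFT, as in holographic reduced representations):

```python
import numpy as np

u = np.random.rand(4)
v = np.random.rand(4)

tensor = np.outer(u, v)  # composition by tensor (outer) product
conv = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))  # circular convolution

print(tensor.shape, conv.shape)  # (4, 4) (4,)
```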
vector space is mentioned in 4 sentences in this paper.