Index of papers in Proc. ACL 2009 that mention
  • feature vector
Lin, Dekang and Wu, Xiaoyun
Distributed K-Means clustering
Given a set of elements represented as feature vectors and a number, k, of desired clusters, the K-Means algorithm consists of the following steps:
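The steps the excerpt refers to are the standard K-Means loop: pick k initial centroids, assign each vector to its nearest centroid, then recompute each centroid as the mean of its cluster. A minimal single-machine sketch of that loop (not the paper's distributed implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain single-machine K-Means over equal-length feature vectors."""
    rng = random.Random(seed)
    # step 1: pick k distinct initial centroids from the data
    centroids = [list(p) for p in rng.sample(points, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # step 2: assign each vector to its nearest centroid (squared Euclidean)
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # step 3: recompute each centroid as the mean of its cluster
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, clusters
```

The distributed version in the paper parallelizes the assignment step across machines; the per-cluster mean is then an aggregation over partial sums.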
Distributed K-Means clustering
Before describing our parallel implementation of the K-Means algorithm, we first describe the phrases to be clustered and how their feature vectors are constructed.
Distributed K-Means clustering
Following previous approaches to distributional clustering of words, we represent the contexts of a phrase as a feature vector.
feature vector is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Kotlerman, Lili and Dagan, Ido and Szpektor, Idan and Zhitomirsky-Geffet, Maayan
A Statistical Inclusion Measure
Amongst these features, those found in v’s feature vector are termed included features.
A Statistical Inclusion Measure
In preliminary data analysis of pairs of feature vectors, which correspond to a known set of valid and invalid expansions, we identified the following desired properties for a distributional inclusion measure.
A Statistical Inclusion Measure
In our case the feature vector of the expanded word is analogous to the set of all relevant documents while tested features correspond to retrieved documents.
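Pushing the IR analogy one step: treating the expanded word's features as the "relevant" set and checking how much of their weight also appears in the other word's vector yields a precision-style score. A minimal sketch of such a weighted inclusion measure — illustrative only, not the paper's exact statistical measure:

```python
def weighted_inclusion(fv_u, fv_v):
    """Precision-style inclusion: fraction of u's total feature weight
    carried by features that also appear in v's vector.
    fv_u, fv_v: dicts mapping feature -> weight."""
    total = sum(fv_u.values())
    if total == 0:
        return 0.0
    covered = sum(w for f, w in fv_u.items() if f in fv_v)
    return covered / total
```

The score is 1.0 when every feature of u occurs in v (full inclusion) and falls toward 0 as u's heavily weighted features go uncovered.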
Background
First, a feature vector is constructed for each word by collecting context words as features.
Background
where FV_x is the feature vector of a word x and w_x(f) is the weight of the feature f in that word’s vector, set to their pointwise mutual information.
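The weighting scheme described here, pointwise mutual information between a word and each of its context features, can be sketched over raw co-occurrence counts as follows (clipping negative PMI to zero is a common convention and may differ from the paper):

```python
import math
from collections import Counter

def pmi_vectors(cooc):
    """Turn word -> Counter(feature -> count) co-occurrence counts into
    PMI-weighted feature vectors.

    PMI(w, f) = log( P(w, f) / (P(w) * P(f)) )
              = log( n(w, f) * N / (n(w) * n(f)) )
    """
    total = sum(sum(c.values()) for c in cooc.values())        # N
    word_tot = {w: sum(c.values()) for w, c in cooc.items()}   # n(w)
    feat_tot = Counter()                                       # n(f)
    for c in cooc.values():
        feat_tot.update(c)
    vecs = {}
    for w, counts in cooc.items():
        vecs[w] = {}
        for f, n in counts.items():
            pmi = math.log((n * total) / (word_tot[w] * feat_tot[f]))
            vecs[w][f] = max(pmi, 0.0)  # keep positive PMI only (assumption)
    return vecs
```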
Background
Extending this rationale to the textual entailment setting, Geffet and Dagan (2005) expected that if the meaning of a word u entails that of v then all its prominent context features (under a certain notion of “prominence”) would be included in the feature vector of v as well.
Conclusions and Future work
This paper advocates the use of directional similarity measures for lexical expansion, and potentially for other tasks, based on distributional inclusion of feature vectors.
Evaluation and Results
Feature vectors were created by parsing the Reuters RCV1 corpus and taking the words related to each term through a dependency relation as its features (coupled with the relation name and direction, as in (Lin, 1998)).
feature vector is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Mintz, Mike and Bills, Steven and Snow, Rion and Jurafsky, Daniel
Architecture
If a sentence contains two entities and those entities are an instance of one of our Freebase relations, features are extracted from that sentence and are added to the feature vector for the relation.
Architecture
In training, the features for identical tuples (relation, entity1, entity2) from different sentences are combined, creating a richer feature vector.
Architecture
This time, every pair of entities appearing together in a sentence is considered a potential relation instance, and whenever those entities appear together, features are extracted on the sentence and added to a feature vector for that entity pair.
Implementation
Towards this end, we build a feature vector in the training phase for an ‘unrelated’ relation by randomly selecting entity pairs that do not appear in any Freebase relation and extracting features for them.
Implementation
Our classifier takes as input an entity pair and a feature vector, and returns a relation name and a confidence score based on the probability of the entity pair belonging to that relation.
Introduction
For each pair of entities, we aggregate the features from the many different sentences in which that pair appeared into a single feature vector, allowing us to provide our classifier with more information, resulting in more accurate labels.
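The aggregation step described here, pooling the features from every sentence in which an entity pair co-occurs into one vector per pair, can be sketched as follows (entity names and feature strings below are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def aggregate_features(observations):
    """Pool per-sentence features into a single feature vector per entity pair.

    observations: iterable of ((entity1, entity2), [feature, ...]),
    one entry per sentence in which the pair co-occurs."""
    vectors = defaultdict(Counter)
    for pair, feats in observations:
        vectors[pair].update(feats)  # richer vector than any single sentence
    return vectors
```

At classification time the pooled vector for a pair carries evidence from all of its sentences at once, which is the source of the accuracy gain the excerpt describes.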
feature vector is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Pervouchine, Vladimir and Li, Haizhou and Lin, Bo
Transliteration alignment techniques
Withgott and Chen (1993) define a feature vector of phonological descriptors for English sounds.
Transliteration alignment techniques
We extend the idea by defining a 21-element binary feature vector for each English and Chinese phoneme.
Transliteration alignment techniques
Each element of the feature vector represents presence or absence of a phonological descriptor that differentiates various kinds of phonemes, e.g.
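A toy version of such a binary vector, with a hypothetical handful of descriptors standing in for the paper's 21:

```python
# Hypothetical subset of phonological descriptors; the paper defines 21.
DESCRIPTORS = ["consonant", "voiced", "nasal", "front", "high"]

def feature_vector(present):
    """Binary vector: 1 if the phoneme carries the descriptor, else 0."""
    return [1 if d in present else 0 for d in DESCRIPTORS]

# e.g. a voiced nasal consonant such as /m/
m = feature_vector({"consonant", "voiced", "nasal"})
```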
feature vector is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Chang, Kai-min K. and Cherkassky, Vladimir L. and Mitchell, Tom M. and Just, Marcel Adam
Brain Imaging Experiments on Adjective-Noun Comprehension
The regression model examined to what extent the semantic feature vectors (explanatory variables) can account for the variation in neural activity (response variable) across the 12 stimuli.
Brain Imaging Experiments on Adjective-Noun Comprehension
Table 5 also supports our hypothesis that the multiplicative model should outperform the additive model, based on the assumption that adjectives are used to emphasize particular semantic features that will already be represented in the semantic feature vector of the noun.
Brain Imaging Experiments on Adjective-Noun Comprehension
We are currently exploring the infinite latent semantic feature model (ILFM; Griffiths & Ghahramani, 2005), which assumes a nonparametric Indian Buffet Process prior on the binary feature vector and models neural activation with a linear Gaussian model.
feature vector is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Wu, Hua and Wang, Haifeng
Introduction
A regression learning method is used to infer a function that maps a feature vector (which measures the similarity of a translation to the pseudo references) to a score that indicates the quality of the translation.
Translation Selection
The regression objective is to infer a function that maps a feature vector (which measures the similarity of a translation from one system to the pseudo references) to a score that indicates the quality of the translation.
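A minimal sketch of this regression setup: fit a linear function from similarity features to a quality score by ordinary least squares via the normal equations (the paper's actual regression learner may differ):

```python
def fit_linear(X, y):
    """Least-squares fit of scores y to feature vectors X (with bias term),
    solving the normal equations A w = b, where A = X^T X and b = X^T y."""
    X = [[1.0] + list(row) for row in X]  # prepend bias feature
    d = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(d)] for i in range(d)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(d)]
    # Gaussian elimination with partial pivoting on the d x d system
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d
    for i in reversed(range(d)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, d))) / A[i][i]
    return w

def predict(w, x):
    """Score a new feature vector x with learned weights w."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
```

Here X would hold one similarity-to-pseudo-references feature vector per translation and y the corresponding quality scores; the same shape of model fits the regression described in the brain-imaging excerpts above.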
Translation Selection
The input sentence is represented as a feature vector X, which is extracted from the input sentence and from the comparisons against the pseudo references.
feature vector is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: