Index of papers in Proc. ACL 2008 that mention
  • feature vector
Haghighi, Aria and Liang, Percy and Berg-Kirkpatrick, Taylor and Klein, Dan
Bilingual Lexicon Induction
Then, for each matched pair of word types (i, j) ∈ m, we need to generate the observed feature vectors of the source and target word types, f_S(s_i) ∈ R^{d_S} and f_T(t_j) ∈ R^{d_T}.
Bilingual Lexicon Induction
The feature vector of each word type is computed from the appropriate monolingual corpus and summarizes the word’s monolingual characteristics; see section 5 for details and figure 2 for an illustration.
Bilingual Lexicon Induction
Specifically, to generate the feature vectors, we first generate a random concept z_{i,j} ~ N(0, I_d), where I_d is the d × d identity matrix.
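As an illustration of this generative step, the NumPy sketch below draws a shared latent concept and emits noisy source and target feature vectors from it; the projection matrices W_S, W_T and the noise scale are toy stand-ins, not the paper's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_S, d_T = 4, 10, 12            # latent and observed dimensions (toy sizes)
W_S = rng.normal(size=(d_S, d))    # illustrative source projection, not learned
W_T = rng.normal(size=(d_T, d))    # illustrative target projection, not learned

# For one matched pair (i, j): draw a shared latent concept z ~ N(0, I_d),
# then emit noisy source and target feature vectors from it.
z = rng.normal(size=d)
f_S = W_S @ z + 0.1 * rng.normal(size=d_S)
f_T = W_T @ z + 0.1 * rng.normal(size=d_T)
print(f_S.shape, f_T.shape)        # (10,) (12,)
```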
Features
For a concrete example of a word type to feature vector mapping, see figure 2.
Inference
Since d_S and d_T can be quite large in practice and often greater than |m|, we use Cholesky decomposition to re-represent the feature vectors as |m|-dimensional vectors with the same dot products, which is all that CCA depends on.
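Since CCA only depends on dot products, the trick can be sketched as follows: take the Gram matrix of the |m| feature vectors and use the rows of its Cholesky factor as the re-represented vectors. A minimal NumPy sketch with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 1000                    # |m| matched pairs, d-dimensional features (d >> |m|)
F = rng.normal(size=(n, d))       # original feature vectors, one per row

G = F @ F.T                                    # n x n Gram matrix of all dot products
L = np.linalg.cholesky(G + 1e-9 * np.eye(n))   # G = L L^T (tiny jitter for stability)

# The rows of L are n-dimensional vectors whose pairwise dot products
# match those of the original d-dimensional vectors, which is all CCA uses.
assert np.allclose(L @ L.T, G)
```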
Introduction
In our method, we represent each language as a monolingual lexicon (see figure 2): a list of word types characterized by monolingual feature vectors, such as context counts, orthographic substrings, and so on (section 5).
feature vector is mentioned in 8 sentences in this paper.
Nivre, Joakim and McDonald, Ryan
Integrated Models
by a k-dimensional feature vector f : X → R^k.
Integrated Models
In the feature-based integration we simply extend the feature vector for one model, called the base model, with a certain number of features generated by the other model, which we call the guide model in this context.
Integrated Models
The additional features will be referred to as guide features, and the version of the base model trained with the extended feature vector will be called the guided model.
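A minimal sketch of feature-based integration, with hypothetical feature names (the paper derives its guide features from the other parser's predicted dependency structures):

```python
# Sketch of feature-based integration; the feature names are hypothetical.

def guide_features(guide_prediction):
    """Features read off the guide model's output for this instance,
    e.g. the head it predicted for the current token."""
    return {f"guide_head={guide_prediction}": 1.0}

def guided_instance(base_features, guide_prediction):
    # The guided model is the base model retrained on its own features
    # extended with the guide features.
    extended = dict(base_features)
    extended.update(guide_features(guide_prediction))
    return extended

base = {"word=bank": 1.0, "pos=NN": 1.0}
print(guided_instance(base, "deposit"))
```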
feature vector is mentioned in 6 sentences in this paper.
Davidov, Dmitry and Rappoport, Ari
Relationship Classification
To do this, we construct a feature vector from each training pair, where each feature is the HITS measure corresponding to a single pattern cluster.
Relationship Classification
Once we have feature vectors, we can use a variety of classifiers (we used those in Weka) to construct a model and to evaluate it on the test set.
Relationship Classification
If we are not given any training set, it is still possible to separate the different relationship types by grouping the feature vectors of Section 4.3.2 into clusters.
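As an illustration of both the supervised and unsupervised uses of these vectors, the sketch below uses scikit-learn in place of Weka, with random toy values standing in for HITS scores:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Toy stand-ins: one row per word pair, one column per pattern cluster,
# each entry playing the role of a HITS score.
X = rng.random((40, 8))
y = rng.integers(0, 2, size=40)   # relationship labels, if a training set exists

clf = LogisticRegression().fit(X, y)   # supervised: any off-the-shelf classifier
print(clf.predict(X[:3]))

# Unsupervised alternative: group the feature vectors into clusters instead.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```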
feature vector is mentioned in 4 sentences in this paper.
Yang, Xiaofeng and Su, Jian and Lang, Jun and Tan, Chew Lim and Liu, Ting and Li, Sheng
Introduction
It is impractical to enumerate all the mentions in an entity and record their information in a single feature vector, as it would make the feature space too large.
Introduction
Even worse, the number of mentions in an entity is not fixed, which would result in variable-length feature vectors and cause trouble for standard machine learning algorithms.
Modelling Coreference Resolution
As an entity may contain more than one candidate and the number is not fixed, it is impractical to enumerate all the mentions in an entity and put their properties into a single feature vector.
Related Work
In the system, a training or testing instance is formed for two mentions in question, with a feature vector describing their properties and relationships.
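A toy version of such a mention-pair instance; the features below are hypothetical illustrations, not the actual feature set used in that work:

```python
# Illustrative mention-pair instance construction (feature names hypothetical).

def pair_instance(antecedent, anaphor):
    return {
        "same_head": float(antecedent["head"] == anaphor["head"]),
        "gender_agree": float(antecedent["gender"] == anaphor["gender"]),
        "number_agree": float(antecedent["number"] == anaphor["number"]),
        "sentence_distance": float(anaphor["sent"] - antecedent["sent"]),
    }

m1 = {"head": "company", "gender": "neuter", "number": "sg", "sent": 3}
m2 = {"head": "it",      "gender": "neuter", "number": "sg", "sent": 4}
print(pair_instance(m1, m2))
```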
feature vector is mentioned in 4 sentences in this paper.
Bergsma, Shane and Lin, Dekang and Goebel, Randy
Evaluation
We represent feature vectors exactly as described in Section 3.3.
Methodology
3.3 Feature Vector Representation
Methodology
We can achieve these aims by ordering the counts in a feature vector, and using a labelled set of training examples to learn a classifier that optimally weights the counts.
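A minimal sketch of that idea, assuming log-transformed counts, illustrative filler words, and logistic regression as the classifier that learns the count weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fix an order for the count features so every example's vector lines up
# (the fillers here are illustrative, not the paper's actual patterns).
FILLERS = ["it", "they", "them"]

def count_vector(counts):
    # Log counts are a common choice for web-scale statistics (an assumption here).
    return [np.log1p(counts.get(f, 0)) for f in FILLERS]

X = np.array([count_vector({"it": 1500, "they": 3}),
              count_vector({"it": 2, "they": 900, "them": 40})])
y = np.array([1, 0])   # labelled training examples

clf = LogisticRegression().fit(X, y)   # learns weights for the ordered counts
print(clf.coef_)
```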
feature vector is mentioned in 3 sentences in this paper.
Feng, Yansong and Lapata, Mirella
BBC News Database
Secondly, the generation of feature vectors is modeled directly, so there is no need for quantization.
BBC News Database
where N_{v_I} is the number of regions in image I, v_r the feature vector for region r in image I, n_{s_v} the number of regions in the image of latent variable s, v_i the feature vector for region i in s's image, k the dimension of the image feature vectors, and Σ the feature covariance matrix.
BBC News Database
According to equation (3), a Gaussian kernel is fit to every feature vector v_i corresponding to region i in the image of the latent variable s.
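In other words, the density is a kernel estimate: one Gaussian centred on each region's feature vector, averaged over regions. A small sketch under toy assumptions (random region vectors, identity covariance):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
k = 4                                  # feature dimension
regions_s = rng.normal(size=(6, k))    # feature vectors v_i of the latent image's regions
Sigma = np.eye(k)                      # feature covariance matrix (identity stand-in)

def p_region_given_s(v, region_vectors, cov):
    # One Gaussian kernel per region vector v_i, averaged:
    # p(v | s) = (1/n) * sum_i N(v; v_i, cov)
    return np.mean([multivariate_normal.pdf(v, mean=vi, cov=cov)
                    for vi in region_vectors])

print(p_region_given_s(rng.normal(size=k), regions_s, Sigma))
```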
feature vector is mentioned in 3 sentences in this paper.
Nakov, Preslav and Hearst, Marti A.
Relational Similarity Experiments
Given a verbal analogy example, we build six feature vectors — one for each of the six word pairs.
Relational Similarity Experiments
For the evaluation, we created a feature vector for each head-modifier pair and performed leave-one-out cross-validation: we held out one example for testing and trained on the remaining 599, repeating this procedure 600 times so that each example was used for testing exactly once.
Relational Similarity Experiments
We calculated the similarity between the feature vector of the testing example and each of the training examples’ vectors.
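That procedure amounts to leave-one-out nearest-neighbour classification over the feature vectors. A sketch, with cosine similarity standing in for the paper's similarity measure:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def loo_predict(X, y):
    # Leave-one-out: label each example by its most similar other example.
    preds = []
    for i in range(len(X)):
        sims = [cosine(X[i], X[j]) if j != i else -np.inf for j in range(len(X))]
        preds.append(y[int(np.argmax(sims))])
    return np.array(preds)

rng = np.random.default_rng(4)
X = rng.random((10, 6))                  # toy feature vectors
y = rng.integers(0, 3, size=10)          # toy relation labels
print((loo_predict(X, y) == y).mean())   # LOO accuracy on toy data
```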
feature vector is mentioned in 3 sentences in this paper.