Index of papers in Proc. ACL 2010 that mention
  • feature space
Sun, Jun and Zhang, Min and Tan, Chew Lim
Bilingual Tree Kernels
In order to compute the dot product of the feature vectors in the exponentially high-dimensional feature space, we introduce the tree kernel functions as follows:
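The kernel functions themselves are not reproduced in this index. As a rough illustration of how a convolution tree kernel evaluates that dot product without ever materializing the exponential subtree feature space, here is a minimal Python sketch in the style of the classic Collins–Duffy subset-tree kernel; the `Node` class, the decay parameter `lam`, and the toy trees are illustrative and not taken from the paper.

```python
# Minimal sketch of a convolution (subset-tree) tree kernel in the style of
# Collins & Duffy. Names and the decay parameter are illustrative; the paper
# defines its own kernels over bilingual substructures.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def _production(n):
    # A node's production: its label plus the labels of its children.
    return (n.label, tuple(c.label for c in n.children))

def _delta(n1, n2, lam, cache):
    # Weighted count of common subtrees rooted at n1 and n2.
    key = (id(n1), id(n2))
    if key in cache:
        return cache[key]
    if _production(n1) != _production(n2):
        cache[key] = 0.0
        return 0.0
    if not n1.children:                      # matching leaves / pre-terminals
        cache[key] = lam
        return lam
    prod = lam
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + _delta(c1, c2, lam, cache)
    cache[key] = prod
    return prod

def tree_kernel(t1, t2, lam=0.4):
    """Dot product in the implicit (exponentially large) subtree feature space."""
    def nodes(t):
        stack, out = [t], []
        while stack:
            n = stack.pop()
            out.append(n)
            stack.extend(n.children)
        return out
    cache = {}
    return sum(_delta(a, b, lam, cache) for a in nodes(t1) for b in nodes(t2))

# Tiny usage example on two toy parse trees.
t1 = Node("S", [Node("NP", [Node("he")]), Node("VP", [Node("runs")])])
t2 = Node("S", [Node("NP", [Node("he")]), Node("VP", [Node("sleeps")])])
print(tree_kernel(t1, t2))
```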
Bilingual Tree Kernels
As a result, we propose the dependent Bilingual Tree kernel (dBTK) to jointly evaluate the similarity across subtree pairs by enlarging the feature space to the Cartesian product of the two substructure sets.
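Under one natural reading, indexing features by pairs of source and target substructures makes the joint feature space the tensor product of the two monolingual spaces, so the joint kernel factorizes into a product of the two monolingual kernels. A hedged sketch of that standard identity (the paper's exact dBTK definition may combine the spaces differently):

```latex
% Assume monolingual feature maps \phi_s, \phi_t and the joint map
% \Phi(s, t) = \phi_s(s) \otimes \phi_t(t) over substructure pairs.
% Then the inner product factorizes:
\big\langle \Phi(s_1, t_1), \Phi(s_2, t_2) \big\rangle
  = \langle \phi_s(s_1), \phi_s(s_2) \rangle \,
    \langle \phi_t(t_1), \phi_t(t_2) \rangle
  = K_s(s_1, s_2)\, K_t(t_1, t_2)
```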
Bilingual Tree Kernels
Here we verify the correctness of the kernel by directly constructing the feature space for the inner product.
Introduction
Both kernels can be utilized within different feature spaces using various representations of the substructures.
Substructure Spaces for BTKs
Given feature spaces defined in the last two sections, we propose a 2-phase subtree alignment model as follows:
Substructure Spaces for BTKs
Feature Space | P | R | F (table header)
Substructure Spaces for BTKs
Feature Space | P | R | F (table header)
feature space is mentioned in 13 sentences in this paper.
Tomasoni, Mattia and Huang, Minlie
Discussion and Future Directions
The Quality assessing component itself could be built as a module that can be adjusted to the kind of Social Media in use; the creation of customized Quality feature spaces would make it possible to handle different sources of UGC (forums, collaborative authoring websites such as Wikipedia, blogs etc.).
Discussion and Future Directions
A great obstacle is the lack of systematically available high-quality training examples: a tentative solution could be to make use of clustering algorithms in the feature space; high- and low-quality clusters could then be labeled by comparison with examples of virtuous behavior (such as Wikipedia’s Featured Articles).
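The clustering idea can be prototyped directly: cluster unlabeled answers in the quality feature space, then name the clusters by checking where known high-quality exemplars land. A minimal scikit-learn sketch; the feature matrix X and the exemplar vectors are placeholders, not the paper's data:

```python
# Hypothetical sketch: weakly label answers by clustering in the quality
# feature space, then name clusters via known high-quality exemplars.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 12))             # placeholder: one quality feature vector per answer
exemplars = rng.random((10, 12)) + 1  # placeholder: vectors of known high-quality examples

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The cluster that attracts most exemplars is treated as the "high quality" one.
high_cluster = np.bincount(km.predict(exemplars), minlength=2).argmax()
labels = np.where(km.labels_ == high_cluster, "high", "low")
print(dict(zip(*np.unique(labels, return_counts=True))))
```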
Experiments
To demonstrate it, we conducted a set of experiments on the original unfiltered dataset to establish whether the feature space Π was powerful enough to capture the quality of answers; our specific objective was to estimate the
Related Work
(2008) which inspired us in the design of the Quality feature space presented in Section 2.1.
The summarization framework
feature space to capture the following syntactic, behavioral and statistical properties:
The summarization framework
The features mentioned above determined a space Π; an answer a, in such feature space, assumed the vectorial form:
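The concrete feature list and the vector equation are not reproduced in this index. Purely as an illustration of the kind of mapping described, here is a hypothetical sketch that turns an answer into a small vector of syntactic, behavioral, and statistical features; every feature choice below is invented, not the paper's:

```python
# Hypothetical sketch: mapping an answer into a quality feature vector in a
# space like the paper's Π. The specific features are illustrative only.
import numpy as np

def quality_vector(answer_text, author_best_answer_ratio, votes):
    tokens = answer_text.split()
    return np.array([
        len(tokens),                                        # statistical: length
        sum(len(t) for t in tokens) / max(len(tokens), 1),  # statistical: avg word length
        answer_text.count("?"),                             # syntactic proxy: question marks
        author_best_answer_ratio,                           # behavioral: author track record
        votes,                                              # behavioral: community feedback
    ], dtype=float)

a = quality_vector("Install the package and restart the service.", 0.42, 7)
print(a)
```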
feature space is mentioned in 6 sentences in this paper.
Croce, Danilo and Giannone, Cristina and Annesi, Paolo and Basili, Roberto
Abstract
The resulting argument classification model promotes a simpler feature space that limits the potential overfitting effects.
Introduction
The model adopts a simple feature space by relying on a limited set of grammatical properties, thus reducing its learning capacity.
Introduction
As we will see, the accuracy reachable through a restricted feature space is still quite close to the state of the art, but interestingly the performance drops in out-of-domain tests are avoided.
feature space is mentioned in 3 sentences in this paper.
Prettenhofer, Peter and Stein, Benno
Cross-Language Structural Correspondence Learning
MASK(x, p_l) is a function that returns a copy of x where the components associated with the two words in p_l are set to zero, which is equivalent to removing these words from the feature space.
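The masking operation is straightforward to state concretely. A minimal numpy sketch, assuming a bag-of-words vector x indexed by a vocabulary and a pivot pair p_l consisting of one source-language and one target-language word; identifiers are illustrative, not the paper's code:

```python
# Sketch of MASK(x, p_l): zero out the components of x belonging to the two
# pivot words, i.e. remove them from the feature space for that example.
import numpy as np

def mask(x, pivot_pair, vocab_index):
    """Return a copy of x with the two pivot-word dimensions set to zero."""
    x_masked = x.copy()
    for word in pivot_pair:              # pivot pair = (source word, target word)
        j = vocab_index.get(word)
        if j is not None:
            x_masked[j] = 0.0
    return x_masked

vocab_index = {"excellent": 0, "ausgezeichnet": 1, "boring": 2}
x = np.array([1.0, 0.0, 2.0])
print(mask(x, ("excellent", "ausgezeichnet"), vocab_index))   # -> [0. 0. 2.]
```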
Cross-Language Structural Correspondence Learning
Since (θᵀv)ᵀ = vᵀθ, it follows that this view of CL-SCL corresponds to the induction of a new feature space given by Equation 2.
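The identity being invoked is the elementary fact that a linear model applied to projected inputs is a linear model in the original space. A hedged reconstruction of the step, assuming θ is the matrix of pivot-predictor weights and v a weight vector learned in the induced space:

```latex
% If the induced representation of a document x is \theta x, then for any
% weight vector v in the induced space
%   v^{\top}(\theta x) = (\theta^{\top} v)^{\top} x = (v^{\top}\theta)\, x,
% so training in the induced feature space is equivalent to using the
% weights \theta^{\top} v in the original space; this is the sense in which
% (\theta^{\top} v)^{\top} = v^{\top}\theta induces a new feature space.
v^{\top}(\theta x) \;=\; (\theta^{\top} v)^{\top} x \;=\; (v^{\top}\theta)\, x
```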
Cross-Language Text Classification
I.e., documents from the training set and the test set map onto two non-overlapping regions of the feature space.
feature space is mentioned in 3 sentences in this paper.