Index of papers in Proc. ACL 2008 that mention
  • feature space
Li, Jianguo and Brew, Chris
Abstract
In this work, we develop and evaluate a wide range of feature spaces for deriving Levin-style verb classifications (Levin, 1993).
Abstract
We perform the classification experiments using Bayesian Multinomial Regression (an efficient log-linear modeling framework which we found to outperform SVMs for this task) with the proposed feature spaces.
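The excerpt above describes classifying verbs with a log-linear model over the proposed feature spaces. The sketch below is only an illustration of that general setup, using scikit-learn's multinomial LogisticRegression as a stand-in for the Bayesian Multinomial Regression package the paper actually uses; the feature dictionaries, feature names, and class labels are hypothetical.

```python
# Illustrative sketch only: a multinomial log-linear classifier over sparse
# verb feature vectors. LogisticRegression is a stand-in for the paper's
# Bayesian Multinomial Regression; all data below is hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each verb is represented by a sparse bag of features drawn from some
# feature space (e.g., co-occurrence or subcategorization-frame counts).
verbs = [
    {"frame:NP-V-NP": 3, "co:the": 5, "co:quickly": 1},
    {"frame:NP-V": 7, "co:suddenly": 2},
    {"frame:NP-V-PP": 4, "co:into": 3},
]
labels = ["class_A", "class_B", "class_B"]  # hypothetical Levin-style classes

clf = make_pipeline(
    DictVectorizer(sparse=True),        # map feature dicts to a sparse matrix
    LogisticRegression(max_iter=1000),  # L2-regularized multinomial model
)
clf.fit(verbs, labels)
print(clf.predict([{"frame:NP-V-NP": 2, "co:the": 1}]))
```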
Experiment Setup 4.1 Corpus
Since one of our primary goals is to identify a general feature space that is not specific to any class distinctions, it is of great importance to understand how the classification accuracy is affected when attempting to classify more verbs into a larger number of classes.
Integration of Syntactic and Lexical Information
ACO features integrate at least some degree of syntactic information into the feature space.
Related Work
They define a general feature space that is supposed to be applicable to all Levin classes.
Related Work
They (2007) demonstrate that the general feature space they devise achieves a rate of error reduction ranging from 48% to 88% over a chance baseline accuracy, across classification tasks of varying difficulty.
Related Work
However, they also show that their general feature space does not generally improve the classification accuracy over subcategorization frames (see table 1).
Results and Discussion
ACO, another feature set that combines syntactic and lexical information, keeps function words in the feature space to preserve syntactic information and outperforms the conventional CO on the majority of tasks.
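The ACO versus CO contrast above comes down to whether function words are kept in the co-occurrence features. The sketch below is a hedged illustration of that one design choice, not the paper's actual feature extraction; the sentence, window size, and stopword list are hypothetical.

```python
# Illustrative sketch only: co-occurrence (CO) features around a target verb,
# with an option to keep function words, as in the ACO setting above.
from collections import Counter

FUNCTION_WORDS = {"the", "a", "to", "of", "in", "that", "it"}  # toy list

def co_features(tokens, verb_index, window=3, keep_function_words=True):
    """Count words in a window around the verb; optionally drop function words."""
    lo, hi = max(0, verb_index - window), verb_index + window + 1
    context = tokens[lo:verb_index] + tokens[verb_index + 1:hi]
    if not keep_function_words:
        context = [w for w in context if w.lower() not in FUNCTION_WORDS]
    return Counter(f"co:{w.lower()}" for w in context)

tokens = "She slid the book to the edge of the table".split()
print(co_features(tokens, verb_index=1, keep_function_words=True))   # ACO-like
print(co_features(tokens, verb_index=1, keep_function_words=False))  # plain CO
```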
feature space is mentioned in 8 sentences in this paper.
Arnold, Andrew and Nallapati, Ramesh and Cohen, William W.
Abstract
We present a novel hierarchical prior structure for supervised transfer learning in named entity recognition, motivated by the common structure of feature spaces for this task across natural language data sets.
Introduction
In particular, we develop a novel prior for named entity recognition that exploits the hierarchical feature space often found in natural language domains (§1.2) and allows for the transfer of information from labeled datasets in other domains (§1.3).
Introduction
Representing feature spaces with this kind of tree, besides often coinciding with the explicit language used by common natural language toolkits (Cohen, 2004), has the added benefit of allowing a model to easily back off, or smooth, to decreasing levels of specificity.
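The excerpt above describes a tree-structured feature space in which a model can back off from specific features to coarser ancestors. The sketch below illustrates that idea under the assumption of dotted feature names as tree paths; the naming scheme is hypothetical and is not the paper's prior, only a minimal picture of hierarchical back-off.

```python
# Illustrative sketch only: treat dotted feature names as paths in a tree and
# generate back-off ("smoothed") variants by truncating the path. The naming
# scheme is a hypothetical stand-in for toolkit-style hierarchical features.

def backoff_features(feature_name):
    """Yield the feature plus every ancestor prefix, most specific first."""
    parts = feature_name.split(".")
    for depth in range(len(parts), 0, -1):
        yield ".".join(parts[:depth])

# A token-level feature and the coarser features it backs off to:
print(list(backoff_features("token.lower.word.bank")))
# ['token.lower.word.bank', 'token.lower.word', 'token.lower', 'token']
```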
feature space is mentioned in 3 sentences in this paper.