Index of papers in Proc. ACL 2009 that mention
  • feature set
Oh, Jong-Hoon and Uchimoto, Kiyotaka and Torisawa, Kentaro
Acquisition of Hyponymy Relations from Wikipedia
(2008) but LF1-LF5 and SF1-SF9 are the same as their feature set.
Acquisition of Hyponymy Relations from Wikipedia
Let us provide an overview of the feature sets used in Sumida et al.
Acquisition of Hyponymy Relations from Wikipedia
These are the feature sets used in Sumida et al.
Motivation
Since the learning settings (feature sets, feature values, training data, corpora, and so on) are usually different in two languages, the reliable part in one language may be overlapped by an unreliable part in another language.
feature set is mentioned in 6 sentences in this paper.
Druck, Gregory and Mann, Gideon and McCallum, Andrew
Experimental Comparison with Unsupervised Learning
With this feature set, the CRF model is less expressive than DMV.
Experimental Comparison with Unsupervised Learning
The CRF cannot consider valency even with the full feature set, but this is balanced by the ability to use distance.
Experimental Comparison with Unsupervised Learning
First we note that GE training using the full feature set substantially outperforms the restricted feature set, despite the fact that the same set of constraints is used for both experiments.
feature set is mentioned in 4 sentences in this paper.
Iida, Ryu and Inui, Kentaro and Matsumoto, Yuji
Machine learning-based cache model
Therefore, the intra-sentential and inter-sentential zero-anaphora resolution models are separately trained by exploiting different feature sets as shown in Table 2.
Machine learning-based cache model
Table 1: Feature set used in the cache models
Machine learning-based cache model
The feature set used in the cache model is shown in Table 1. The ‘CASE_MARKER’ feature roughly captures the salience of the local transition dealt with in Centering Theory, and is also intended to capture the global foci of a text coupled with the BEGINNING feature.
feature set is mentioned in 4 sentences in this paper.
Pado, Sebastian and Galley, Michel and Jurafsky, Dan and Manning, Christopher D.
Conclusion and Outlook
Conceptualizing MT evaluation as an entailment problem motivates the use of a rich feature set that covers, unlike almost all earlier metrics, a wide range of linguistic levels, including lexical, syntactic, and compositional phenomena.
Expt. 2: Predicting Pairwise Preferences
Feature set | Consistency (%) | System-level correlation (ρ)
Introduction
(2005)), and thus predict the quality of MT hypotheses with a rich RTE feature set.
Regression-based MT Quality Prediction
(2007) train binary classifiers on a feature set formed by a number of MT metrics.
feature set is mentioned in 4 sentences in this paper.
Zhang, Yi and Wang, Rui
Dependency Parsing with HPSG
Therefore, we extend this feature set by adding four more feature categories, which are similar to the original ones, but with the dependency relation replaced by the dependency backbone of the HPSG outputs.
Dependency Parsing with HPSG
The extended feature set is shown in Table 1.
Dependency Parsing with HPSG
The extended feature set is shown in Table 2 (the new features are listed separately).
feature set is mentioned in 4 sentences in this paper.
Zhao, Hai and Song, Yan and Kit, Chunyu and Zhou, Guodong
Dependency Parsing: Baseline
With notations defined in Table 1, a feature set as shown in Table 2 is adopted.
Dependency Parsing: Baseline
We used a large scale feature selection approach as in (Zhao et al., 2009) to obtain the feature set in Table 2.
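As an aside, greedy forward selection is one common way to arrive at such a feature set. The sketch below illustrates the general technique only, not necessarily Zhao et al.'s exact procedure; the evaluate() function (train a model with the given templates, score it on development data) is an assumed stand-in.

    # Greedy forward selection over feature templates: repeatedly add
    # the template that most improves held-out accuracy, stopping when
    # no remaining template helps. `evaluate` is a hypothetical
    # stand-in for "train with these templates, score on dev data".
    def forward_select(templates, evaluate):
        remaining = list(templates)      # don't mutate the caller's list
        selected = []
        best_score = evaluate(selected)  # baseline with no templates
        while remaining:
            # Try adding each remaining template to the current set.
            score, template = max(
                ((evaluate(selected + [t]), t) for t in remaining),
                key=lambda pair: pair[0],
            )
            if score <= best_score:      # no candidate improves dev score
                break
            best_score = score
            selected.append(template)
            remaining.remove(template)
        return selected, best_score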
Evaluation Results
The results with different feature sets are in Table 4.
Evaluation Results
Table 4: The results with different feature sets (columns: features | with p | without p)
feature set is mentioned in 4 sentences in this paper.
Galley, Michel and Manning, Christopher D.
Dependency parsing for machine translation
The three feature sets that were used in our experiments are shown in Table 2.
Dependency parsing for machine translation
It is quite similar to the McDonald (2005a) feature set, except that it does not include the set of all POS tags that appear between each candidate head-modifier pair (i, j).
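For illustration, the sketch below emits first-order dependency features of this flavor for a candidate head-modifier pair (i, j). The feature name strings are invented for this example, and the in-between POS features are deliberately omitted, matching the excerpt above.

    # Illustrative first-order dependency features for a candidate arc
    # from head index i to modifier index j. Feature names are made up;
    # McDonald et al. (2005) additionally use POS tags of all words
    # between i and j, which are omitted here as described above.
    def arc_features(words, tags, i, j):
        direction = "R" if i < j else "L"
        distance = abs(i - j)
        return [
            f"h_word={words[i]}",
            f"h_pos={tags[i]}",
            f"m_word={words[j]}",
            f"m_pos={tags[j]}",
            f"h_pos+m_pos={tags[i]}+{tags[j]}",
            f"h_word+m_word={words[i]}+{words[j]}",
            f"h_pos+m_pos+dir+dist={tags[i]}+{tags[j]}+{direction}+{distance}",
        ]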
Dependency parsing for machine translation
The primary difference between our feature sets and the ones of McDonald et al.
feature set is mentioned in 3 sentences in this paper.
Garera, Nikesh and Yarowsky, David
Corpus Details
However, stopwords were retained in the feature set, as various sociolinguistic studies have shown that the use of some of the stopwords, for instance pronouns and determiners, is correlated with age and gender.
Corpus Details
Also, only the n-grams with frequency greater than 5 were retained in the feature set, following Boulis and Ostendorf (2005).
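A minimal sketch of that preprocessing step, assuming the corpus is already tokenized (the frequency threshold of 5 comes from the excerpt; everything else is illustrative):

    from collections import Counter

    # Count word n-grams over the corpus and keep only those occurring
    # more than `min_freq` times. Stopwords are deliberately *not*
    # removed, since pronoun/determiner usage is itself a useful signal.
    def ngram_features(tokenized_docs, n=2, min_freq=5):
        counts = Counter()
        for doc in tokenized_docs:
            for k in range(len(doc) - n + 1):
                counts[tuple(doc[k:k + n])] += 1
        return {gram for gram, c in counts.items() if c > min_freq}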
Related Work
Another relevant line of work has been on the blog domain, using a bag-of-words feature set to discriminate age and gender (Schler et al., 2006; Burger and Henderson, 2006; Nowson and Oberlander, 2006).
feature set is mentioned in 3 sentences in this paper.
Huang, Jian and Taylor, Sarah M. and Smith, Jonathan L. and Fotiadis, Konstantinos A. and Giles, C. Lee
Conclusions
Future research directions include developing rich feature sets and using corpus level or external information.
Experiments
Since different feature sets, NLP tools, etc. are used in different benchmarked systems, we are also interested in comparing the proposed algorithm with different soft relational clustering variants.
Experiments
With the same feature sets and distance function, KARC-S outperforms FRC in F score by about 5%.
feature set is mentioned in 3 sentences in this paper.
Mintz, Mike and Bills, Steven and Snow, Rion and Jurafsky, Daniel
Discussion
The held-out results in Figure 2 suggest that the combination of syntactic and lexical features provides better performance than either feature set on its own.
Evaluation
At most recall levels, the combination of syntactic and lexical features offers a substantial improvement in precision over either of these feature sets on its own.
Evaluation
No feature set strongly outperforms any of the others across all relations.
feature set is mentioned in 3 sentences in this paper.