Index of papers in Proc. ACL 2012 that mention
  • feature set
Zhao, Qiuye and Marcus, Mitch
Abstract
By adopting this ILP formulation, segmentation F-measure is increased from 0.968 to 0.974, as compared to Viterbi decoding with the same feature set.
Abstract
We adopt the basic feature set used in (Ratnaparkhi, 1996) and (Collins, 2002).
Abstract
As introduced in Section 2.2, we adopt a very compact feature set used in (Ratnaparkhi, 1996).
feature set is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
Lippincott, Thomas and Korhonen, Anna and Ó Séaghdha, Diarmuid
Methodology
In this section we describe the basic components of our study: feature sets, graphical model, inference, and evaluation.
Methodology
3.1 Input and feature sets
Methodology
We tested several feature sets either based on, or approximating, the concept of grammatical relation described in section 2.
Results
We evaluated SCF lexicons based on the eight feature sets described in section 3.1, as well as the VALEX SCF lexicon described in section 2.
feature set is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Simianer, Patrick and Riezler, Stefan and Dyer, Chris
Abstract
With a few exceptions, discriminative training in statistical machine translation (SMT) has been content with tuning weights for large feature sets on small development data.
Discussion
In future work, we would like to investigate more sophisticated features, better learners, and in general improve the components of our system that have been neglected in the current investigation of relative improvements by scaling the size of data and feature sets.
Experiments
The results on the news-commentary (nc) data show that training on the development set does not benefit from adding large feature sets — BLEU result differences between tuning 12 default features
Experiments
Here tuning large feature sets on the respective dev sets yields significant improvements of around 2 BLEU points over tuning the 12 default features on the dev sets.
Introduction
Our resulting models are learned on large data sets, but they are small and outperform models that tune feature sets of various sizes on small development sets.
Related Work
All approaches have been shown to scale to large feature sets and all include some kind of regularization method.
feature set is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Liu, Xiaohua and Zhou, Ming and Zhou, Xiangyang and Fu, Zhongyang and Wei, Furu
Experiments
Table 4: Overall F1 (%) of NER and Accuracy (%) of NEN with different feature sets.
Experiments
Table 4 shows the overall performance of our method with various feature set combinations, where Fo, Fl, and Fg denote the orthographic features, the lexical features, and the gazetteer-related features, respectively.
Our Method
(2) {$21521 and {$225521 are two feature sets.
Our Method
4.3.1 Feature Set One: {$21) 5:11
Our Method
4.3.2 Feature Set Two: {$22) 5:21
feature set is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Falk, Ingrid and Gardent, Claire and Lamirel, Jean-Charles
Clustering Methods, Evaluation Metrics and Experimental Setup
Table 1: Sample output for a cluster produced with the grid-scf-sem feature set and the IGNGF clustering method.
Features and Data
Table 4(a) includes the evaluation results for all the feature sets when using IGNGF clustering.
Features and Data
In terms of features, the best results are obtained using the grid-scf-sem feature set with an F-measure of 0.70.
Features and Data
In contrast, the classification obtained using the scf-synt-sem feature set has a higher CMP for the clustering with optimal mPUR (0.57); but a lower F-measure (0.61), a larger number of classes (16)
feature set is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Hatori, Jun and Matsuzaki, Takuya and Miyao, Yusuke and Tsujii, Jun'ichi
Introduction
Second, although the feature set is fundamentally a combination of those used in previous works (Zhang and Clark, 2010; Huang and Sagae, 2010), to integrate them in a single incremental framework is not straightforward.
Model
The feature set of our model is fundamentally a combination of the features used in the state-of-the-art joint segmentation and POS tagging model (Zhang and Clark, 2010) and dependency parser (Huang and Sagae, 2010), both of which are used as baseline models in our experiment.
Model
All of the models described above except Dep’ are based on the same feature sets for segmentation and
Related Works
Zhang and Clark (2008) proposed an incremental joint segmentation and POS tagging model, with an effective feature set for Chinese.
feature set is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Chen, Wenliang and Zhang, Min and Li, Haizhou
Decoding
With the rich feature set in Table 1, the running time of Intersect is longer than the time of Rescoring.
Experiments
Table 2 shows the feature settings of the systems, where MST1/2 refers to the basic first-/second-order parser and MSTB1/2 refers to the enhanced first-/second-order parser.
Experiments
MSTB1 and MSTB2 used the same feature setting, but used different order models.
feature set is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Feng, Vanessa Wei and Hirst, Graeme
Experiments
us to compare the model-fitting capacity of different feature sets from another perspective, especially when the training data is not sufficiently well fitted by the model.
Method
We refine Hernault et al.’s original feature set by incorporating our own features as well as some adapted from Lin et al.
Method
(2009) also incorporated contextual features in their feature set.
feature set is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Mehdad, Yashar and Negri, Matteo and Federico, Marcello
Beyond lexical CLTE
builds on two additional feature sets, derived from i) semantic phrase tables, and ii) dependency relations.
Experiments and results
(a) In both settings all the feature sets used outperform the approaches taken as terms of comparison.
Experiments and results
As shown in Table 1, the combined feature set (PT+SPT+DR) significantly outperforms the lexical model (64.5% vs 62.6%), while SPT and DR features separately added to PT (PT+SPT, and PT+DR) lead to marginal improvements over the results achieved by the PT model alone (about 1%).
feature set is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tang, Hao and Keshet, Joseph and Livescu, Karen
Experiments
Models labeled X/Y use learning algorithm X and feature set Y.
Experiments
The feature set DP+ contains TF-IDF, DP alignment, dictionary, and length features.
Experiments
The results on the test fold are shown in Figure 1, which compares the learning algorithms, and Figure 2, which compares feature sets.
feature set is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: