Index of papers in Proc. ACL 2011 that mention
  • binary classifier
Bollegala, Danushka and Weir, David and Carroll, John
Abstract
The created thesaurus is then used to expand feature vectors to train a binary classifier.
Experiments
Because this is a binary classification task (i.e.
Experiments
We simply train a binary classifier using unigrams and bigrams as features from the labeled reviews in the source domains and apply the trained classifier on the target domain.
Experiments
After selecting salient features, the SCL algorithm is used to train a binary classifier.
Feature Expansion
Using the extended vectors d’ to represent reviews, we train a binary classifier from the source domain labeled reviews to predict positive and negative sentiment in reviews.
Introduction
Following previous work, we define cross-domain sentiment classification as the problem of learning a binary classifier (i.e.
Introduction
thesaurus to expand feature vectors in a binary classifier at train and test times by introducing related lexical elements from the thesaurus.
Introduction
(However, the method is agnostic to the properties of the classifier and can be used to expand feature vectors for any binary classifier).
binary classifier is mentioned in 10 sentences in this paper.
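A minimal sketch of the kind of pipeline these excerpts describe: expand unigram/bigram feature vectors with thesaurus-related terms, train a binary sentiment classifier on source-domain reviews, and apply it to a target-domain review. The toy thesaurus, the expand() helper, and the scikit-learn learner are illustrative assumptions, not the paper's distributional thesaurus or classifier.

    # Sketch only: thesaurus-based feature expansion for a binary sentiment
    # classifier. The thesaurus and learner below are assumed placeholders.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical thesaurus mapping a lexical element to related elements.
    thesaurus = {"excellent": ["superb", "great"], "awful": ["terrible", "bad"]}

    def expand(review):
        # Append related lexical elements so source and target reviews share
        # features (the feature-expansion step in the excerpts above).
        tokens = review.lower().split()
        related = [r for t in tokens for r in thesaurus.get(t, [])]
        return " ".join(tokens + related)

    source_reviews = ["an excellent read", "an awful product"]  # toy source-domain data
    source_labels = [1, 0]                                      # 1 = positive, 0 = negative

    vectorizer = CountVectorizer(ngram_range=(1, 2))            # unigrams and bigrams
    X_train = vectorizer.fit_transform([expand(r) for r in source_reviews])
    clf = LogisticRegression().fit(X_train, source_labels)

    target_review = "a superb camera"                           # unseen target-domain review
    print(clf.predict(vectorizer.transform([expand(target_review)])))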
Rosenthal, Sara and McKeown, Kathleen
Experiments and Results
(2006) experiment), 2. binary classification with the split at each birth year from 1975-1988 and 3.
Experiments and Results
In contrast to Schler et al.’s experiment, our division does not introduce a gap between age groups, we do binary classification, and we use significantly less data.
Experiments and Results
• Perform binary classification between blogs BEFORE X and IN/AFTER X
Introduction
Therefore, we experimented with binary classification into age groups using all birth dates from 1975 through 1988, thus including students from generation Y who were in college during the emergence of social media technologies.
Introduction
We find five years where binary classification is significantly more accurate than other years: 1977, 1979, and 1982-1984.
Related Work
We also use a supervised machine learning approach, but classification by gender is naturally a binary classification task, while our work requires determining a natural dividing point.
binary classifier is mentioned in 6 sentences in this paper.
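A small sketch of the split-point sweep these excerpts describe: for each candidate birth year X from 1975 through 1988, relabel blogs as BEFORE X versus IN/AFTER X and measure binary classification accuracy. The bag-of-n-grams features, logistic-regression learner, and cross-validation are stand-in assumptions, not the paper's setup.

    # Sketch only: sweep candidate birth-year split points and score a binary
    # classifier for each. Features, learner, and evaluation are assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def split_accuracy(texts, birth_years, x):
        """Accuracy of a binary classifier separating BEFORE x from IN/AFTER x."""
        y = [0 if by < x else 1 for by in birth_years]          # BEFORE x -> 0, IN/AFTER x -> 1
        X = CountVectorizer(ngram_range=(1, 2)).fit_transform(texts)
        return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

    # texts, birth_years = hypothetical blog posts and author birth years
    # for x in range(1975, 1989):                               # candidate split years
    #     print(x, split_accuracy(texts, birth_years, x))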
Sun, Ang and Grishman, Ralph and Sekine, Satoshi
Background
Then the thresholded output of this binary classifier is used as training data for learning a multi-class classifier for the 7 relation types (Bunescu and Mooney, 2005b).
Cluster Feature Selection
When the binary classifier’s prediction probability is greater than 0.5, we take the prediction with the highest probability of the multi-class classifier as the final class label.
Feature Based Relation Extraction
Specifically, we first train a binary classifier to distinguish between relation instances and non-relation instances.
Feature Based Relation Extraction
Then rather than using the thresholded output of this binary classifier as training data, we use only the annotated relation instances to train a multi-class classifier for the 7 relation types.
Feature Based Relation Extraction
given a test instance x, we first apply the binary classifier to it for relation detection; if it is detected as a relation instance, we then apply the multi-class relation classifier to classify it.
binary classifier is mentioned in 5 sentences in this paper.
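A minimal sketch of the two-stage decoding rule in these excerpts: a binary relation-detection classifier gates a multi-class relation-type classifier, using the 0.5 probability threshold quoted above. The scikit-learn-style predict_proba interface and the argument names are assumptions, not the paper's implementation.

    # Sketch only: apply the binary detector first, then the multi-class
    # relation-type classifier. Classifier objects and features are assumed
    # placeholders with a scikit-learn-style predict_proba interface.
    import numpy as np

    def classify_relation(x_features, detector, type_classifier, type_labels):
        """Return a relation type for instance x, or None if no relation is detected."""
        p_relation = detector.predict_proba([x_features])[0][1]   # P(relation | x)
        if p_relation <= 0.5:
            return None                                           # not a relation instance
        probs = type_classifier.predict_proba([x_features])[0]
        return type_labels[int(np.argmax(probs))]                 # most probable of the 7 types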
Bodenstab, Nathan and Dunlop, Aaron and Hall, Keith and Roark, Brian
Beam-Width Prediction
Given a global maximum beam-width b, we train b different binary classifiers, each using separate mapping functions Φ_0, ..., Φ_{b−1}, where the target value y produced by Φ_k is 1 if R_{i,j} > k and 0 otherwise.
Beam-Width Prediction
During decoding, we assign the beam-width for the chart cell spanning w_{i+1}...w_j, given models θ_0, θ_1, ..., θ_{b−1}, by finding the lowest value k such that the binary classifier θ_k classifies R_{i,j} ≤ k. If no such k exists, R̂_{i,j} is set to the maximum beam-width value b:
Introduction
More generally, instead of a binary classification decision, we can also use this method to predict the desired cell population directly and get cell closure for free when the classifier predicts a beam-width of zero.
Open/Closed Cell Classification
We first look at the binary classification of chart cells as either open or closed to full constituents, and predict this value from the input sentence alone.
binary classifier is mentioned in 4 sentences in this paper.
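A short sketch of the beam-width assignment rule quoted above: scan the b binary classifiers in order and return the lowest k whose classifier predicts the cell's rank is at most k, falling back to the maximum beam-width b. The callable-classifier interface and feature argument are assumptions, not the paper's implementation.

    # Sketch only: beam-width decoding over b binary classifiers. Each
    # classifiers[k] is assumed to be a callable returning True iff it
    # predicts R_{i,j} <= k for the given chart cell.
    def predict_beam_width(cell_features, classifiers, b):
        for k, clf_k in enumerate(classifiers):    # k = 0, 1, ..., b-1
            if clf_k(cell_features):               # classifier k fires: rank <= k
                return k                           # k = 0 closes the cell entirely
        return b                                   # no classifier fired: use maximum b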