Index of papers in Proc. ACL 2013 that mention
  • binary classification
Yancheva, Maria and Rudzicz, Frank
Discussion and future work
While past research has used logistic regression as a binary classifier (Newman et al., 2003), our experiments show that the best-performing classifiers allow for highly nonlinear class boundaries; SVM and RF models achieve between 62.5% and 91.7% accuracy across age groups — a significant improvement over the baselines of LR and NB, as well as over previous results.
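As a rough illustration of the kind of comparison this excerpt describes, the sketch below pits LR and NB baselines against nonlinear SVM and RF classifiers in scikit-learn. The synthetic data is a placeholder for the paper's actual features and does not reproduce its reported numbers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Placeholder data standing in for the paper's features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),    # linear baseline
    "NB": GaussianNB(),                         # baseline
    "SVM": SVC(kernel="rbf"),                   # nonlinear class boundary
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```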
Related Work
Further, the use of binary classification schemes in previous work does not account for partial truths often encountered in real-life scenarios.
Results
The SVM is a parametric binary classifier that provides highly nonlinear decision boundaries given particular kernels.
Results
5.1 Binary classification across all data
Results
5.2 Binary classification by age group
binary classification is mentioned in 9 sentences in this paper.
Zhu, Jun and Zheng, Xun and Zhang, Bo
Experiments
4.1 Binary classification
Experiments
3 shows the performance of gSLDA+ with different burn-in steps for binary classification.
Logistic Supervised Topic Models
We consider binary classification with a training set D = {(w_d, y_d)}_{d=1}^D, where the response variable Y takes values from the output space Y = {0, 1}.
Logistic Supervised Topic Models
loss (Rosasco et al., 2004) in the task of fully observed binary classification.
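For reference, the loss in question (presumably the logistic loss, given the section title and the Rosasco et al. citation) can be written out directly over the training set D = {(w_d, y_d)}; the feature matrix X below is an assumed stand-in for whatever document representation the model actually uses.

```python
import numpy as np

def logistic_loss(eta, X, y):
    """Mean logistic loss over a binary training set, with labels
    y in {0, 1} mapped to t in {-1, +1}."""
    t = 2 * np.asarray(y) - 1            # {0,1} -> {-1,+1}
    margins = t * (X @ eta)              # per-document discriminant values
    return np.mean(np.log1p(np.exp(-margins)))

# Toy check with random stand-in features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = rng.integers(0, 2, size=100)
print(logistic_loss(np.zeros(10), X, y))   # = log(2) at eta = 0
```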
binary classification is mentioned in 8 sentences in this paper.
Aker, Ahmet and Paramita, Monica and Gaizauskas, Rob
Abstract
For classification we use an SVM binary classifier and training data taken from the EUROVOC thesaurus.
Conclusion
In this paper we presented an approach to align terms identified by a monolingual term extractor in bilingual comparable corpora using a binary classifier.
Feature extraction
To align or map source and target terms we use an SVM binary classifier (Joachims, 2002) with a linear kernel and the tradeoff parameter between training error and margin set to c = 10.
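A roughly equivalent setup in scikit-learn (the paper itself uses SVMlight); the random features below are placeholders for the real term-pair features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))      # placeholder term-pair features
y_train = rng.integers(0, 2, size=100)   # 1 = align the pair, 0 = do not

clf = SVC(kernel="linear", C=10)         # C: training-error/margin tradeoff, as in the excerpt
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```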
Method
We then treat term alignment as a binary classification task, i.e.
Method
For classification purposes we use an SVM binary classifier.
Related Work
However, it naturally lends itself to being viewed as a classification task, assuming a symmetric approach, since the different information sources mentioned above can be treated as features and each source-target language potential term pairing can be treated as an instance to be fed to a binary classifier which decides whether to align them or not.
binary classification is mentioned in 6 sentences in this paper.
Xie, Boyi and Passonneau, Rebecca J. and Wu, Leon and Creamer, Germán G.
Abstract
Our experiments test multiple text representations on two binary classification tasks, change of price and polarity.
Experiments
Both tasks are treated as binary classification problems.
Experiments
suggested as one of the best methods to summarize into a single value the confusion matrix of a binary classification task (Jurman and Furlanello, 2010; Baldi et al., 2000).
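Both citations concern the Matthews correlation coefficient, so the single-value summary is presumably MCC; its standard formula over the binary confusion matrix is easy to state (the counts below are invented).

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient of a binary confusion matrix."""
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

print(mcc(tp=40, fp=10, fn=5, tn=45))   # invented counts
```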
Introduction
Our experiments test several document representations for two binary classification tasks, change of price and polarity.
Related Work
Our two binary classification tasks for news, price change and polarity, are analogous to their activity and direction.
binary classification is mentioned in 5 sentences in this paper.
Hasegawa, Takayuki and Kaji, Nobuhiro and Yoshinaga, Naoki and Toyoda, Masashi
Experiments
Table 6 lists the number of utterance-response pairs used to train eight binary classifiers for individual emotional categories, which form a one-versus-the-rest classifier for the prediction task.
Predicting Addressee’s Emotion
Although a response could elicit multiple emotions in the addressee, in this paper we focus on predicting the most salient emotion elicited in the addressee and cast the prediction as a single-label multi-class classification problem. We then construct a one-versus-the-rest classifier by combining eight binary classifiers, each of which predicts whether the response elicits each emotional category.
Predicting Addressee’s Emotion
We use the online passive-aggressive algorithm to train the eight binary classifiers.
Predicting Addressee’s Emotion
Since the rule-based approach annotates utterances with emotions only when they contain emotional expressions, we independently train for each emotional category a binary classifier that estimates the addresser’s emotion from her/his utterance and apply it to the unlabeled utterances.
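A minimal sketch of this setup with scikit-learn, assuming eight integer-coded emotion labels and placeholder features; OneVsRestClassifier builds exactly one binary passive-aggressive classifier per category.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))      # placeholder utterance-response features
y = rng.integers(0, 8, size=200)    # eight integer-coded emotional categories

# One-vs-rest: one binary passive-aggressive classifier per emotion; the
# eight binary decisions are combined into a single multi-class prediction.
clf = OneVsRestClassifier(PassiveAggressiveClassifier(max_iter=1000, random_state=0))
clf.fit(X, y)
print(clf.predict(X[:3]))
```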
binary classification is mentioned in 4 sentences in this paper.
Persing, Isaac and Ng, Vincent
Error Classification
To solve this problem, we train five binary classifiers, one for each error type, using a one-versus-all scheme.
Error Classification
So in the binary classification problem for identifying error e_i, we create one training instance from each essay in the training set, labeling the instance as positive if the essay has e_i as one of its labels, and negative otherwise.
Error Classification
After creating training instances for error e_i, we train a binary classifier, b_i, for identifying which test essays contain error e_i.
Evaluation
Let tp_i be the number of test essays correctly labeled as positive by error e_i's binary classifier b_i; p_i be the total number of test essays labeled as positive by b_i; and g_i be the total number of test essays that belong to e_i according to the gold standard.
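These counts presumably feed the usual per-error precision (tp_i / p_i) and recall (tp_i / g_i); a small helper with invented counts makes the computation explicit.

```python
def per_error_prf(tp_i, p_i, g_i):
    """Precision, recall, and F-score for one error type e_i, from the
    counts defined above: tp_i true positives, p_i predicted positives,
    g_i gold-standard positives."""
    precision = tp_i / p_i if p_i else 0.0
    recall = tp_i / g_i if g_i else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

print(per_error_prf(tp_i=30, p_i=40, g_i=50))   # invented counts: P=0.75, R=0.6
```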
binary classification is mentioned in 4 sentences in this paper.
Lan, Man and Xu, Yu and Niu, Zhengyu
Experiments and Results
Although previous work has been done on PDTB (Pitler et al., 2009; Lin et al., 2009), we cannot make a direct comparison with them because of various experimental conditions, such as different classification strategies (multi-class classification, multiple binary classification), different data preparation (feature extraction and selection), different benchmark data collections (different sections for training and test, different levels of discourse relations), and different classifiers with various parameters (MaxEnt, Naïve Bayes, SVM, etc.).
Implementation Details of Multitask Learning Method
Specifically, we adopt multiple binary classification to build the model for the main task.
Implementation Details of Multitask Learning Method
That is, for each discourse relation, we build a binary classifier.
binary classification is mentioned in 3 sentences in this paper.
Pilehvar, Mohammad Taher and Jurgens, David and Navigli, Roberto
Experiment 3: Sense Similarity
(2007) considered sense grouping as a binary classification task whereby for each word every possible pairing of senses has to be classified
Experiment 3: Sense Similarity
We constructed a simple threshold-based classifier to perform the same binary classification.
Experiment 3: Sense Similarity
For a binary classification task, we can directly calculate precision, recall and F-score by constructing a contingency table.
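A toy version of such a threshold-based classifier and its contingency-table scores; the similarity scores and gold labels below are invented, not the paper's data.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

scores = np.array([0.91, 0.12, 0.55, 0.78, 0.30])  # invented sense-pair similarities
gold = np.array([1, 0, 0, 1, 0])                   # 1 = senses belong together

# Threshold-based binary classifier: a sense pair is positive iff its
# similarity score clears the threshold.
pred = (scores >= 0.5).astype(int)
p, r, f, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(p, r, f)
```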
binary classification is mentioned in 3 sentences in this paper.
Plank, Barbara and Moschitti, Alessandro
Experimental Setup
We treat relation extraction as a multi-class classification problem and use SVM-light-TK to train the binary classifiers.
Experimental Setup
To estimate the importance weights, we train a binary classifier that distinguishes between source and target domain instances.
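One common way to turn such a domain classifier into importance weights is the density-ratio trick, weighting each source instance by P(target | x) / P(source | x); the sketch below assumes that reading, which may differ from the paper's exact scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(0.0, 1.0, size=(150, 10))  # placeholder instance features
X_target = rng.normal(0.5, 1.0, size=(150, 10))

# Binary domain classifier: label 0 = source domain, 1 = target domain.
X = np.vstack([X_source, X_target])
d = np.concatenate([np.zeros(150), np.ones(150)])
domain_clf = LogisticRegression(max_iter=1000).fit(X, d)

# Importance weight for each source instance: P(target | x) / P(source | x).
proba = domain_clf.predict_proba(X_source)
weights = proba[:, 1] / proba[:, 0]
print(weights[:5])
```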
Related Work
We will use a binary classifier trained on RE instance representations.
binary classification is mentioned in 3 sentences in this paper.
Yang, Bishan and Cardie, Claire
Introduction
We model entity identification as a sequence tagging problem and relation extraction as binary classification .
Model
We treat the relation extraction problem as a combination of two binary classification problems: opinion-arg classification, which decides whether a pair consisting of an opinion candidate o and an argument candidate a forms a relation; and opinion-implicit-arg classification, which decides whether an opinion candidate o is linked to an implicit argument, i.e.
Results
By using binary classifiers to predict relations, CRF+RE produces high precision on opinion and target extraction but also results in very low recall.
binary classification is mentioned in 3 sentences in this paper.