Index of papers in Proc. ACL 2012 that mention
  • SVM
Meng, Xinfan and Wei, Furu and Liu, Xiaohua and Zhou, Ming and Xu, Ge and Wang, Houfeng
Experiment
MT-SVM: We translate the English labeled data to Chinese using Google Translate and use the translation results to train an SVM classifier for Chinese.
Experiment
SVM: We train an SVM classifier on the Chinese labeled data.
Experiment
First, two monolingual SVM classifiers are trained on English labeled data and Chinese data translated from English labeled data.
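These excerpts all hinge on the same step: training a linear SVM on labeled examples. As a minimal, toolkit-free illustration (the toy data and every parameter below are invented, not from the paper), a linear SVM can be fit with Pegasos-style stochastic subgradient descent on the hinge loss:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent on the
    L2-regularized hinge loss. Labels must be +1 / -1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            s = sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]   # L2 shrinkage every step
            if y[i] * s < 1:                         # hinge active: pull toward example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy stand-in for "labeled data": two separable clusters (no bias term
# needed because they are symmetric about the origin).
X = [[1.0, 2.0], [2.0, 1.5], [1.5, 1.8],
     [-1.0, -2.0], [-2.0, -1.0], [-1.5, -1.8]]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
print([predict(w, x) for x in X])  # matches y on this separable toy set
```

In practice such classifiers are trained with off-the-shelf toolkits (e.g. LIBSVM or LIBLINEAR) rather than hand-rolled loops; the sketch only shows the objective being optimized.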
Introduction
The experimental results show that CLMM achieves 71% accuracy when no Chinese labeled data are used, which significantly improves Chinese sentiment classification and is superior to the SVM and co-training based methods.
Introduction
When Chinese labeled data are employed, CLMM achieves 83% accuracy, which is markedly better than the SVM and achieves state-of-the-art performance.
Related Work
(2002) compare the performance of three commonly used machine learning models (Naive Bayes, Maximum Entropy and SVM).
Related Work
Gamon (2004) shows that introducing deeper linguistic features into SVM can help to improve the performance.
Related Work
English labeled data are first translated to Chinese, and then two SVM classifiers are trained on the English and Chinese labeled data respectively.
SVM is mentioned in 19 sentences in this paper.
Wang, William Yang and Mayfield, Elijah and Naidu, Suresh and Dittmar, Jeremiah
Prediction Experiments
In the first experiment, we compare the prediction accuracy of our SME model to a widely used discriminative learner in NLP, the linear-kernel support vector machine (SVM).
Prediction Experiments
In the second experiment, in addition to the linear-kernel SVM, we also compare our SME model to a state-of-the-art sparse generative model of text (Eisenstein et al., 2011a), and vary the size of the input vocabulary W exponentially from 2^9 to the full size of our training vocabulary.
Prediction Experiments
We use threefold cross-validation to infer the learning rate δ and cost C hyperpriors in the SME and SVM models respectively.
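The tuning protocol described here, k-fold cross-validation over a hyperparameter grid, can be sketched as follows. This is a stand-in, not the paper's code: the toy data, the candidate values, and the tiny subgradient SVM trainer are all invented, and the regularization strength `lam` plays the inverse role of the SVM cost C (smaller `lam` roughly corresponds to larger C):

```python
import random

def train(X, y, lam, epochs=150, seed=0):
    # Minimal linear SVM: stochastic subgradient descent on the hinge loss.
    rng = random.Random(seed)
    w = [0.0] * len(X[0]); t = 0
    for _ in range(epochs):
        order = list(range(len(X))); rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)
            s = sum(a * b for a, b in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]
            if y[i] * s < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def accuracy(w, X, y):
    return sum((sum(a * b for a, b in zip(w, x)) >= 0) == (yi == 1)
               for x, yi in zip(X, y)) / len(y)

def cross_validate(X, y, lams, k=3):
    """Pick the regularization strength with the best mean held-out
    accuracy over k strided folds."""
    folds = [list(range(i, len(X), k)) for i in range(k)]
    best = None
    for lam in lams:
        scores = []
        for held in folds:
            tr = [i for i in range(len(X)) if i not in held]
            w = train([X[i] for i in tr], [y[i] for i in tr], lam)
            scores.append(accuracy(w, [X[i] for i in held],
                                      [y[i] for i in held]))
        mean = sum(scores) / k
        if best is None or mean > best[0]:
            best = (mean, lam)
    return best[1]

# Invented toy data: two separable clusters, interleaved so every fold
# contains both classes.
X = [[1.0, 2.0], [2.0, 1.5], [1.5, 1.8], [1.2, 2.2], [2.2, 1.2], [1.8, 1.4],
     [-1.0, -2.0], [-2.0, -1.0], [-1.5, -1.8], [-1.2, -2.2], [-2.2, -1.2],
     [-1.8, -1.4]]
y = [1] * 6 + [-1] * 6
best_lam = cross_validate(X, y, lams=[0.001, 0.01, 0.1])
print(best_lam)
```

On real data the folds should be shuffled or stratified, and the chosen value is then used to retrain on the full training set.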
Related Work
Traditional discriminative methods, such as support vector machine (SVM) and logistic regression, have been very popular in various text categorization tasks (Joachims, 1998; Wang and McKeown, 2010) in the past decades.
Related Work
For example, SVM does not have latent variables to model the subtle differences and interactions of features from different domains (e.g.
SVM is mentioned in 21 sentences in this paper.
Branavan, S.R.K. and Kushman, Nate and Lei, Tao and Barzilay, Regina
Experimental Setup
Baselines To evaluate the performance of our relation extraction, we compare against an SVM classifier trained on the Gold Relations.
Experimental Setup
We test the SVM baseline in a leave-one-out fashion.
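Leave-one-out testing simply holds out each example in turn, trains on the remainder, and scores the prediction on the held-out example. A sketch of the protocol (the nearest-centroid classifier and toy data below are invented stand-ins; the paper's baseline is an SVM):

```python
def nearest_centroid_fit(X, y):
    """Stand-in classifier (the paper uses an SVM): one centroid per class."""
    cents = {}
    for c in set(y):
        pts = [x for x, label in zip(X, y) if label == c]
        cents[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def nearest_centroid_predict(cents, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: dist2(cents[c], x))

def leave_one_out_accuracy(X, y):
    """Hold out each example in turn, train on the rest, test on it."""
    hits = 0
    for i in range(len(X)):
        Xtr = X[:i] + X[i + 1:]
        ytr = y[:i] + y[i + 1:]
        model = nearest_centroid_fit(Xtr, ytr)
        hits += nearest_centroid_predict(model, X[i]) == y[i]
    return hits / len(X)

# Invented toy data: two well-separated clusters.
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
     [5.0, 5.1], [5.2, 5.0], [5.1, 4.9]]
y = ["neg", "neg", "neg", "pos", "pos", "pos"]
print(leave_one_out_accuracy(X, y))  # 1.0 on this well-separated toy set
```

Leave-one-out is the extreme case of k-fold cross-validation (k equal to the dataset size); it is common when, as here, the labeled set is small.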
Experimental Setup
[Figure: F-score comparison of the model, the SVM baseline, and the All-text baseline]
Introduction
Our results demonstrate the strength of our relation extraction technique — while using planning feedback as its only source of supervision, it achieves a precondition relation extraction accuracy on par with that of a supervised SVM baseline.
Results
We also show the performance of the supervised SVM baseline.
Results
Feature Analysis Figure 7 shows the top five positive features for our model and the SVM baseline.
Results
Figure 7: The top five positive features on words and dependency types learned by our model (above) and by SVM (below) for precondition prediction.
SVM is mentioned in 8 sentences in this paper.
Chan, Wen and Zhou, Xiangdong and Wang, Wei and Chua, Tat-Seng
Experimental Results
We adapt the Support Vector Machine (SVM) and Logistic Regression (LR), which have been reported to be effective for classification, and the Linear CRF (LCRF), which is used to summarize ordinary text documents in (Shen et al., 2007), as baselines for comparison.
Experimental Results
Table 2 shows that our general CRF model based on question segmentation with group L1 regularization outperforms the baselines significantly in all three measures (gCRF-QS-l1 is 13.99% better than SVM in precision, 9.77% better in recall and 11.72% better in F1 score).
Experimental Results
We note that both SVM and LR,
Introduction
The experimental results show that the proposed model improves the performance significantly (in terms of precision, recall and F1 measures) as well as in the ROUGE-1, ROUGE-2 and ROUGE-L measures, compared to state-of-the-art methods such as Support Vector Machines (SVM), Logistic Regression (LR) and Linear CRF (LCRF) (Shen et al., 2007).
SVM is mentioned in 6 sentences in this paper.
Croce, Danilo and Moschitti, Alessandro and Basili, Roberto and Palmer, Martha
Experiments
This uses a set of binary SVM classifiers, one for each verb class (frame).
Experiments
In the classification phase, the binary classifiers are applied by (i) considering only classes that are compatible with the target verbs; and (ii) selecting the class associated with the maximum positive SVM margin.
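The scheme in this excerpt, one binary classifier per class with prediction by maximum margin over the compatible classes, is standard one-vs-rest classification. A pure-Python sketch on invented toy data (the subgradient trainer below is a stand-in for the authors' actual SVM toolkit):

```python
import random

def train_binary(X, y, lam=0.05, epochs=1000, seed=0):
    # Linear SVM via stochastic subgradient descent on the hinge loss;
    # y must be +1 / -1.
    rng = random.Random(seed)
    w = [0.0] * len(X[0]); t = 0
    for _ in range(epochs):
        order = list(range(len(X))); rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)
            s = sum(a * b for a, b in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]
            if y[i] * s < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def margin(w, x):
    return sum(a * b for a, b in zip(w, x))

def train_one_vs_rest(X, labels):
    """One binary classifier per class: its members are +1, all others -1."""
    return {c: train_binary(X, [1 if l == c else -1 for l in labels])
            for c in sorted(set(labels))}

def classify(models, x, allowed=None):
    """Among the compatible classes, pick the one with the largest margin."""
    cands = list(models) if allowed is None else allowed
    return max(cands, key=lambda c: margin(models[c], x))

# Invented toy 3-class data: clusters in three directions from the origin.
X = [[3.0, 3.0], [3.5, 2.8], [2.8, 3.4],
     [-3.0, 3.0], [-3.4, 2.7], [-2.8, 3.3],
     [0.0, -3.0], [0.3, -3.4], [-0.2, -2.9]]
labels = ["a"] * 3 + ["b"] * 3 + ["c"] * 3
models = train_one_vs_rest(X, labels)
print([classify(models, x) for x in X])
```

The `allowed` argument mirrors step (i) of the excerpt: restricting the argmax to classes compatible with the target verb before taking the maximum margin.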
Model Analysis and Discussion
In line with the method discussed in (Pighin and Moschitti, 2009b), these fragments are extracted as they appear in most of the support vectors selected during SVM training.
SVM is mentioned in 3 sentences in this paper.