Index of papers in Proc. ACL 2008 that mention
  • SVM
Bartlett, Susan and Kondrak, Grzegorz and Cherry, Colin
Introduction
We formulate syllabification as a tagging problem, and learn a discriminative tagger from labeled data using a structured support vector machine (SVM) (Tsochantaridis et al., 2004).
Structured SVMs
A structured support vector machine (SVM) is a large-margin training method that can learn to predict structured outputs, such as tag sequences or parse trees, instead of performing binary classification (Tsochantaridis et al., 2004).
Structured SVMs
We employ a structured SVM that predicts tag sequences, called an SVM Hidden Markov Model, or SVM-HMM.
Structured SVMs
This approach can be considered an SVM because the model parameters are trained discriminatively to separate correct tag sequences from incorrect ones by as large a margin as possible.
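For reference, the large-margin training problem these snippets refer to is the margin-rescaling formulation of Tsochantaridis et al. (2004), with joint feature map Φ, structured loss Δ, and slack variables ξ:

    \min_{\mathbf{w},\,\boldsymbol{\xi}\ge 0}\ \frac{1}{2}\lVert\mathbf{w}\rVert^2 + \frac{C}{n}\sum_{i=1}^{n}\xi_i
    \quad\text{s.t.}\quad \mathbf{w}^\top\Phi(x_i,y_i) - \mathbf{w}^\top\Phi(x_i,y)\ \ge\ \Delta(y_i,y) - \xi_i
    \qquad \forall i,\ \forall y \ne y_i .

For the SVM-HMM, y ranges over tag sequences and Φ decomposes into emission and transition features, so the most violated constraint can be found with Viterbi decoding.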
Syllabification with Structured SVMs
The SVM framework is less restrictive: we can include o as an emission feature, but we can also include features indicating that the preceding and following letters are m and r, respectively.
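As a concrete illustration of such context features (a minimal sketch with hypothetical feature names, not the paper's actual feature templates):

    # Sketch: context-window emission features for one letter position,
    # in the spirit of the m/r example above. Feature names are
    # illustrative only.
    def emission_features(word, i):
        feats = {f"letter={word[i]}": 1.0}
        if i > 0:
            feats[f"prev={word[i-1]}"] = 1.0
        if i + 1 < len(word):
            feats[f"next={word[i+1]}"] = 1.0
        return feats

    # For "more" at position 1 ('o'), the neighbors are m and r:
    print(emission_features("more", 1))
    # {'letter=o': 1.0, 'prev=m': 1.0, 'next=r': 1.0}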
SVM is mentioned in 16 sentences in this paper.
Andreevskaia, Alina and Bergler, Sabine
Domain Adaptation in Sentiment Research
They applied an out-of-domain-trained SVM classifier to label examples from the target domain and then retrained the classifier using these new examples.
Domain Adaptation in Sentiment Research
Depending on the similarity between the domains, this method brought a gain of up to 15% over the baseline SVM.
Experiments
                          Movie   News    Blogs   PRs
    Dataset size          1066    800     800     1200
    unigrams  SVM         68.5    61.5    63.85   76.9
              NB          60.2    59.5    60.5    74.25
              nb features 5410    4544    3615    2832
    bigrams   SVM         59.9    63.2    61.5    75.9
              NB          57.0    58.4    59.5    67.8
              nb features 16286   14633   15182   12951
    trigrams  SVM         54.3    55.4    52.7    64.4
              NB          53.3    57.0    56.0    69.7
              nb features 20837   18738   19847   19132
Experiments
Table 4: Accuracy of SVM with unigram model
Experiments
The statistical significance of the results depends on the genre and the size of the n-gram: on product reviews, all results are statistically significant at the α = 0.025 level; on movie reviews, the difference between Naïve Bayes and SVM is statistically significant at α = 0.01, but the significance diminishes as the size of the n-gram increases; on news, only bigrams produce a statistically significant (α = 0.01) difference between the two machine learning methods, while on blogs the difference between SVMs and Naïve Bayes is most pronounced when unigrams are used (α = 0.025).
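The comparison behind these numbers can be sketched as follows (a scikit-learn stand-in with placeholder data, not the authors' implementation or corpora):

    # Sketch: linear SVM vs. Naive Bayes over word n-grams of a fixed
    # size, mirroring the unigram/bigram/trigram comparison above.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = ["a truly great product", "a terrible boring movie",
             "i loved this film", "what an awful plot"]   # placeholders
    labels = [1, 0, 1, 0]                                 # placeholders

    for n in (1, 2, 3):  # unigrams, bigrams, trigrams
        for clf in (LinearSVC(), MultinomialNB()):
            model = make_pipeline(CountVectorizer(ngram_range=(n, n)), clf)
            model.fit(texts, labels)
            print(n, type(clf).__name__, model.score(texts, labels))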
Factors Affecting System Performance
To our knowledge, the only work that describes the application of statistical classifiers (SVM) to sentence-level sentiment classification is (Gamon and Aue, 2005)1.
Integrating the Corpus-based and Dictionary-based Approaches
Then, using an SVM meta-classifier trained on a small number of target-domain examples to combine the nine base classifiers, they obtained a statistically significant improvement on out-of-domain texts from book reviews, knowledge-base feedback, and product support services survey data.
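A minimal sketch of that meta-classification idea, i.e. stacking base classifiers under an SVM (illustrative scikit-learn stand-ins and synthetic data, not Aue and Gamon's nine base classifiers):

    # Sketch: an SVM meta-classifier that combines the predictions of
    # several base classifiers, trained on a small labeled sample.
    import numpy as np
    from sklearn.ensemble import StackingClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier

    base = [("nb", MultinomialNB()), ("tree", DecisionTreeClassifier())]
    meta = StackingClassifier(estimators=base, final_estimator=LinearSVC())

    X = np.abs(np.random.randn(20, 5))   # placeholder target-domain features
    y = np.array([0, 1] * 10)            # placeholder labels
    meta.fit(X, y)                       # a few target-domain examples
    print(meta.predict(X[:3]))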
Lexicon-Based Approach
The baseline performance of the Lexicon-Based System (LBS) described above is presented in Table 5, along with the performance results of the in-domain- and out-of-domain-trained SVM classifier.
Lexicon-Based Approach
                     Movies  News   Blogs  PRs
    LBS              57.5    62.3   63.3   59.3
    SVM in-dom.      68.5    61.5   63.85  76.9
    SVM out-of-dom.
SVM is mentioned in 10 sentences in this paper.
Ding, Shilin and Cong, Gao and Lin, Chin-Yew and Zhu, Xiaoyan
Context and Answer Detection
SVM, can be employed, where each pair of question and candidate context will be treated as an instance.
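Schematically, that instance construction might look like this (hypothetical features; the paper's actual feature set is much richer):

    # Sketch: one binary classification instance per
    # (question, candidate context) pair.
    def make_instances(question, candidates, gold_ids):
        q_words = set(question.lower().split())
        instances = []
        for cid, sent in candidates:
            overlap = len(q_words & set(sent.lower().split()))
            feats = {"word_overlap": overlap,
                     "cand_length": len(sent.split())}
            instances.append((feats, 1 if cid in gold_ids else 0))
        return instances

    pairs = make_instances(
        "How do I reset my password?",
        [(0, "Click 'forgot password' on the login page."),
         (1, "Thanks, that worked!")],
        gold_ids={0})
    print(pairs)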
Experiments
    Model    Prec(%)   Rec(%)   F1(%)
    Context Detection
    SVM      75.27     68.80    71.32
    C4.5     70.16     64.30    67.21
    L-CRF    75.75     72.84    74.45
    Answer Detection
    SVM      73.31     47.35    57.52
    C4.5     65.36     46.55    54.37
    L-CRF    63.92     58.74    61.22
Experiments
This experiment evaluates the Linear CRF model (Section 3.1) for context and answer detection by comparing it with SVM and C4.5 (Quinlan, 1993).
Experiments
For SVM, we use SVMlight (Joachims, 1999).
Introduction
Experimental results show that 1) Linear CRFs outperform SVM and decision tree in both context and answer detection; 2) Skip-chain CRFs outperform Linear CRFs for answer finding, which demonstrates that context improves answer finding; 3) 2D CRF model improves the performance of Linear CRFs and the combination of 2D CRFs and Skip-chain CRFs achieves better performance for context detection.
Related Work
(2007) used SVM to extract input-reply pairs from forums for chatbot knowledge.
SVM is mentioned in 9 sentences in this paper.
Wang, Qin Iris and Schuurmans, Dale and Lin, Dekang
Conclusion and Future Work
Unlike previously proposed approaches, we introduce a convex objective for the semi-supervised learning algorithm by combining a convex structured SVM loss and a convex least-squares loss.
Introduction
In particular, they present an algorithm for multi-class unsupervised and semi-supervised SVM learning, which relaxes the original non-convex objective into a close convex approximation, thereby allowing a global solution to be obtained.
Introduction
More specifically, for the loss on the unlabeled data part, we substitute the original unsupervised structured SVM loss with a least squares loss, but keep constraints on the inferred prediction targets, which avoids trivialization.
Introduction
We apply the resulting semi-supervised convex objective to dependency parsing, and obtain a significant improvement over the corresponding supervised structured SVM.
Semi-supervised Convex Training for Structured SVM
Although semi-supervised structured SVM learning has been an active research area, semi-supervised structured SVMs have not been used in many real applications to date.
Semi-supervised Convex Training for Structured SVM
By combining the convex structured SVM loss on labeled data (shown in Equation (5)) and the convex least-squares loss on unlabeled data (shown in Equation (8)), we obtain a semi-supervised structured large-margin loss.
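Schematically, such a combined objective has the following shape (this is the general form, not the paper's exact Equations (5) and (8)), with L labeled and U unlabeled examples:

    \ell(\mathbf{w}) \;=\; \frac{\beta}{2}\lVert\mathbf{w}\rVert^2
    \;+\; \sum_{i=1}^{L}\Big(\max_{y}\big[\Delta(y_i,y)+\mathbf{w}^\top\Phi(x_i,y)\big]
          \;-\;\mathbf{w}^\top\Phi(x_i,y_i)\Big)
    \;+\; \lambda\sum_{j=1}^{U}\ \min_{\hat{y}_j\in\mathcal{Y}(x_j)}
          \big\lVert \Phi(x_j)\,\mathbf{w}-\hat{y}_j \big\rVert^2

Here Φ(x_j) stacks the candidate-part feature vectors of an unlabeled example, and the constraint ŷ_j ∈ 𝒴(x_j) on the inferred targets is what avoids the trivial solution mentioned above.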
Semi-supervised Structured Large Margin Objective
The objective of standard semi-supervised structured SVM is a combination of structured large margin losses on both labeled and unlabeled data.
SVM is mentioned in 7 sentences in this paper.
HaCohen-Kerner, Yaakov and Kass, Ariel and Peretz, Ariel
Abbreviation Disambiguation
: Maximum Entropy, SVM, and C5.0.
Abstract
An accuracy of 96.09% has been achieved by SVM .
Experiments
Several well-known supervised ML methods were selected: artificial neural networks (ANN), Naïve Bayes (NB), Support Vector Machines (SVM), and J48 (Witten and Frank, 1999), an improved variant of C4.5 decision tree induction.
Experiments
Table 2 shows that SVM achieved the best result with 96.09% accuracy.
Experiments
ML Method   ANN   NB   SVM   J48
SVM is mentioned in 6 sentences in this paper.
Mairesse, François and Walker, Marilyn
Parameter Estimation Models
Continuous parameters are modeled with a linear regression model (LR), an M5' model tree (M5), and a model based on support vector machines with a linear kernel (SVM).
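Illustratively, that continuous-parameter lineup could be reproduced with scikit-learn stand-ins (a sketch on synthetic data; the authors used different implementations, and the decision-tree regressor is only a rough proxy for an M5' model tree):

    # Sketch: three regressors for one continuous generation parameter,
    # mirroring the LR / M5 / linear-kernel-SVM lineup above.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.tree import DecisionTreeRegressor  # rough M5' proxy

    rng = np.random.default_rng(0)
    X = rng.random((50, 4))                    # placeholder features
    y = X @ np.array([0.5, -0.2, 0.1, 0.3])    # placeholder parameter

    for model in (LinearRegression(), DecisionTreeRegressor(),
                  SVR(kernel="linear")):
        model.fit(X, y)
        print(type(model).__name__, round(model.score(X, y), 3))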
Parameter Estimation Models
We test a Naive Bayes classifier (NB), a J48 decision tree (J48), a nearest-neighbor classifier using one neighbor (NN), a Java implementation of the RIPPER rule-based learner (JRIP), the AdaBoost boosting algorithm (ADA), and a support vector machine classifier with a linear kernel (SVM).
Parameter Estimation Models
Figure 3: SVM model with a linear kernel predicting the CONTENT POLARITY parameter.
SVM is mentioned in 6 sentences in this paper.