Abstract | Experiments are carried out on three CW identification techniques: simplifying everything, frequency thresholding, and training a support vector machine. |
Abstract | The support vector machine achieves a slight increase in precision over the other two methods, but at the cost of a dramatic drop in recall. |
Experimental Design | These were implemented as well as a support vector machine classifier. |
Experimental Design | 2.6 Support Vector Machine |
Experimental Design | Support vector machines (SVMs) are statistical classifiers that use labelled training data to predict the class of unseen inputs.
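As a concrete illustration of this idea, the following is a minimal sketch (not any of the cited papers' implementations) of a linear SVM trained by sub-gradient descent on the hinge loss, fit on labelled toy data and used to classify unseen points:

```python
# Minimal linear SVM sketch: hinge-loss sub-gradient training on labelled
# data, then prediction on unseen inputs. Toy data and hyperparameters
# are illustrative only.

def train_linear_svm(X, y, lam=0.01, eta=0.1, epochs=100):
    """Learn weights w and bias b from labelled data (labels in {-1, +1})."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point violates the margin: hinge loss is active
                w = [wj + eta * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += eta * yi
            else:           # only the regularizer contributes
                w = [wj * (1 - eta * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """Classify an unseen input by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Two linearly separable clusters of labelled training data.
X = [[1.0, 2.0], [2.0, 3.0], [2.0, 1.0],
     [-1.0, -2.0], [-2.0, -1.0], [-2.0, -3.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In practice the papers above use established toolkits (e.g. SVMlight or LIBSVM-style solvers) rather than hand-rolled training loops; the sketch only shows the margin-based learning principle.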
Introduction | • The implementation of a support vector machine for the classification of CWs.
Introduction | • An analysis of the features used in the support vector machine.
Related Work | Support vector machines are powerful statistical classifiers, as employed in the ‘SVM’ method of this paper. |
Related Work | A Support Vector Machine is used to predict the familiarity of CWs in Zeng et al. |
Experimental Evaluation and Discussion | Figure 11: Changes in the number of support vectors in sentence-wise active learning
Experimental Evaluation and Discussion | Figure 12: Changes in the number of support vectors in chunk-wise active learning (MODSIMPLE)
Experimental Evaluation and Discussion | Stopping Criteria It is known that the growth rate of the number of support vectors in an SVM indicates saturation of accuracy improvement during iterations of active learning (Schohn and Cohn, 2000).
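That stopping criterion can be sketched in a few lines: halt active learning once the support-vector count stops growing appreciably between rounds. The threshold and the counts below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of a support-vector-count stopping criterion for
# active learning: stop when the relative growth in the number of
# support vectors between consecutive rounds falls below a threshold.

def should_stop(sv_counts, min_growth=0.01):
    """sv_counts: number of support vectors after each active-learning round.
    Returns True once the relative increase between the last two rounds
    drops below min_growth, signalling saturated accuracy gains."""
    if len(sv_counts) < 2:
        return False
    prev, curr = sv_counts[-2], sv_counts[-1]
    growth = (curr - prev) / prev
    return growth < min_growth

# Illustrative counts from successive rounds: growth slows, then stalls.
counts = [120, 180, 220, 240, 241]
early = should_stop(counts[:3])   # growth still large, keep labelling
late = should_stop(counts)        # growth under 1%, stop
```

The cited work bases the criterion on the same quantity (the increment rate of support vectors); the exact threshold and windowing are implementation choices.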
Abstract | We represent words as sequences of substrings, and use the substrings as features in a Support Vector Machine (SVM) ranker, which is trained to rank possible stress patterns. |
Automatic Stress Prediction | We use a support vector machine (SVM) to rank the possible patterns for each sequence (Section 3.2). |
Automatic Stress Prediction | Table 1: The steps in our stress prediction system (with orthographic and phonetic prediction examples): (1) word splitting, (2) support vector ranking of stress patterns, and (3) pattern-to-vowel
Automatic Stress Prediction | We adopt a Support Vector Machine (SVM) solution to these ranking constraints as described by Joachims (2002).
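The Joachims-style ranking SVM reduces ranking to classification on pairwise differences: each constraint "a should outrank b" becomes one training example whose feature vector is a − b. A minimal sketch of that transform, with toy feature vectors and relevance scores (not the paper's actual features):

```python
# Sketch of the pairwise transform behind an SVM ranker: every ordered
# pair of items with different relevance yields one classification
# example on the difference of their feature vectors.

def pairwise_transform(items):
    """items: list of (feature_vector, relevance) pairs.
    Returns (X, y): difference vectors labelled +1 if the first item
    should be ranked above the second, -1 otherwise."""
    X, y = [], []
    for i, (fi, ri) in enumerate(items):
        for fj, rj in items[i + 1:]:
            if ri == rj:
                continue  # ties impose no ordering constraint
            X.append([a - b for a, b in zip(fi, fj)])
            y.append(1 if ri > rj else -1)
    return X, y

# Toy items: (features, relevance); higher relevance should rank higher.
items = [([1.0, 0.0], 2), ([0.0, 1.0], 1), ([0.5, 0.5], 1)]
X_pairs, y_pairs = pairwise_transform(items)
```

A standard SVM classifier trained on `X_pairs`, `y_pairs` then yields a weight vector whose scores induce the ranking.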
Introduction | We divide each word into a sequence of substrings, and use these substrings as features for a Support Vector Machine (SVM) ranker. |
Background | 3.3 Support vector machines |
Background | Support vector machines (SVMs) are pattern classification methods that aim to find an optimal separating hyperplane between examples from two different classes (Shawe-Taylor and Cristianini, 2004). |
Background | that is, a linear function over (a subset of) training examples, where α_i is the weight associated with training example i (those for which α_i > 0 are the so-called support vectors), y_i is the label associated with training example i, K(x_i, x_j) is a kernel function that maps the input vectors x_i and x_j into the so-called feature space, and b is a bias term.
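That decision function, f(x) = Σ_i α_i y_i K(x_i, x) + b, can be evaluated directly. A small sketch with an RBF kernel and hand-picked support vectors (the α_i, y_i, and b values here are illustrative, not learned):

```python
import math

# Evaluate the SVM decision function f(x) = sum_i alpha_i*y_i*K(x_i, x) + b.
# Only examples with alpha_i > 0 (the support vectors) contribute;
# the values below are illustrative, not the result of training.

def rbf_kernel(xi, xj, gamma=0.5):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-gamma * sq_dist)

def decision_function(support_vectors, alphas, labels, b, x):
    return sum(a * y * rbf_kernel(sv, x)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

svs    = [[1.0, 1.0], [-1.0, -1.0]]  # hand-picked support vectors
alphas = [1.0, 1.0]                  # dual weights (alpha_i > 0)
labels = [1, -1]                     # y_i
b      = 0.0                         # bias term

# A point near the positive support vector gets a positive score.
score = decision_function(svs, alphas, labels, b, [0.9, 1.1])
```

Note that any training example with α_i = 0 would drop out of the sum entirely, which is exactly why only the support vectors matter at prediction time.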
Introduction | • We study several kernels for a support vector machine AA classifier under the local histograms formulation.
Related Work | applied to this problem, including support vector machine (SVM) classifiers (Houvardas and Stamatatos, 2006) and variants thereon (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a), neural networks (Tearle et al., 2008), Bayesian classifiers (Coyotl-Morales et al., 2006), decision tree methods (Koppel et al., 2009) and similarity based techniques (Keselj et al., 2003; Lambers and Veenman, 2009; Stamatatos, 2009b; Koppel et al., 2009). |
Abstract | In this work, we evaluate various parameterizations of five classifiers (including support vector machines, neural networks, and random forests) in deciphering truth from lies given transcripts of interviews with 198 victims of abuse between the ages of 4 and 7. |
Abstract | Our results show that sentence length, the mean number of clauses per utterance, and the Stajner-Mitkov measure of complexity are highly informative syntactic features, that classification accuracy varies greatly by the age of the speaker, and that accuracy up to 91.7% can be achieved by support vector machines given a sufficient amount of data.
Related Work | Two classifiers, Naive Bayes (NB) and a support vector machine (SVM), were applied on the tokenized and stemmed statements to obtain best classification accuracies of 70% (abortion topic, NB), 67.4% (death penalty topic, NB), and 77% (friend description, SVM), where the baseline was taken to be 50%.
Related Work | (2006) combined two independent systems (an acoustic Gaussian mixture model based on Mel cepstral features, and a prosodic support vector machine based on features such as pitch, energy, and duration) and achieved an accuracy of 64.4% on a test subset of the Columbia-SRI-Colorado (CSC) corpus of deceptive and non-deceptive speech (Hirschberg et al., 2005).
Results | We evaluate five classifiers: logistic regression (LR), a multilayer perceptron (MLP), naive Bayes (NB), a random forest (RF), and a support vector machine (SVM).
Abstract | Then, we design advanced similarity functions between such structures, i.e., semantic tree kernel functions, for exploiting distributional and grammatical information in Support Vector Machines. |
Introduction | The nice property of kernel functions is that they can be used in place of the scalar product of feature vectors to train algorithms such as Support Vector Machines (SVMs). |
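This substitution works because a kernel computes a dot product in an implicit feature space. A quick self-contained check with the quadratic kernel K(x, z) = (x · z)² and its explicit feature map φ(x) = (x₁², √2·x₁x₂, x₂²), using arbitrary illustrative vectors:

```python
import math

# Demonstrate the kernel trick: the quadratic kernel K(x, z) = (x.z)^2
# equals the plain dot product of the explicit feature maps phi(x), phi(z),
# so K can stand in for a scalar product during SVM training.

def quad_kernel(x, z):
    """K(x, z) = (x . z)^2 for 2-D inputs."""
    dot = x[0] * z[0] + x[1] * z[1]
    return dot ** 2

def feature_map(x):
    """Explicit feature space of the quadratic kernel:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return [x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2]

x, z = [1.0, 2.0], [3.0, 0.5]
lhs = quad_kernel(x, z)                                          # kernel value
rhs = sum(a * b for a, b in zip(feature_map(x), feature_map(z)))  # phi(x).phi(z)
# lhs and rhs agree: the kernel is a dot product in feature space.
```

The practical payoff is that the feature space never has to be built explicitly, which is what makes richer kernels (e.g. the tree kernels above) tractable inside SVMs.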
Model Analysis and Discussion | In line with the method discussed in (Pighin and Moschitti, 2009b), these fragments are extracted as they appear in most of the support vectors selected during SVM training. |
Model Analysis and Discussion | the underlying support vectors) confirm very interesting grammatical generalizations, i.e.
Abstract | We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. |
Conclusion | Our semantic frame-based model benefits from tree kernel learning using support vector machines. |
Experiments | Models are constructed using linear kernel support vector machines for both classification tasks. |
Methods | The second is a tree representation that encodes semantic frame features, and depends on tree kernel measures for support vector machine classification. |
Large-Margin Learning Framework | As we will see, it is possible to learn {Wm} using standard support vector machine (SVM) training (holding A fixed), and then make a simple gradient-based update to A (holding {Wm} fixed). |
Large-Margin Learning Framework | One property of the dual variables is that f(v_i; A) is a support vector only if its dual variable is less than 1.
Large-Margin Learning Framework | In other words, the contribution from non-support vectors to the projection matrix A is 0.
Abstract | Our method is based on recent advances in the field of statistical machine learning (multivariate capabilities of Support Vector Machines) and a rich feature space. |
Building a Discourse Parser | 2.2 Support Vector Machines |
Building a Discourse Parser | Support Vector Machines (SVM) (Vapnik, 1995) are used to model classifiers S and L. SVM refers to a set of supervised learning algorithms that are based on margin maximization. |
Prediction Experiments | In the first experiment, we compare the prediction accuracy of our SME model to a widely used discriminative learner in NLP: the linear kernel support vector machine (SVM).
Prediction Experiments | Table 1: Accuracy of the linear kernel support vector machine compared to our sparse mixed-effects model on the region and time identification tasks (K = 25).
Related Work | Traditional discriminative methods, such as the support vector machine (SVM) and logistic regression, have been very popular in various text categorization tasks (Joachims, 1998; Wang and McKeown, 2010) in the past decades.
Abstract | We applied an error detection and correction technique to the results of positive and negative document classification by Support Vector Machines (SVMs).
Framework of the System | As error candidates, we focus on support vectors (SVs) extracted from the training documents by SVM. |
Content Selection | A discriminative classifier is trained for this purpose based on Support Vector Machines (SVMs) (Joachims, 1998) with an RBF kernel. |
Experimental Setup | We also compare our approach to two supervised extractive summarization methods: Support Vector Machines (Joachims, 1998) trained with the same features
Surface Realization | We utilize a discriminative ranker based on Support Vector Regression (SVR) (Smola and Schölkopf, 2004) to rank the generated abstracts.
Evaluation framework | Aggressive Perceptron (Crammer et al., 2006), by comparing their performance with a batch learning strategy based on the Scikit-learn implementation of Support Vector Regression (SVR).
Evaluation framework | If the point is identified as a support vector, the parameters of the model are updated.
Evaluation framework | In contrast with OSVR, which keeps track of the most important points seen in the past (support vectors), the update of the weights is done without considering the previously processed instances.