Answer Grading System | Using each of these as features, we use Support Vector Machines (SVM) to produce a combined real-number grade. |
Answer Grading System | The weight vector u is trained to optimize performance in two scenarios. Regression: An SVM model for regression (SVR) is trained using the grades assigned by the instructors as the target function.
Answer Grading System | Ranking: An SVM model for ranking (SVMRank) is trained using as ranking pairs all pairs of student answers (A_s, A_t) such that grade(A_i, A_s) > grade(A_i, A_t), where A_i is the corresponding instructor answer.
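The two training scenarios above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the pairwise reduction (a binary classifier trained on feature-vector differences) stands in for SVMRank, and all data, weights, and hyperparameters are synthetic assumptions.

```python
# Sketch of the two scenarios: SVR on instructor grades, and a
# RankSVM-style model via pairwise difference vectors. Synthetic data.
import numpy as np
from sklearn.svm import SVR, LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))          # one feature vector per student answer
grades = X @ np.array([1.0, 0.5, 0.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=40)

# Regression scenario: target function = instructor-assigned grades.
svr = SVR(kernel="linear").fit(X, grades)

# Ranking scenario: for every pair (A_s, A_t) with grade(A_s) > grade(A_t),
# create a difference vector labeled +1 (and the reverse labeled -1).
pairs, labels = [], []
for s in range(len(X)):
    for t in range(len(X)):
        if grades[s] > grades[t]:
            pairs.append(X[s] - X[t]); labels.append(1)
            pairs.append(X[t] - X[s]); labels.append(-1)
rank_svm = LinearSVC(C=1.0).fit(np.array(pairs), np.array(labels))

# The learned weight vector scores answers; a higher score ranks higher.
scores = np.array(pairs) @ rank_svm.coef_.ravel()
```

The difference-vector trick reduces ranking to binary classification, which is one common way to realize an SVMRank-style objective with an off-the-shelf linear SVM.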
Discussion and Conclusions | The correlation for the BOW-only SVM model for SVMRank improved upon the best BOW feature |
Discussion and Conclusions | Likewise, using the BOW-only SVM model for SVR reduces the RMSE by .022 overall compared to the best BOW feature. |
Results | 5.4 SVM Score Grading |
Results | The SVM components of the system are run on the full dataset using a 12-fold cross validation. |
Results | Both SVM models are trained using a linear kernel. Results from both the SVR and the SVMRank implementations are reported in Table 7 along with a selection of other measures.
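A minimal sketch of the evaluation setup described above (12-fold cross-validation of a linear-kernel SVR), with synthetic data standing in for the actual grading features:

```python
# Hedged sketch: 12-fold cross-validation of a linear-kernel SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
y = X @ np.array([0.7, -0.3, 0.1, 0.5]) + rng.normal(scale=0.05, size=120)

cv = KFold(n_splits=12, shuffle=True, random_state=1)
# sklearn reports negated RMSE by convention; negate to recover RMSE.
rmse = -cross_val_score(SVR(kernel="linear"), X, y,
                        scoring="neg_root_mean_squared_error", cv=cv)
print(f"mean RMSE over 12 folds: {rmse.mean():.3f}")
```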
Authorship Attribution With LOWBOW Representations | For both types of representations we consider an SVM classifier under the one-vs-all formulation to address the AA problem.
Authorship Attribution With LOWBOW Representations | We consider SVM as the base classifier because it has proved very effective in a large number of applications, including AA (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a); furthermore, since SVMs are kernel-based methods, they allow us to use local histograms for AA through kernels that operate over sets of histograms.
Authorship Attribution With LOWBOW Representations | We build a multiclass SVM classifier from the pattern-output pairs associated with documents and their authors.
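The one-vs-all formulation described above can be sketched as follows; the three "authors" and their features are synthetic assumptions, not the LOWBOW representations or kernels used in the paper.

```python
# Hedged sketch: a one-vs-all (one-vs-rest) multiclass SVM for
# author classification; one binary SVM per author vs. all others.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
# Three "authors", each a Gaussian cluster around its own mean vector.
means = 3 * np.eye(3, 6)
X = np.vstack([rng.normal(m, 1.0, size=(30, 6)) for m in means])
y = np.repeat([0, 1, 2], 30)

# OneVsRestClassifier trains one binary SVM per author and predicts
# the author whose decision function scores highest.
clf = OneVsRestClassifier(LinearSVC(C=1.0)).fit(X, y)
```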
Experiments and Results | All our experiments use the SVM implementation provided by Canu et al. |
Experiments and Results | Columns show the true author for test documents and rows show the authors predicted by the SVM.
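A small sketch of building such a confusion matrix; note that sklearn's convention puts true labels on rows and predictions on columns, i.e., the transpose of the layout described above. The author codes BL and JM come from the surrounding text; the label vectors here are made up for illustration.

```python
# Hedged sketch: confusion matrix over author predictions.
from sklearn.metrics import confusion_matrix

true_authors = ["BL", "JM", "BL", "JM", "JM"]
predicted    = ["BL", "BL", "BL", "JM", "JM"]
cm = confusion_matrix(true_authors, predicted, labels=["BL", "JM"])
# cm[i, j] = number of documents by author i predicted as author j
print(cm)
```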
Experiments and Results | The SVM with BOW representation of character n-grams achieved recognition rates of 40% and 50% for BL and JM respectively. |
Related Work | applied to this problem, including support vector machine (SVM) classifiers (Houvardas and Stamatatos, 2006) and variants thereof (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a), neural networks (Tearle et al., 2008), Bayesian classifiers (Coyotl-Morales et al., 2006), decision tree methods (Koppel et al., 2009), and similarity-based techniques (Keselj et al., 2003; Lambers and Veenman, 2009; Stamatatos, 2009b; Koppel et al., 2009).
Related Work | In this work, we chose an SVM classifier because it has shown acceptable performance in AA and because it allows us to compare our results directly with previous work that used the same classifier.
Experimental Setup 4.1 Data Sets and Preprocessing | SVM: This method learns an SVM classifier for each language given the monolingual labeled data; the unlabeled data is not used. |
Experimental Setup 4.1 Data Sets and Preprocessing | Monolingual TSVM (TSVM-M): This method learns two transductive SVM (TSVM) classifiers given the monolingual labeled data and the monolingual unlabeled data for each language. |
Experimental Setup 4.1 Data Sets and Preprocessing | First, two monolingual SVM classifiers are built based on only the corresponding labeled data, and then they are bootstrapped by adding the most confident predicted examples from the unlabeled data into the training set. |
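The bootstrapping step described above can be sketched as a self-training loop: train on the labeled data, then fold the most confidently predicted unlabeled examples back into the training set. Data, round count, and the confidence threshold (top-k by margin) are illustrative assumptions.

```python
# Hedged sketch: bootstrapping an SVM with its most confident
# predictions on unlabeled data. Synthetic two-class data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
X_lab = np.vstack([rng.normal(-2, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
y_lab = np.repeat([0, 1], 20)
X_unlab = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])

for _ in range(3):  # a few bootstrap rounds
    clf = LinearSVC(C=1.0).fit(X_lab, y_lab)
    margin = clf.decision_function(X_unlab)
    k = 10  # add the k unlabeled points the SVM is most confident about
    top = np.argsort(-np.abs(margin))[:k]
    X_lab = np.vstack([X_lab, X_unlab[top]])
    y_lab = np.concatenate([y_lab, (margin[top] > 0).astype(int)])
    X_unlab = np.delete(X_unlab, top, axis=0)
```

Distance from the separating hyperplane serves as the confidence measure here; the papers cited in the surrounding text may use a different confidence criterion.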
Introduction | maximum entropy and SVM classifiers) as well as two alternative methods for leveraging unlabeled data (transductive SVMs (Joachims, 1999b) and co-training (Blum and Mitchell, 1998)). |
Results and Analysis | Among the baselines, the best is Co-SVM; TSVMs do not always improve performance using the unlabeled data compared to the standalone SVM; and TSVM-B outperforms TSVM-M except for Chinese in the second setting.
Results and Analysis | Significance is tested using paired t-tests with p < 0.05: denotes statistical significance compared to the corresponding performance of MaxEnt; * denotes statistical significance compared to SVM; and r denotes statistical significance compared to Co-SVM.
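The paired t-test used for these significance marks can be sketched as below; the two per-fold accuracy vectors are invented for illustration and are not the paper's numbers.

```python
# Hedged sketch: paired t-test between two systems' per-fold accuracies.
from scipy.stats import ttest_rel

co_svm = [0.81, 0.83, 0.80, 0.84, 0.82]  # illustrative per-fold scores
svm    = [0.78, 0.79, 0.77, 0.80, 0.78]
t, p = ttest_rel(co_svm, svm)            # paired: same folds for both systems
significant = p < 0.05
```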
Approach | We use Support Vector Machines (SVM) with a linear kernel as our classifier.
Evaluation | Sentence Filtering Evaluation: We used Support Vector Machines (SVM) with a linear kernel as our classifier.
Evaluation | Sentence Classification Evaluation: We used an SVM in this step as well.
Evaluation | Author Name Replacement Evaluation: The classifier used in this task is also an SVM.
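A linear-kernel SVM text classifier of the kind used in the three evaluations above can be sketched as a bag-of-words pipeline; the toy sentences, labels, and TF-IDF features are illustrative assumptions, not the paper's feature set.

```python
# Hedged sketch: linear SVM over TF-IDF features for sentence filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = ["keep this sentence", "drop this sentence",
             "keep that line", "drop that line"]
labels = [1, 0, 1, 0]  # 1 = keep, 0 = filter out

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(sentences, labels)
pred = clf.predict(["keep something new"])
```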
Experimental Settings | The PZD system relies on a set of SVM classifiers trained using morphological and lexical features. |
Experimental Settings | The SVM classifiers are built using Yamcha (Kudo and Matsumoto, 2003). |
Experimental Settings | Simple features are used directly by the PZD SVM models, whereas binned features' (numerical) values are reduced to a small set of labeled categories whose labels are used as model features.
Approach Overview | In each of the first two steps, a binary SVM classifier is built to perform the classification. |
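The two-step setup above, with one binary SVM per step, can be sketched as a cascade: step 1 filters subjective tweets, step 2 classifies their polarity. The features and label rules below are synthetic stand-ins for the actual tweet features.

```python
# Hedged sketch: two chained binary SVMs (subjectivity, then polarity).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X = rng.normal(size=(90, 5))
subjective = (X[:, 0] > 0).astype(int)   # step-1 labels (synthetic rule)
polarity = (X[:, 1] > 0).astype(int)     # step-2 labels (synthetic rule)

step1 = LinearSVC().fit(X, subjective)
mask = subjective == 1
step2 = LinearSVC().fit(X[mask], polarity[mask])  # trained on subjective only

def classify(x):
    """Return 'objective', 'positive', or 'negative' for one tweet vector."""
    if step1.predict([x])[0] == 0:
        return "objective"
    return "positive" if step2.predict([x])[0] == 1 else "negative"
```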
Experiments | In the experiments, we consider the positive and negative tweets annotated by humans as subjective tweets (i.e., positive instances in the SVM classifiers), which amount to 727 tweets. |
Related Work | According to the experimental results, machine-learning-based classifiers outperform the unsupervised approach, with the best performance achieved by the SVM classifier using unigram presence features.
Related Work | In contrast, Barbosa and Feng (2010) propose a two-step approach to classify the sentiment of tweets using SVM classifiers with abstract features.
Abstract | We applied a modified version of the HITS algorithm and an SVM classifier trained with pseudo-relevant data for article analysis.
Disputant relation-based method | As for the rest of the sentences, a similarity analysis is conducted with an SVM classifier. |
Disputant relation-based method | where S_U: the total number of sentences in the article; Q_i: the number of quotes from side i; Q_ij: the number of quotes from either side i or j; S_i: the number of sentences classified to side i by the SVM.
Introduction | We applied a modified version of the HITS algorithm to identify the key opponents of an issue, and used disputant extraction techniques combined with an SVM classifier for article analysis.
Structured Learning | We use a soft-margin support vector machine (SVM) (Vapnik, 1998) objective over the full structured output space (Taskar et al., 2003; Tsochantaridis et al., 2004) of extractive and compressive summaries:
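The equation itself is not reproduced in this excerpt. As a hedged reconstruction in the standard notation of the cited formulations (Taskar et al., 2003; Tsochantaridis et al., 2004), not necessarily the paper's exact symbols, the soft-margin structured SVM objective takes the form:

```latex
\min_{w,\; \xi \ge 0} \quad \frac{1}{2}\lVert w \rVert^2 + \frac{C}{n} \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
w^\top \phi(x_i, y_i) - w^\top \phi(x_i, y) \;\ge\; \ell(y_i, y) - \xi_i
\qquad \forall i,\ \forall y \in \mathcal{Y}(x_i)
```

where \phi is the joint feature map, \ell the loss between summaries, and \mathcal{Y}(x_i) the space of valid extractive and compressive summaries for document x_i.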
Structured Learning | In our application, this approach efficiently solves the structured SVM training problem up to some specified tolerance ε.
Structured Learning | Thus, if loss-augmented prediction turns up no new constraints on a given iteration, the current solution to the reduced problem, w and ξ, is the solution to the full SVM training problem.
Abstract | Oberlander and Nowson explore using a Naïve Bayes and an SVM classifier to perform binary classification of text on each personality dimension.
Abstract | The results of the SVM classifier, shown in line (1) of Table 2, were fairly poor. |
Abstract | Training a multiclass SVM on the binned n-gram features from (5) produces 51.6% cross-validation accuracy on training data and 44.4% accuracy on the weighted test set (both numbers should be compared to a 33% baseline). |
Experiments and Results | We experimented with an SVM classifier and found logistic regression to do slightly better. |
Related Work | They use an SVM classifier with only n-grams as features. |
Related Work | Nowson et al. (2006) employed dictionary- and n-gram-based content analysis and achieved 91.5% accuracy using an SVM classifier.
Approach | In its basic form, a binary SVM classifier learns a linear threshold function that discriminates data points of two categories. |
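The linear threshold function described above can be made concrete: a trained binary SVM predicts sign(w·x + b). The data below are synthetic, and the manual threshold is checked against the fitted model's own predictions.

```python
# Hedged sketch: a binary linear SVM as a threshold function sign(w.x + b).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 0.5, (25, 3)), rng.normal(2, 0.5, (25, 3))])
y = np.repeat([-1, 1], 25)

clf = LinearSVC().fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]
# Manually applying the learned threshold reproduces clf.predict here.
manual = np.where(X @ w + b >= 0, 1, -1)
```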
Evaluation | We trained an SVM regression model with our full set of feature types and compared it with the SVM rank preference model.
Introduction | In this paper, we report experiments on rank preference Support Vector Machines (SVMs) trained on a relatively small amount of data, on identification of appropriate feature types derived automatically from generic text processing tools, on comparison with a regression SVM model, and on the robustness of the best model to ‘outlier’ texts. |