Index of papers in Proc. ACL 2010 that mention
  • cross validation
Hassan, Ahmed and Radev, Dragomir R.
Experiments
We used 10-fold cross validation for all tests.
Experiments
Table 2 compares the performance using 10-fold cross validation.
Experiments
Table 2: Accuracy for SO-PMI with different dataset sizes, the spin model, and the random walks model for 10-fold cross validation and 14 seeds.
cross validation is mentioned in 7 sentences in this paper.
Tratz, Stephen and Hovy, Eduard
Automated Classification
5.2 Cross Validation Experiments
Automated Classification
We performed 10-fold cross validation on our dataset, and, for the purpose of comparison, we also performed 5-fold cross validation on Ó Séaghdha's (2007) dataset using his folds.
Automated Classification
To assess the impact of the various features, we ran the cross validation experiments for each feature type, alternating between including only one
cross validation is mentioned in 4 sentences in this paper.
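The Tratz and Hovy snippets above describe running the cross validation experiments once per feature type, including only one group of features at a time. The sketch below illustrates that kind of ablation loop; the feature-group names, the toy data, and the logistic-regression classifier are assumptions for illustration, not the setup from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200
# Toy feature groups: each name maps to the feature matrix for that group only.
feature_groups = {
    "wordnet": rng.normal(size=(n_samples, 5)),
    "surface": rng.normal(size=(n_samples, 3)),
    "ngram": rng.normal(size=(n_samples, 4)),
}
y = rng.integers(0, 2, size=n_samples)

# Run 10-fold cross validation including only one feature group at a time.
for name, X in feature_groups.items():
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
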
Durrani, Nadir and Sajjad, Hassan and Fraser, Alexander and Schmid, Helmut
Evaluation
This comprises 108K sentences from the data made available by the University of Leipzig + 5600 sentences from the training data of each fold during cross validation.
Evaluation
We perform a 5-fold cross validation taking 4/5 of the data as training and 1/5 as test data.
Evaluation
Baseline P190: We ran Moses (Koehn et al., 2007) using Koehn's training scripts, doing a 5-fold cross validation with no reordering.
cross validation is mentioned in 3 sentences in this paper.
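As a rough illustration of the 5-fold protocol quoted above (4/5 of the data for training and 1/5 for testing in each fold), here is a minimal sketch using scikit-learn's KFold; the stand-in sentence list is an assumption, not the Leipzig corpus used in the paper.

from sklearn.model_selection import KFold

sentences = [f"sentence {i}" for i in range(100)]  # stand-in corpus
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

# Each fold keeps 4/5 of the sentences for training and 1/5 for testing.
for fold, (train_idx, test_idx) in enumerate(kfold.split(sentences), start=1):
    train = [sentences[i] for i in train_idx]
    test = [sentences[i] for i in test_idx]
    print(f"fold {fold}: {len(train)} train / {len(test)} test")
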
Li, Shoushan and Huang, Chu-Ren and Zhou, Guodong and Lee, Sophia Yat Mei
Unsupervised Mining of Personal and Impersonal Views
x_meta = < P(x | l=1), P(x | l=2), P(x | l=3) >
In our experiments, we perform stacking with 4-fold cross validation to generate meta-training data, where each fold is used as the development data and the other three folds are used to train the base classifiers in the training phase.
Unsupervised Mining of Personal and Impersonal Views
4-fold cross validation is performed for supervised sentiment classification.
Unsupervised Mining of Personal and Impersonal Views
Also, we find that our performances are similar to the ones (described as fully supervised results) reported in Dasgupta and Ng (2009) where the same data in the four domains are used and 10-fold cross validation is performed.
cross validation is mentioned in 3 sentences in this paper.
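The Li et al. snippet above outlines stacking with 4-fold cross validation: each fold serves as development data while the base classifiers are trained on the other three folds, and their predictions become meta-training features. The sketch below follows that recipe under assumed base classifiers (logistic regression and naive Bayes) and toy data; it is not the authors' personal/impersonal-view setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))    # toy features, stand-ins for the paper's views
y = rng.integers(0, 2, size=120)

base_learners = [LogisticRegression(max_iter=1000), GaussianNB()]
meta_X = np.zeros((len(X), len(base_learners)))  # one meta-feature per base classifier

# 4-fold cross validation: each fold is held out as development data while the
# base classifiers are trained on the remaining three folds.
for train_idx, dev_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    for j, clf in enumerate(base_learners):
        clf.fit(X[train_idx], y[train_idx])
        # Positive-class probability on the held-out fold becomes a meta-feature.
        meta_X[dev_idx, j] = clf.predict_proba(X[dev_idx])[:, 1]

# The meta-classifier is then trained on the accumulated meta-training data.
meta_clf = LogisticRegression().fit(meta_X, y)

For a single base classifier, scikit-learn's cross_val_predict(clf, X, y, cv=4, method="predict_proba") produces the same kind of out-of-fold probabilities in one call.
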
Litvak, Marina and Last, Mark and Friedman, Menahem
Experiments
We estimated the ROUGE metric using 10-fold cross validation.
Experiments
Each corpus was then subjected to 10-fold cross validation, and the average results for training and testing were calculated.
Experiments
Table 3: Results of 10-fold cross validation
        ENG     HEB     MULT
Train   0.4483  0.5993  0.5205
Test    0.4461  0.5936  0.5027
cross validation is mentioned in 3 sentences in this paper.
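The Litvak et al. snippets above report averaged training and testing results over 10 folds (Table 3). Here is a minimal sketch of that reporting pattern, assuming toy data and accuracy in place of the ROUGE scores used in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)

# 10-fold cross validation, keeping both the training-side and testing-side scores.
results = cross_validate(LogisticRegression(max_iter=1000), X, y,
                         cv=10, return_train_score=True)
print("mean train score:", results["train_score"].mean())
print("mean test score: ", results["test_score"].mean())
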
Trogkanis, Nikolaos and Elkan, Charles
Additional experiments
Because cross validation is applied, errors are always measured on testing subsets that are disjoint from the corresponding training subsets.
Experimental design
We use tenfold cross validation for the experiments.
Experimental design
These are learning experiments so we also use tenfold cross validation in the same way as with CRF++.
cross validation is mentioned in 3 sentences in this paper.
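The first Trogkanis and Elkan snippet states the key property of cross validation: every testing subset is disjoint from its corresponding training subset. The small sketch below checks that property directly, using an arbitrary index range rather than the data from the paper.

from sklearn.model_selection import KFold

indices = list(range(50))
for train_idx, test_idx in KFold(n_splits=10).split(indices):
    # The training and testing subsets of every fold share no elements.
    assert set(train_idx).isdisjoint(test_idx)
    assert len(train_idx) + len(test_idx) == len(indices)
print("all 10 folds have disjoint training and testing subsets")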