Index of papers in Proc. ACL 2009 that mention
  • cross validation
Kruengkrai, Canasai and Uchimoto, Kiyotaka and Kazama, Jun'ichi and Wang, Yiou and Torisawa, Kentaro and Isahara, Hitoshi
Experiments
For the error-driven policy, we collected unidentified unknown words using 10-fold cross validation on the training set, as previously described in Section 3.
Experiments
Table 9: Comparison of averaged F1 results (by 10-fold cross validation) with previous studies on CTB 3.0.
Experiments
Unfortunately, Zhang and Clark’s experimental setting did not allow us to use our error-driven policy since performing 10-fold cross validation again on each main cross validation trial is computationally too expensive.
Policies for correct path selection
Step 0: Divide the training corpus into ten equal sets and perform 10-fold cross validation to find the errors.
Policies for correct path selection
After ten cross validation runs, we get a list of the unidentified unknown words derived from the whole training corpus.
Policies for correct path selection
Note that the unidentified unknown words found in the cross validation are not necessarily infrequent words, though some overlap may exist.
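The procedure in these steps (divide the corpus into ten equal sets, hold each out once, pool the errors over all runs) can be sketched as a fold generator. This is a minimal illustration with placeholder corpus items; the actual trainer and decoder of the paper are not shown.

```python
def ten_fold_splits(corpus, k=10):
    """Yield (train, held_out) pairs so that every item is held out exactly once.

    In the error-driven policy, a model would be trained on each `train`
    portion and its mistakes on `held_out` collected across all k runs.
    """
    folds = [corpus[i::k] for i in range(k)]  # k roughly equal sets
    for i in range(k):
        held_out = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, held_out

# Toy corpus of 100 placeholder items.
corpus = list(range(100))
splits = list(ten_fold_splits(corpus))
assert len(splits) == 10
# Each item appears in exactly one held-out set across the ten runs.
assert sorted(x for _, held in splits for x in held) == corpus
```

Pooling the errors from the ten held-out sets then yields a list drawn from the whole training corpus, as the snippet above describes.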
cross validation is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Persing, Isaac and Ng, Vincent
Baseline Approaches
This amounts to using three folds for training and one fold for development in each cross validation experiment.
Dataset
Since we will perform 5-fold cross validation in our experiments, we also show the number of reports labeled with each shaper under the “F” columns for each fold.
Evaluation
Micro-averaged 5-fold cross validation results of this baseline for all 14 shapers and for just 10 minority classes (due to our focus on improving minority class prediction) are expressed as percentages in terms of precision (P), recall (R), and F-measure (F) in the first row of Table 4.
Evaluation
Table 4: 5-fold cross validation results.
Our Bootstrapping Algorithm
Whichever baseline is used, we need to reserve one of the five folds to tune the parameter k in our cross validation experiments.
cross validation is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Experiments of Parsing
Here we tried the corpus weighting technique for an optimal combination of CTB, CDTfs and parsed PDC, and chose the relative weight of both CTB and CDTfs as 10 by cross validation on the development set.
Our Two-Step Solution
The number of removed trees will be determined by cross validation on the development set.
Our Two-Step Solution
The value of A will be tuned by cross validation on the development set.
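The tuning steps above amount to a grid search over candidate values scored on the development set. The sketch below illustrates this for the relative corpus weight; `dev_score` is a hypothetical stand-in for retraining with a given weight and measuring parsing accuracy on development data, not the authors' code.

```python
def tune_by_dev_set(candidates, dev_score):
    """Pick the candidate value (e.g. a relative corpus weight) that
    maximizes the development-set score."""
    return max(candidates, key=dev_score)

# Toy scorer whose optimum is at weight 10, echoing the value the paper
# reports choosing for CTB and CDTfs.
best_w = tune_by_dev_set(range(1, 21), lambda w: -(w - 10) ** 2)
assert best_w == 10
```

The same one-dimensional search applies to the number of removed trees and to the parameter A mentioned above.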
cross validation is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Sun, Xu and Okazaki, Naoaki and Tsujii, Jun'ichi
Recognition as a Generation Task
(2008), we perform 10-fold cross validation.
Results and Discussion
Table 5: Results of English abbreviation generation with fivefold cross validation.
Results and Discussion
Concerning the training time in the cross validation, we simply chose the DPLVM for comparison.
cross validation is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: