Index of papers in Proc. ACL 2008 that mention
  • development set
Bartlett, Susan and Kondrak, Grzegorz and Cherry, Colin
Syllabification Experiments
Recall that 5K training examples were held out as a development set.
Syllabification with Structured SVMs
We use a linear kernel, and tune the SVM’s cost parameter on a development set.
Syllabification with Structured SVMs
While developing our tagging schemes and feature representation, we used a development set of 5K words held out from our CELEX training data.
Syllabification with Structured SVMs
Our development set experiments suggested that numbering ONC tags increases their performance.
development set is mentioned in 7 sentences in this paper.
Bender, Emily M.
Wambaya grammar
In addition, this grammar has relatively low ambiguity, assigning on average 11.89 parses per item in the development set.
Wambaya grammar
Of the 92 sentences in this text, 20 overlapped with items in the development set, so the…
Wambaya grammar
The parsed portion of the development set (732 items) constitutes a sufficiently large corpus to train a parse selection model using the Redwoods disambiguation technology (Toutanova et al., 2005).
development set is mentioned in 5 sentences in this paper.
Ji, Heng and Grishman, Ralph
Experimental Results and Analysis
We used 10 newswire texts from ACE 2005 training corpora (from March to May of 2003) as our development set, and then conducted a blind test on a separate set of 40 ACE 2005 newswire texts.
Experimental Results and Analysis
We select the thresholds (d with k = 1–13) for the various confidence metrics by optimizing the F-measure score of each rule on the development set, as shown in Figures 2 and 3.
Experimental Results and Analysis
The labeled point on each curve shows the best F-measure that can be obtained on the development set by adjusting the threshold for that rule.
development set is mentioned in 5 sentences in this paper.
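The tuning procedure these excerpts describe, sweeping a rule's confidence threshold and keeping the value that maximizes F-measure on the development set, can be sketched as follows. Function and variable names are illustrative, not taken from the paper:

```python
def f_measure(tp, fp, fn):
    """Balanced F1 from true-positive, false-positive, and false-negative counts."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def tune_threshold(scored_predictions, gold, candidate_thresholds):
    """Pick the confidence threshold that maximizes F1 on a development set.

    scored_predictions: list of (item, confidence) pairs proposed by a rule.
    gold: set of items that are correct according to the dev-set annotation.
    """
    best_t, best_f = None, -1.0
    for t in candidate_thresholds:
        kept = {item for item, conf in scored_predictions if conf >= t}
        tp = len(kept & gold)
        fp = len(kept - gold)
        fn = len(gold - kept)
        f = f_measure(tp, fp, fn)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f
```

The labeled point on each curve in the paper's Figures 2 and 3 corresponds to the `(best_t, best_f)` pair this sweep returns for one rule.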
Blunsom, Phil and Cohn, Trevor and Osborne, Miles
Discussion and Further Work
sentences (development set)
Evaluation
A comparison on the impact of accounting for all derivations in training and decoding (development set).
Evaluation
The effect of the beam width (log-scale) on max-translation decoding (development set).
Evaluation
An informal comparison of the outputs on the development set, presented in Table 4, suggests that the…
development set is mentioned in 4 sentences in this paper.
Feng, Yansong and Lapata, Mirella
BBC News Database
the kernel whose value is optimized on the development set.
BBC News Database
where α is a smoothing parameter tuned on the development set, sa is the annotation for the latent variable s, and sd its corresponding document.
BBC News Database
where μ is a smoothing parameter estimated on the development set, δw,sa is a Boolean variable denoting whether w appears in the annotation sa, and Nw is the number of latent variables that contain w in their annotations.
development set is mentioned in 4 sentences in this paper.
Ganchev, Kuzman and Graça, João V. and Taskar, Ben
Phrase-based machine translation
For each alignment model and decoding type we train Moses and use MERT optimization to tune its parameters on a development set .
Phrase-based machine translation
For Hansards we randomly chose 1000 and 500 sentences from test 1 and test 2 to be testing and development sets respectively.
Phrase-based machine translation
In principle, we would like to tune the threshold by optimizing BLEU score on a development set, but that is impractical for experiments with many pairs of languages.
Word alignment results
Figure 7 shows an example of the same sentence, using the same model, where in one case Viterbi decoding was used and in the other case Posterior decoding tuned to minimize AER on a development set.
development set is mentioned in 4 sentences in this paper.
Branavan, S.R.K. and Chen, Harr and Eisenstein, Jacob and Barzilay, Regina
Experimental Setup
Training: Our model needs to be provided with the number of clusters K. We set K large enough for the model to learn effectively on the development set.
Experimental Setup
A threshold for this proportion is set for each property via the development set .
Model Description
Properties with proportions above a set threshold (tuned on a development set) are predicted as being supported.
development set is mentioned in 3 sentences in this paper.
Davidov, Dmitry and Rappoport, Ari
Corpora and Parameters
We used part of the Russian corpus as a development set for determining the parameters.
Corpora and Parameters
On our development set we have tested various parameter settings.
Corpora and Parameters
In our experiments we have used the following values (again, determined using a development set) for these parameters: F0: 1,000 words per million (wpm); FH: 100 wpm; FB: 1.2 wpm; N: 500 words; W: 5 words; L: 30%; S: 2/3; α: 0.1.
development set is mentioned in 3 sentences in this paper.
Deng, Yonggang and Xu, Jia and Gao, Yuqing
A Generic Phrase Training Procedure
In the final step 4 (line 15), the parameters {λm, τ} are discriminatively trained on a development set using the downhill simplex method (Nelder and Mead, 1965).
Discussions
We use feature functions to decide the order, and the threshold τ to locate the boundary, guided by a development set.
Introduction
A significant deviation from most other approaches is that the framework is parameterized and can be optimized jointly with the decoder to maximize translation performance on a development set .
development set is mentioned in 3 sentences in this paper.
Espinosa, Dominic and White, Michael and Mehay, Dennis
Results and Discussion
The whole feature set was found in feature ablation testing on the development set to significantly outperform all other feature subsets (p < 2.2 × 10⁻¹⁶).
Results and Discussion
The development set (00) was used to tune the β parameter to obtain reasonable hypertag ambiguity levels; the model was not otherwise tuned to it.
Results and Discussion
limit; on the development set, this improvement eliminates more than the number of known search errors (cf.…
development set is mentioned in 3 sentences in this paper.
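In hypertagging, as in CCG supertagging, a β parameter of this kind typically controls multitagging ambiguity: for each word, every tag whose probability is at least β times the best tag's probability is kept, so lowering β raises the average number of tags per word. A minimal sketch of tuning β on a development set to hit a target ambiguity level, under that assumption (all names are illustrative):

```python
def tags_kept(tag_probs, beta):
    """Multitagging: keep every tag within a factor beta of the best tag."""
    best = max(tag_probs.values())
    return [t for t, p in tag_probs.items() if p >= beta * best]

def tune_beta(dev_tag_probs, target_ambiguity, candidates):
    """Choose the beta whose average tags-per-word on the dev set is closest
    to the desired ambiguity level.

    dev_tag_probs: list of {tag: probability} dicts, one per dev-set word.
    """
    def avg_ambiguity(beta):
        return sum(len(tags_kept(tp, beta)) for tp in dev_tag_probs) / len(dev_tag_probs)
    return min(candidates, key=lambda b: abs(avg_ambiguity(b) - target_ambiguity))
```

A larger β keeps only tags close in probability to the best one (low ambiguity); β near zero keeps almost everything.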
Huang, Liang
Experiments
We use the standard split of the Treebank: sections 02-21 as the training data (39832 sentences), section 22 as the development set (1700 sentences), and section 23 as the test set (2416 sentences).
Experiments
The development set and the test set are parsed with a model trained on all 39832 training sentences.
Experiments
We use the development set to determine the optimal number of iterations for averaged perceptron, and report the F1 score on the test set.
development set is mentioned in 3 sentences in this paper.
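Using the development set to determine the number of perceptron iterations, as this excerpt describes, amounts to early stopping: train for up to N epochs and keep the model from the epoch that scores best on held-out data. A generic sketch, with stand-in training and evaluation callables (names are illustrative, not from the paper):

```python
def select_best_epoch(train_one_epoch, evaluate_dev, max_epochs=30):
    """Run up to max_epochs of training; return the 1-based epoch number,
    dev score, and model of the epoch that does best on the development set.

    train_one_epoch: callable that performs one pass over the training data
        and returns the current (e.g. averaged) model.
    evaluate_dev: callable mapping a model to a dev-set score (e.g. accuracy).
    """
    best_epoch, best_score, best_model = 0, float("-inf"), None
    for epoch in range(1, max_epochs + 1):
        model = train_one_epoch()
        score = evaluate_dev(model)
        if score > best_score:
            best_epoch, best_score, best_model = epoch, score, model
    return best_epoch, best_score, best_model
```

The final F1 on the test set is then reported with the model selected this way, never with a model tuned on the test data itself.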
Koo, Terry and Carreras, Xavier and Collins, Michael
Experiments
The English experiments were performed on the Penn Treebank (Marcus et al., 1993), using a standard set of head-selection rules (Yamada and Matsumoto, 2003) to convert the phrase structure syntax of the Treebank to a dependency tree representation. We split the Treebank into a training set (Sections 2–21), a development set (Section 22), and several test sets (Sections 0, 1, 23, and 24).
Experiments
of iterations of perceptron training, we performed up to 30 iterations and chose the iteration which optimized accuracy on the development set.
Experiments
Table 6: Parent-prediction accuracies of unlabeled Czech parsers on the PDT 1.0 development set .
development set is mentioned in 3 sentences in this paper.