Index of papers in Proc. ACL 2008 that mention
  • perceptron
Huang, Liang
Conclusion
With efficient approximate decoding, perceptron training on the whole Treebank becomes practical and can be done in about a day even with a Python implementation.
Experiments
This result confirms that our feature set design is appropriate, and the averaged perceptron learner is a reasonable candidate for reranking.
Experiments
We use the development set to determine the optimal number of iterations for averaged perceptron, and report the F1 score on the test set.
Experiments
column is for feature extraction, and the training column shows the number of perceptron iterations that achieved the best results on the dev set, as well as the average time per iteration.
Forest Reranking
3.1 Generic Reranking with the Perceptron
Forest Reranking
In this work we use the averaged perceptron algorithm (Collins, 2002), since it is an online algorithm which is much simpler and orders of magnitude faster than Boosting and MaxEnt methods.
Forest Reranking
Shown in Pseudocode 1, the perceptron algorithm makes several passes over the whole training data, and in each iteration, for each sentence si, it tries to predict the best parse ŷi among the candidates cand(si) using the current weight setting.
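The excerpts above describe Collins-style averaged perceptron training for reranking. A minimal sketch of that loop, assuming each candidate parse is already represented as a sparse feature dict and each training example carries the index of its oracle parse; the names dot, train_data, and oracle are illustrative assumptions, not Huang's implementation:

    from collections import defaultdict

    def dot(weights, feats):
        # Sparse dot product between the weight vector and a feature dict.
        return sum(weights[f] * v for f, v in feats.items())

    def averaged_perceptron(train_data, num_iters):
        # train_data: list of (candidates, oracle) pairs, where candidates
        # is a list of sparse feature dicts and oracle indexes the best one.
        weights = defaultdict(float)
        summed = defaultdict(float)   # running sum for averaging (Collins, 2002)
        for _ in range(num_iters):
            for candidates, oracle in train_data:
                # Predict the best candidate under the current weights.
                pred = max(range(len(candidates)),
                           key=lambda i: dot(weights, candidates[i]))
                if pred != oracle:
                    # Mistake-driven update: reward the oracle parse's
                    # features, penalize the predicted parse's features.
                    for f, v in candidates[oracle].items():
                        weights[f] += v
                    for f, v in candidates[pred].items():
                        weights[f] -= v
                # Accumulate the current weights after every example.
                for f, v in weights.items():
                    summed[f] += v
        total = num_iters * len(train_data)
        return {f: v / total for f, v in summed.items()}

Returning the average of the weight vectors seen after every example, rather than only the final vector, is what makes this the averaged perceptron referred to throughout these excerpts.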
Introduction
his parser, and Wenbin Jiang for guidance on perceptron averaging.
perceptron is mentioned in 11 sentences in this paper.
Koo, Terry and Carreras, Xavier and Collins, Michael
Experiments
We trained the parsers using the averaged perceptron (Freund and Schapire, 1999; Collins, 2002), which represents a balance between strong performance and fast training times.
Experiments
To select the number of iterations of perceptron training, we performed up to 30 iterations and chose the iteration which optimized accuracy on the development set.
Experiments
Due to the sparsity of the perceptron updates, however, only a small fraction of the possible features were active in our trained models.
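The three excerpts above describe the training protocol: up to 30 epochs of perceptron updates, a stopping point chosen by dev-set accuracy, and updates that touch only the features active in the gold and predicted structures, which is why trained models stay sparse. A sketch of that protocol, where predict and evaluate are assumed placeholder helpers rather than anything from the paper, and weight averaging is omitted for brevity:

    import copy

    def train_with_dev_selection(train_data, dev_data, predict, evaluate,
                                 max_iters=30):
        # Run up to max_iters epochs; keep the weights from the epoch
        # that scores best on the development set.
        weights = {}
        best_weights, best_score = {}, float("-inf")
        for _ in range(max_iters):
            for x, gold_feats in train_data:
                pred_feats = predict(weights, x)
                if pred_feats != gold_feats:
                    # Sparse update: only features occurring in the gold or
                    # predicted structure are touched, so most of the
                    # possible features never become active.
                    for f, v in gold_feats.items():
                        weights[f] = weights.get(f, 0.0) + v
                    for f, v in pred_feats.items():
                        weights[f] = weights.get(f, 0.0) - v
            score = evaluate(weights, dev_data)
            if score > best_score:
                best_score = score
                best_weights = copy.deepcopy(weights)
        return best_weights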
perceptron is mentioned in 4 sentences in this paper.
Surdeanu, Mihai and Ciaramita, Massimiliano and Zaragoza, Hugo
Approach
For this reason we choose as a ranking algorithm the Perceptron, which is both accurate and efficient and can be trained with online protocols.
Approach
Specifically, we implement the ranking Perceptron proposed by Shen and Joshi (2005), which reduces the ranking problem to a binary classification problem.
Approach
For regularization purposes, we use as a final model the average of all Perceptron models posited during training.
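The excerpts above describe a ranking Perceptron in which each pair of candidates (better, worse) becomes one binary decision on their feature difference, with the final model averaged for regularization. A hedged sketch of a Shen and Joshi (2005)-style reduction, not their actual code; the margin value and the (pos, neg) pair layout are assumptions:

    from collections import defaultdict

    def dot(weights, feats):
        # Sparse dot product between the weight vector and a feature dict.
        return sum(weights[f] * v for f, v in feats.items())

    def ranking_perceptron(pairs, num_iters, margin=1.0):
        # pairs: list of (pos, neg) sparse feature dicts, where pos belongs
        # to the candidate that should be ranked higher than neg.
        weights = defaultdict(float)
        summed = defaultdict(float)
        seen = 0
        for _ in range(num_iters):
            for pos, neg in pairs:
                # Reduce ranking to binary classification on the difference
                # vector: require w . (pos - neg) to exceed the margin.
                diff = defaultdict(float)
                for f, v in pos.items():
                    diff[f] += v
                for f, v in neg.items():
                    diff[f] -= v
                if dot(weights, diff) <= margin:
                    for f, v in diff.items():
                        weights[f] += v
                # Accumulate for the averaged final model.
                for f, v in weights.items():
                    summed[f] += v
                seen += 1
        return {f: v / seen for f, v in summed.items()}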
perceptron is mentioned in 3 sentences in this paper.