Index of papers in Proc. ACL 2009 that mention
  • reranking
Cheung, Jackie Chi Kit and Penn, Gerald
Conclusion and Future Work
We further examined the results of doing a simple reranking process, constraining the output parse to put paired punctuation in the same clause.
Conclusion and Future Work
This reranking was found to result in a minor performance gain.
Experiments
4.4 Reranking for Paired Punctuation
Experiments
To rectify this problem, we performed a simple post-hoc reranking of the 50-best parses produced by the best parameter settings (+ Gold tags, - Edge labels), selecting the first parse that places paired punctuation in the same clause, or returning the best parse if none of the 50 parses satisfy the constraint.
Experiments
Overall, 38 sentences were parsed with paired punctuation in different clauses, of which 16 were reranked.
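A minimal Python sketch of the post-hoc constraint described in the excerpts above; the 50-best list and the clause-boundary predicate paired_punct_in_same_clause are hypothetical stand-ins, not the authors' actual code:

def rerank_paired_punctuation(parses, paired_punct_in_same_clause):
    # Return the first of the n-best parses that keeps paired punctuation
    # (e.g. matching quotes or brackets) inside a single clause; fall back
    # to the 1-best parse if no candidate satisfies the constraint.
    for parse in parses:
        if paired_punct_in_same_clause(parse):
            return parse
    return parses[0]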
Introduction
A further reranking of the parser output based on a constraint involving paired punctuation produces a slight additional performance gain.
reranking is mentioned in 12 sentences in this paper.
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Experiments of Parsing
We used Charniak’s maximum entropy inspired parser and their reranker (Charniak and Johnson, 2005) for target grammar parsing, called a generative parser (GP) and a reranking parser (RP) respectively.
Experiments of Parsing
Table 5: Results of the generative parser (GP) and the reranking parser (RP) on the test set, when trained on only CTB training set or an optimal combination of CTB training set and CDTPS.
Experiments of Parsing
Finally we evaluated two parsing models, the generative parser and the reranking parser, on the test set, with results shown in Table 5.
Introduction
When coupled with a self-training technique, a reranking parser with CTB and converted CDT as labeled data achieves an 85.2% f-score on the CTB test set, an absolute 1.0% improvement (6% error reduction) over the previous best result for Chinese parsing.
reranking is mentioned in 14 sentences in this paper.
Owczarzak, Karolina
Abstract
Using a reranking parser and a Lexical-Functional Grammar (LFG) annotation, we produce a set of dependency triples for each summary.
Discussion and future work
Its core modules were updated as well: Minipar was replaced with the Charniak-Johnson reranking parser (Charniak and Johnson, 2005), Named Entity identification was added, and the BE extraction is conducted using a set of Tregex rules (Levy and Andrew, 2006).
Discussion and future work
Since our method, presented in this paper, also uses the reranking parser, as well as WordNet, it would be interesting to compare both methods directly in terms of the performance of the dependency extraction procedure.
Introduction
(2004) applied to the output of the reranking parser of Charniak and Johnson (2005), whereas in BE (in the version presented here) dependencies are generated by the Minipar parser (Lin, 1995).
Lexical-Functional Grammar and the LFG parser
First, a summary is parsed with the Charniak-Johnson reranking parser (Charniak and Johnson, 2005) to obtain the phrase-structure tree.
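A rough sketch of the pipeline step described above; the parser object and the lfg_annotate helper are hypothetical placeholders, not the Charniak-Johnson parser or the LFG annotation tools themselves:

def summary_to_triples(sentences, parser, lfg_annotate):
    # Parse each summary sentence into a phrase-structure tree, then map
    # the tree to LFG-style dependency triples (relation, head, dependent).
    triples = set()
    for sentence in sentences:
        tree = parser.best_parse(sentence)    # 1-best phrase-structure tree
        triples.update(lfg_annotate(tree))    # hypothetical LFG annotation step
    return triples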
reranking is mentioned in 5 sentences in this paper.
Ge, Ruifang and Mooney, Raymond
Conclusion and Future work
Reranking could also potentially improve the results (Ge and Mooney, 2006; Lu et al., 2008).
Experimental Evaluation
available): SCISSOR (Ge and Mooney, 2005), an integrated syntactic-semantic parser; KRISP (Kate and Mooney, 2006), an SVM-based parser using string kernels; WASP (Wong and Mooney, 2006; Wong and Mooney, 2007), a system based on synchronous grammars; Z&C (Zettlemoyer and Collins, 2007), a probabilistic parser based on relaxed CCG grammars; and LU (Lu et al., 2008), a generative model with discriminative reranking.
Experimental Evaluation
Note that some of these approaches require additional human supervision, knowledge, or engineered features that are unavailable to the other systems; namely, SCISSOR requires gold-standard SAPTs, Z&C requires hand-built template grammar rules, LU requires a reranking model using specially designed global features, and our approach requires an existing syntactic parser.
reranking is mentioned in 3 sentences in this paper.