Index of papers in Proc. ACL 2012 that mention
  • reranking
Constant, Matthieu and Sigogne, Anthony and Watrin, Patrick
Abstract
Secondly, integrating multiword expressions in the parser grammar followed by a reranker specific to such expressions slightly improves all evaluation metrics.
Introduction
Our proposal is to evaluate two discriminative strategies in a real constituency parsing context: (a) pre-grouping MWE before parsing; this would be done with a state-of-the-art recognizer based on Conditional Random Fields; (b) parsing with a grammar including MWE identification and then reranking the output parses thanks to a Maximum Entropy model integrating MWE-dedicated features.
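As a concrete illustration of strategy (a), here is a minimal Python sketch of the pre-grouping step, assuming the CRF recognizer outputs BIO-style MWE tags; the function name, tag set, and example sentence are illustrative, not taken from the paper.

```python
# Minimal sketch of strategy (a): pre-grouping MWEs before parsing.
# The BIO tags would come from a CRF-based MWE recognizer; here they
# are hard-coded for the example.

def pregroup_mwes(tokens, bio_tags):
    """Merge token spans tagged B-MWE/I-MWE into single pre-grouped units."""
    grouped, current = [], []
    for token, tag in zip(tokens, bio_tags):
        if tag == "B-MWE":                 # start of a new multiword expression
            if current:
                grouped.append("_".join(current))
            current = [token]
        elif tag == "I-MWE" and current:   # continuation of the open MWE
            current.append(token)
        else:                              # ordinary token outside any MWE
            if current:
                grouped.append("_".join(current))
                current = []
            grouped.append(token)
    if current:
        grouped.append("_".join(current))
    return grouped

tokens = ["He", "kicked", "the", "bucket", "yesterday"]
tags   = ["O", "B-MWE", "I-MWE", "I-MWE", "O"]
print(pregroup_mwes(tokens, tags))
# ['He', 'kicked_the_bucket', 'yesterday']
```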
MWE-dedicated Features
In order to make these models comparable, we use two comparable sets of feature templates: one adapted to sequence labelling (CRF-based MWER) and the other one adapted to reranking (MaxEnt-based reranker).
MWE-dedicated Features
The reranker templates are instantiated only for the nodes of the candidate parse tree, which are leaves dominated by a MWE node (i.e.
MWE-dedicated Features
• RERANKER: for each leaf (in position n)
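A hedged sketch of how such a template might be instantiated only at leaves dominated by an MWE node, as the excerpts describe; the tree encoding (nested (label, children) pairs), the MW-prefixed node labels, and the feature strings are all assumptions for illustration.

```python
# Sketch: instantiate reranker features only at leaves whose ancestors
# include an MWE node. Tree encoding and label conventions are assumed.

def mwe_leaf_features(tree, under_mwe=False, pos=0, feats=None):
    """Collect features for leaves dominated by an MWE node."""
    if feats is None:
        feats = []
    label, children = tree
    under_mwe = under_mwe or label.startswith("MW")  # e.g. hypothetical MWADV
    if isinstance(children, str):                    # leaf: children is the word
        if under_mwe:
            feats.append(f"RERANKER:leaf={children}:pos={pos}")
        return feats, pos + 1
    for child in children:
        feats, pos = mwe_leaf_features(child, under_mwe, pos, feats)
    return feats, pos

tree = ("S", [("NP", [("D", "the")]),
              ("MWADV", [("P", "at"), ("N", "once")])])
print(mwe_leaf_features(tree)[0])
# ['RERANKER:leaf=at:pos=1', 'RERANKER:leaf=once:pos=2']
```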
Two strategies, two discriminative models
3.2 Reranking
Two strategies, two discriminative models
Discriminative reranking consists in reranking the n-best parses of a baseline parser with a discriminative model, hence integrating features associated with each node of the candidate parses.
Two strategies, two discriminative models
Formally, given a sentence s, the reranker selects the best candidate parse p among a set of candidates P(s) with respect to a scoring function V_θ:
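In plain notation this is p* = argmax over p in P(s) of V_θ(p); a common choice, consistent with a MaxEnt reranker, makes V_θ linear in the features, V_θ(p) = θ · f(p). A minimal sketch of that selection step, with toy candidates and a toy feature extractor standing in for the paper's model:

```python
# Minimal n-best reranking sketch: pick the candidate parse maximizing
# a linear scoring function V_theta(p) = theta . f(p).

def score(feats, theta):
    """V_theta(p): dot product of feature values with learned weights."""
    return sum(theta.get(f, 0.0) * v for f, v in feats.items())

def rerank(candidates, extract, theta):
    """Return the candidate parse p in P(s) with the highest score."""
    return max(candidates, key=lambda p: score(extract(p), theta))

# Toy usage: candidates are bracketed strings, features are counts.
theta = {"has_MWE": 1.5, "len": -0.1}
extract = lambda p: {"has_MWE": float("MWE" in p),
                     "len": float(len(p.split()))}
nbest = ["(NP (D the) (N bucket))",
         "(MWE (V kicked) (D the) (N bucket))"]
print(rerank(nbest, extract, theta))
# '(MWE (V kicked) (D the) (N bucket))'
```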
reranking is mentioned in 20 sentences in this paper.
Konstas, Ioannis and Lapata, Mirella
Abstract
The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features.
Introduction
via Discriminative Reranking
Introduction
The performance of this baseline system could be potentially further improved using discriminative reranking (Collins, 2000).
Introduction
Typically, this method first creates a list of n-best candidates from a generative model, and then reranks them with arbitrary features (both local and global) that are either not computable or intractable to compute within the generative model.
Problem Formulation
The hypergraph representation allows us to decompose the feature functions and compute them piecemeal at each hyperarc (or sub-derivation), rather than at the root node as in conventional n-best list reranking .
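A hedged sketch of that decomposition for local features only: each hyperedge carries its own feature vector, and the best derivation is found bottom-up by dynamic programming rather than by scoring complete candidates at the root. The hypergraph encoding (a dict from head node to a list of (features, tail-nodes) hyperedges) is an assumption; nonlocal features, which need the extra bookkeeping of Huang (2008), are omitted.

```python
# Score a hypergraph bottom-up: each hyperedge contributes a local
# feature score, and memoized dynamic programming finds the best
# derivation without enumerating complete candidates.

def best_derivation(node, edges, theta, memo=None):
    """Best (score, tree) sub-derivation rooted at `node`."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    if node not in edges:                    # leaf node: nothing to choose
        memo[node] = (0.0, node)
        return memo[node]
    best_score, best_tree = float("-inf"), None
    for feats, tails in edges[node]:
        # Local score of this hyperedge plus best scores of its tails.
        s = sum(theta.get(f, 0.0) * v for f, v in feats.items())
        subs = [best_derivation(t, edges, theta, memo) for t in tails]
        total = s + sum(sc for sc, _ in subs)
        if total > best_score:
            best_score = total
            best_tree = (node, [tr for _, tr in subs])
    memo[node] = (best_score, best_tree)
    return memo[node]

# Toy hypergraph: S derivable via two hyperedges with different features.
edges = {"S": [({"rule=S->NP_VP": 1.0}, ("NP", "VP")),
               ({"rule=S->VP": 1.0}, ("VP",))]}
theta = {"rule=S->NP_VP": 0.4, "rule=S->VP": -0.2}
print(best_derivation("S", edges, theta))
# (0.4, ('S', ['NP', 'VP']))
```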
Related Work
Discriminative reranking has been employed in many NLP tasks such as syntactic parsing (Charniak and Johnson, 2005; Huang, 2008), machine translation (Shen et al., 2004; Li and Khudanpur, 2009) and semantic parsing (Ge and Mooney, 2006).
Related Work
Our model is closest to Huang (2008) who also performs forest reranking on a hypergraph, using both local and nonlocal features, whose weights are tuned with the averaged perceptron algorithm (Collins, 2002).
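The averaged perceptron update mentioned in the excerpt is simple to sketch: when the model's current best candidate differs from the oracle (gold-closest) candidate, weights move toward the oracle's features and away from the prediction. Weight averaging is omitted for brevity, and the feature dictionaries are toy stand-ins.

```python
# Perceptron training step for reranking, in the spirit of Collins (2002):
# reward the oracle candidate's features, penalize the predicted one's.

def perceptron_update(theta, oracle_feats, predicted_feats, lr=1.0):
    for f, v in oracle_feats.items():
        theta[f] = theta.get(f, 0.0) + lr * v
    for f, v in predicted_feats.items():
        theta[f] = theta.get(f, 0.0) - lr * v
    return theta

theta = {}
oracle    = {"rule=S->NP_VP": 1.0, "len": 5.0}
predicted = {"rule=S->VP": 1.0, "len": 4.0}
print(perceptron_update(theta, oracle, predicted))
# {'rule=S->NP_VP': 1.0, 'len': 1.0, 'rule=S->VP': -1.0}
```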
Related Work
We adapt forest reranking to generation and introduce several task-specific features that boost performance.
reranking is mentioned in 16 sentences in this paper.
Shindo, Hiroyuki and Miyao, Yusuke and Fujino, Akinori and Nagata, Masaaki
Abstract
Our SR-TSG parser achieves an F1 score of 92.4% in the Wall Street Journal (WSJ) English Penn Treebank parsing task, which is a 7.7 point improvement over a conventional Bayesian TSG parser, and better than state-of-the-art discriminative reranking parsers.
Experiment
It should be noted that discriminative reranking parsers such as (Charniak and Johnson, 2005) and (Huang, 2008) are constructed on a generative parser.
Experiment
The reranking parser takes the k-best lists of candidate trees or a packed forest produced by a baseline parser (usually a generative model), and then reranks the candidates using arbitrary features.
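One common way to set this up, roughly in the spirit of Charniak and Johnson (2005) though not necessarily the exact formulation of any paper indexed here, is to treat the baseline parser's log-probability as one weighted feature alongside the arbitrary discriminative features:

```python
# Rerank a k-best list on top of a generative baseline: the base
# parser's log-probability enters the score as just another feature.
# Candidate encoding and feature names are illustrative.

def rerank_kbest(kbest, theta):
    """kbest: list of (log_prob, feature_dict) from the baseline parser."""
    def score(cand):
        log_prob, feats = cand
        s = theta.get("base_log_prob", 1.0) * log_prob
        return s + sum(theta.get(f, 0.0) * v for f, v in feats.items())
    return max(kbest, key=score)

kbest = [(-10.2, {"right_branching": 3.0}),
         (-10.5, {"right_branching": 5.0})]
theta = {"base_log_prob": 1.0, "right_branching": 0.2}
print(rerank_kbest(kbest, theta))
# (-10.5, {'right_branching': 5.0})
```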
Experiment
Hence, we can expect that combining our SR-TSG model with a discriminative reranking parser would provide better performance than SR-TSG alone.
Introduction
Our SR-TSG parser achieves an F1 score of 92.4% in the WSJ English Penn Treebank parsing task, which is a 7.7 point improvement over a conventional Bayesian TSG parser, and superior to state-of-the-art discriminative reranking parsers.
reranking is mentioned in 6 sentences in this paper.
Pauls, Adam and Klein, Dan
Experiments
We report machine translation reranking results in Section 5.4.
Experiments
The latter report results for two binary classifiers: RERANK uses the reranking features of Charniak and Johnson (2005), and TSG uses
Experiments
All generative models improve, but TREELET-RULE remains the best, now outperforming the RERANK system, though of course it is likely that RERANK would improve if it could be scaled up to more training data.
Introduction
We also show fluency improvements in a preliminary machine translation reranking experiment.
reranking is mentioned in 5 sentences in this paper.