Index of papers in Proc. ACL 2013 that mention
  • reranking
Ferret, Olivier
Experiments and evaluation
Figure 1 gives the results of the reranked thesaurus for these entries in terms of R-precision and MAP against reference W5 for various values of G. Although the level of these measures does not change much for G > 5, the graph of Figure 1 shows that G = 15 appears to be an optimal value.
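For reference, the two measures can be computed as in the following minimal Python sketch (illustrative code, not the authors'; the toy entries dictionary is invented):

    def r_precision(ranked, relevant):
        """Precision over the top R ranked neighbors, R = |relevant set|."""
        r = len(relevant)
        return sum(1 for w in ranked[:r] if w in relevant) / r if r else 0.0

    def average_precision(ranked, relevant):
        """Average of the precision values at the ranks of relevant neighbors."""
        hits, total = 0, 0.0
        for i, w in enumerate(ranked, start=1):
            if w in relevant:
                hits += 1
                total += hits / i
        return total / len(relevant) if relevant else 0.0

    # MAP is the mean of average_precision over all evaluated entries.
    entries = {"car": (["auto", "vehicle", "tree"], {"auto", "vehicle"})}
    map_score = sum(average_precision(r, g) for r, g in entries.values()) / len(entries)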
Experiments and evaluation
4.3 Evaluation of the reranked thesaurus
Experiments and evaluation
Table 4 gives the evaluation of the application of our reranking method to the initial thesaurus according to the same principles as in section 4.1.
Improving a distributional thesaurus
• reranking of an entry’s neighbors according to its bad neighbors.
Improving a distributional thesaurus
As mentioned in section 3.1, the starting point of our reranking process is the definition of a model for determining to what extent a word in a sentence, which is not supposed to be known in the context of this task, corresponds to a reference word E or not. This task can also be viewed as a tagging task in which the occurrences of a target word T are labeled with two tags: E and notE.
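As a concrete, hypothetical illustration of this tagging view, the sketch below trains a binary E/notE classifier on bag-of-context features; the toy contexts, labels, and the scikit-learn pipeline are assumptions for illustration, not the authors' model:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each example is the context of one occurrence of the target word T
    # (the word itself is masked), labeled E when the context is compatible
    # with the reference word E, and notE otherwise.
    contexts = [
        "drove the __ to work", "parked the __ outside",   # E-like contexts
        "climbed the tall __", "leaves fell from the __",  # notE contexts
    ]
    labels = ["E", "E", "notE", "notE"]

    tagger = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    tagger.fit(contexts, labels)
    print(tagger.predict(["washed the __ on sunday"]))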
Improving a distributional thesaurus
3.4 Identification of bad neighbors and thesaurus reranking
reranking is mentioned in 17 sentences in this paper.
Kim, Joohyun and Mooney, Raymond
Abstract
We adapt discriminative reranking to improve the performance of grounded language acquisition, specifically the task of learning to follow navigation instructions from observation.
Abstract
Unlike conventional reranking used in syntactic and semantic parsing, gold-standard reference trees are not naturally available in a grounded setting.
Background
The baseline generative model we use for reranking employs the unsupervised PCFG induction approach introduced by Kim and Mooney (2012).
Introduction
Since their system employs a generative model, discriminative reranking (Collins, 2000) could potentially improve its performance.
Introduction
By training a discriminative classifier that uses global features of complete parses to identify correct interpretations, a reranker can significantly improve the accuracy of a generative model.
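A minimal sketch of this idea, in the spirit of Collins-style perceptron reranking (simplified here, without weight averaging; the toy feature names and data are invented). Note it assumes gold candidates, which, as the authors point out, are unavailable in the grounded setting:

    from collections import defaultdict

    def score(weights, feats):
        return sum(weights[f] * v for f, v in feats.items())

    def train_reranker(nbest_lists, epochs=5):
        """nbest_lists: (candidates, gold_index) pairs; each candidate is a
        dict of global features extracted from a complete parse."""
        weights = defaultdict(float)
        for _ in range(epochs):
            for candidates, gold in nbest_lists:
                pred = max(range(len(candidates)),
                           key=lambda i: score(weights, candidates[i]))
                if pred != gold:  # promote gold features, demote the error's
                    for f, v in candidates[gold].items():
                        weights[f] += v
                    for f, v in candidates[pred].items():
                        weights[f] -= v
        return weights

    # Toy usage: two candidate parses, the second is correct.
    data = [([{"rule=S->NP VP": 1.0},
              {"rule=S->NP VP": 1.0, "depth<=5": 1.0}], 1)]
    w = train_reranker(data)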
Introduction
Reranking has been successfully employed to improve syntactic parsing (Collins, 2002b), semantic parsing (Lu et al., 2008; Ge and Mooney, 2006), semantic role labeling (Toutanova et al., 2005), and named entity recognition (Collins, 2002c).
reranking is mentioned in 46 sentences in this paper.
Tomeh, Nadi and Habash, Nizar and Roth, Ryan and Farra, Noura and Dasigi, Pradeep and Diab, Mona
Abstract
To do so we follow an n-best list reranking approach that exploits recent advances in learning-to-rank techniques.
Discriminative Reranking for OCR
2.2 Ensemble reranking
Discriminative Reranking for OCR
In addition to the above-mentioned approaches, we couple simple feature selection with a combination of reranking models via a straightforward ensemble learning method similar to stacked generalization (Wolpert, 1992) and Combiner (Chan and Stolfo, 1993).
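The following is a hedged sketch of such a stacking-style combiner: scores from several base rerankers become features for a meta-classifier that picks the final hypothesis (the data, dimensions, and use of scikit-learn are invented for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows = hypotheses in an n-best list, columns = base rerankers' scores;
    # y marks the correct hypothesis with 1.
    X_train = np.array([[0.9, 0.2], [0.4, 0.8], [0.1, 0.1]])
    y_train = np.array([1, 0, 0])
    meta = LogisticRegression().fit(X_train, y_train)

    def combine(score_matrix):
        """Return the index of the hypothesis the ensemble prefers."""
        return int(np.argmax(meta.predict_proba(score_matrix)[:, 1]))

    print(combine(np.array([[0.3, 0.9], [0.8, 0.4]])))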
Discriminative Reranking for OCR
These features are used by the baseline system5 as well as by the various reranking methods.
Experiments
Table 2 presents the WER for our baseline hypothesis, the best hypothesis in the list (our oracle), and our best reranking results, which we describe in detail in §3.2.
Experiments
… on the reranking performance for one of our best reranking models, namely RankSVM.
Experiments
3.2 Reranking results
Introduction
A straightforward alternative, which we advocate in this paper, is to use the available information to rerank the hypotheses in the n-best lists.
Introduction
Discriminative reranking allows each hypothesis to be represented as an arbitrary set of features without the need to explicitly model their interactions.
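To make the point concrete, here is a small sketch (invented feature names and toy data) in which each OCR hypothesis is reduced to an arbitrary feature dictionary and scored linearly, with no interaction terms:

    def features(hypothesis, confidences):
        """Arbitrary per-hypothesis features; nothing models interactions."""
        feats = {"len": len(hypothesis.split()),
                 "mean_conf": sum(confidences) / len(confidences)}
        for word in hypothesis.split():
            feats["word=" + word] = feats.get("word=" + word, 0) + 1
        return feats

    def rerank(nbest, weights):
        """Return the hypothesis whose feature vector scores highest."""
        def s(item):
            return sum(weights.get(f, 0.0) * v
                       for f, v in features(*item).items())
        return max(nbest, key=s)[0]

    best = rerank([("the cat sat", [0.7, 0.9, 0.8]),
                   ("the cut sat", [0.7, 0.3, 0.8])],
                  {"mean_conf": 1.0})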
Introduction
We describe our features and reranking approach in §2, and we present our experiments and results in §3.
reranking is mentioned in 14 sentences in this paper.
Liu, Yang
Introduction
Therefore, we use hypergraph reranking (Huang and Chiang, 2007; Huang, 2008), which proves to be effective for integrating nonlocal features into dynamic programming, to alleviate this problem.
Introduction
In the second pass, we use the hypergraph reranking algorithm (Huang, 2008) to find promising translations using additional dependency features (i.e., features 8-10 in the list).
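A highly simplified sketch of the idea follows: keep a k-best list at every hypergraph node and apply extra (nonlocal) features as subderivations are combined. The Node representation is invented, and exhaustive combination stands in for Huang's cube pruning:

    import heapq
    from collections import namedtuple

    Node = namedtuple("Node", "edges")  # edge = (local_score, tails, nonlocal_fn)

    def kbest(node, k=5, memo=None):
        """k best (score, derivation) pairs at a node; nonlocal features
        fire when the tail nodes' subderivations are combined."""
        memo = {} if memo is None else memo
        if id(node) not in memo:
            cands = []
            for local, tails, nonlocal_fn in node.edges:
                subs = [kbest(t, k, memo) for t in tails]
                def expand(i, score, deriv):
                    if i == len(subs):
                        cands.append((score + nonlocal_fn(deriv), deriv))
                    else:
                        for s, d in subs[i]:
                            expand(i + 1, score + s, deriv + [d])
                expand(0, local, [])
            memo[id(node)] = heapq.nlargest(k, cands, key=lambda c: c[0])
        return memo[id(node)]

    leaf = Node([(1.0, [], lambda d: 0.0)])
    root = Node([(0.5, [leaf, leaf], lambda d: -0.1 * len(d))])
    print(kbest(root)[0][0])  # 0.5 + 1.0 + 1.0 - 0.2 = 2.3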
Introduction
Table 3 shows the effect of hypergraph reranking.
reranking is mentioned in 4 sentences in this paper.
Feng, Minwei and Peter, Jan-Thorsten and Ney, Hermann
Introduction
In the reranking framework, in principle, all the models in the previous category can be used, because in reranking we have all the information (source and target words/phrases, alignment) about the translation process.
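As one hypothetical example of what this full information enables, a simple distortion feature over the alignment (toy representation here: one source position per target word) can only be computed once the alignment is available, as it is at reranking time:

    def distortion_score(alignment):
        """Penalize jumps between the source positions aligned to
        consecutive target words; monotone translations score 0."""
        return -sum(abs(alignment[i] - alignment[i - 1] - 1)
                    for i in range(1, len(alignment)))

    print(distortion_score([0, 1, 2, 3]))  # monotone: 0
    print(distortion_score([0, 2, 1, 3]))  # a swap: -4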
Introduction
One disadvantage of carrying out reordering in reranking is that the representativeness of the N-best list is often questionable.
reranking is mentioned in 3 sentences in this paper.