Index of papers in Proc. ACL 2014 that mention
  • reranking
Duan, Manjuan and White, Michael
Abstract
Using parse accuracy in a simple reranking strategy for self-monitoring, we find that with a state-of-the-art averaged perceptron realization ranking model, BLEU scores cannot be improved with any of the well-known Treebank parsers we tested, since these parsers too often make errors that human readers would be unlikely to make.
Abstract
Moreover, via a targeted manual analysis, we demonstrate that the SVM reranker frequently manages to avoid vicious ambiguities, while its ranking errors tend to affect fluency much more often than adequacy.
Introduction
To do so—in a nutshell—we enumerate an n-best list of realizations and rerank them if necessary to avoid vicious ambiguities, as determined by one or more automatic parsers.
Introduction
Consequently, we examine two reranking strategies, one a simple baseline approach and the other using an SVM reranker (J oachims, 2002).
Introduction
Our simple reranking strategy for self-monitoring is to rerank the realizer’s n-best list by parse accuracy, preserving the original order in case of ties.
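A minimal sketch of this tie-preserving strategy, in Python; the parse_accuracy scorer is an illustrative stand-in for an external parser-based metric, not the paper's implementation:

    # Rerank the realizer's n-best list by parse accuracy, keeping the
    # realizer's original order for ties. `parse_accuracy` stands in
    # for an external parser-based scorer.
    def rerank_by_parse_accuracy(nbest, parse_accuracy):
        # sorted() is stable, so candidates with equal parse accuracy
        # retain their original (realizer-assigned) relative order.
        return sorted(nbest, key=parse_accuracy, reverse=True)

    # Toy usage: A and B tie, so their realizer order is preserved.
    scores = {"cand A": 0.9, "cand B": 0.9, "cand C": 0.95}
    print(rerank_by_parse_accuracy(["cand A", "cand B", "cand C"], scores.get))
    # -> ['cand C', 'cand A', 'cand B']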
reranking is mentioned in 16 sentences in this paper.
Jansen, Peter and Surdeanu, Mihai and Clark, Peter
Abstract
We propose a robust answer reranking model for non-factoid questions that integrates lexical semantics with discourse information, driven by two representations of discourse: a shallow representation centered around discourse markers, and a deep one based on Rhetorical Structure Theory.
Approach
The proposed answer reranking component is embedded in the QA framework illustrated in Figure 1.
Approach
CQA: In this scenario, the task is defined as reranking all the user-posted answers for a particular question to boost the community-selected best answer to the top position.
Approach
These answer candidates are then passed to the answer reranking component, the focus of this work.
Introduction
We propose a novel answer reranking (AR) model that combines lexical semantics (LS) with discourse information, driven by two representations of discourse: a shallow representation centered around discourse markers and surface text information, and a deep one based on the Rhetorical Structure Theory (RST) discourse framework (Mann and Thompson, 1988).
Related Work
First, most NF QA approaches tend to use multiple similarity models (information retrieval or alignment) as features in discriminative rerankers (Riezler et al., 2007; Higashinaka and Isozaki, 2008; Verberne et al., 2010; Surdeanu et al., 2011).
Related Work
(2011) extracted 47 cue phrases such as "because" from a small collection of web documents, and used the cosine similarity between an answer candidate and a bag of words containing these cue phrases as a single feature in their reranking model for non-factoid why QA.
Related Work
This classifier was then used to extract instances of causal relations in answer candidates, which were turned into features in a reranking model for Japanese why QA.
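A rough sketch of such a single cue-phrase feature, assuming a toy cue list and whitespace tokenization (neither is from the cited work):

    import math
    from collections import Counter

    # Illustrative cue words; the cited work extracted 47 such phrases.
    CUE_WORDS = Counter(["because", "since", "therefore", "consequently"])

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def cue_phrase_feature(answer):
        # Cosine similarity between the answer candidate's bag of words
        # and the bag of cue words, used as one reranking feature.
        return cosine(Counter(answer.lower().split()), CUE_WORDS)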
reranking is mentioned in 24 sentences in this paper.
Li, Hao and Liu, Wei and Ji, Heng
Abstract
LSH accounts for neighbor candidate pruning, while ITQ provides an efficient and effective reranking over the neighbor pool captured by LSH.
Document Retrieval with Hashing
In this section, we first provide an overview of applying hashing techniques to a document retrieval task, and then introduce two unsupervised hashing algorithms: LSH acts as a neighbor-candidate filter, while ITQ works towards precise reranking over the candidate pool returned by LSH.
Document Retrieval with Hashing
Hamming Reranking
Document Retrieval with Hashing
In this framework, LSH accounts for neighbor candidate pruning, while ITQ provides an efficient and effective reranking over the neighbor pool captured by LSH.
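The two-stage scheme can be sketched roughly as follows: random-hyperplane LSH tables prune to a candidate pool, and the pool is then reranked by Hamming distance in binary codes. ITQ training itself (PCA plus a learned rotation) is omitted, so itq_codes stands in for codes it would produce; all names and parameters here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def lsh_keys(X, planes):
        # Sign-based hashing: one bit per random hyperplane.
        return [tuple(row) for row in (X @ planes.T > 0)]

    def build_tables(X, n_tables=4, n_bits=48):
        tables = []
        for _ in range(n_tables):
            planes = rng.standard_normal((n_bits, X.shape[1]))
            buckets = {}
            for i, key in enumerate(lsh_keys(X, planes)):
                buckets.setdefault(key, []).append(i)
            tables.append((planes, buckets))
        return tables

    def search(q, tables, itq_codes, q_code, topk=10):
        # Stage 1: LSH pruning -- union of colliding buckets.
        pool = set()
        for planes, buckets in tables:
            pool.update(buckets.get(lsh_keys(q[None, :], planes)[0], []))
        # Stage 2: Hamming reranking over the pool in ITQ code space.
        return sorted(pool,
                      key=lambda i: np.count_nonzero(itq_codes[i] != q_code))[:topk]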
Experiments
Another crucial observation is that with ITQ reranking, a small number of LSH hash tables is needed in the pruning step.
Experiments
Since the LSH pruning time can be ignored, the search time of the two-stage hashing scheme equals the time of Hamming-distance reranking in ITQ codes over all candidates produced by the LSH pruning step, e.g., LSH(48 bits, 4 tables) +
Experiments
Figure 2(f) shows the ITQ data reranking percentage for different LSH bit lengths and table numbers.
reranking is mentioned in 11 sentences in this paper.
Rasooli, Mohammad Sadegh and Lippincott, Thomas and Habash, Nizar and Rambow, Owen
Morphology-based Vocabulary Expansion
Reranking Models Given that the size of the expanded vocabulary can be quite large and it may include many over-generated forms, we rerank the expanded set of words before taking the top n words to use in downstream processes.
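A toy sketch of this step, using corpus unigram frequency as a placeholder scorer; the paper's four actual reranking conditions are not reproduced here:

    from collections import Counter

    def rerank_expansion(expanded_words, corpus_tokens, n=1000):
        # Score each over-generated word form by its corpus frequency,
        # then keep only the top n for downstream processing.
        freq = Counter(corpus_tokens)
        return sorted(expanded_words, key=lambda w: freq[w], reverse=True)[:n]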
Morphology-based Vocabulary Expansion
We consider four reranking conditions which we describe below.
Morphology-based Vocabulary Expansion
Reranked Expansion
reranking is mentioned in 18 sentences in this paper.
Zhang, Yuan and Lei, Tao and Barzilay, Regina and Jaakkola, Tommi and Globerson, Amir
Experimental Setup
We also compare our model against a discriminative reranker.
Experimental Setup
The reranker operates over the
Experimental Setup
We then train the reranker by running 10 epochs of cost-augmented MIRA.
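One epoch of such training might look roughly like the sketch below, assuming dense feature vectors and a hypothetical aggressiveness constant C; looping it ten times gives the 10-epoch recipe in the excerpt.

    import numpy as np

    def mira_epoch(w, data, C=0.01):
        # data: iterable of (gold_feats, [(cand_feats, cost), ...]),
        # where cost measures how bad a candidate is (e.g., 1 - F1
        # against the gold tree).
        for gold, candidates in data:
            # Cost-augmented inference: the candidate with the highest
            # model score plus cost.
            feats, cost = max(candidates, key=lambda c: w @ c[0] + c[1])
            margin = (w @ feats + cost) - (w @ gold)
            if margin > 0:
                delta = gold - feats
                norm2 = float(delta @ delta)
                if norm2 > 0:
                    w = w + min(C, margin / norm2) * delta
        return w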
Features
Global Features We used features shown to be promising in prior reranking work: Charniak and Johnson (2005), Collins (2000), and Huang (2008).
Introduction
They first appeared in the context of reranking (Collins, 2000), where a simple parser is used to generate a candidate list which is then reranked according to the scoring function.
Introduction
Our method provides a more effective mechanism for handling global features than reranking, outperforming it by 1.3%.
Related Work
The first successful approach in this arena was reranking (Collins, 2000; Charniak and Johnson, 2005) on constituency parsing.
Related Work
Reranking can be combined with an arbitrary scoring function, and thus can easily incorporate global features over the entire parse tree.
Related Work
Its main disadvantage is that the output parse can only be one of the few parses passed to the reranker.
Results
The MST parser is trained in projective mode for reranking because generating a top-k list from a second-order non-projective model is intractable.
reranking is mentioned in 16 sentences in this paper.
Wang, Zhiguo and Xue, Nianwen
Experiment
We group parsing systems into three categories: single systems, reranking systems and semi-supervised systems.
Experiment
Our Nonlocal&Cluster system further improved the parsing F1 to 86.3%, outperforming all reranking systems and semi-supervised systems.
Experiment
Huang (2009) adapted the parse reranker to CTB5.
Joint POS Tagging and Parsing with Nonlocal Features
But almost all previous work considered nonlocal features only in parse reranking frameworks.
Related Work
However, almost all of the previous work uses nonlocal features at the parse reranking stage.
reranking is mentioned in 5 sentences in this paper.
Cao, Yuan and Khudanpur, Sanjeev
Experiments
We used the Charniak parser (Charniak et al., 2005) for our experiment, and we used the proposed algorithm to train the reranking feature weights.
Experiments
For comparison, we also investigated training the reranker with Perceptron and MIRA.
Experiments
In all, around V = 1.33 million features are defined for reranking, and the n-best size for reranking is set to 50.
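For comparison, a single perceptron-style reranking update over sparse feature dictionaries (suited to a feature space at the ~1.33 million scale quoted above) could look like the sketch below; names are illustrative, and the oracle side is taken to be the best candidate in the 50-best list.

    def dot(w, feats):
        return sum(w.get(f, 0.0) * v for f, v in feats.items())

    def perceptron_update(w, oracle_feats, nbest_feats):
        # Pick the model-best candidate from the n-best list (here up
        # to 50) and, if it differs from the oracle, move the weights
        # toward the oracle features and away from the model's choice.
        best = max(nbest_feats, key=lambda f: dot(w, f))
        if best != oracle_feats:
            for f, v in oracle_feats.items():
                w[f] = w.get(f, 0.0) + v
            for f, v in best.items():
                w[f] = w.get(f, 0.0) - v
        return w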
reranking is mentioned in 3 sentences in this paper.
Hall, David and Durrett, Greg and Klein, Dan
Introduction
There have been nonlocal approaches as well, such as tree-substitution parsers (Bod, 1993; Sima’an, 2000), neural net parsers (Henderson, 2003), and rerankers (Collins and Koo, 2005; Charniak and Johnson, 2005; Huang, 2008).
Other Languages
it does not use a reranking step or post-hoc combination of parser results.
Other Languages
Their best parser, and the best overall parser from the shared task, is a reranked product of "Replaced" Berkeley parsers.
reranking is mentioned in 3 sentences in this paper.