Index of papers in Proc. ACL 2010 that mention
  • sentence compression
Yamangil, Elif and Shieber, Stuart M.
Abstract
We describe our experiments with training algorithms for tree-to-tree synchronous tree-substitution grammar (STSG) for monolingual translation tasks such as sentence compression and paraphrasing.
Abstract
We formalize nonparametric Bayesian STSG with epsilon alignment in full generality, and provide a Gibbs sampling algorithm for posterior inference tailored to the task of extractive sentence compression.
Introduction
Such induction of tree mappings has application in a variety of natural-language-processing tasks including machine translation, paraphrase, and sentence compression.
Introduction
In this work, we explore techniques for inducing synchronous tree-substitution grammars (STSG) using as a testbed application extractive sentence compression.
Introduction
In this work, we use an extension of the aforementioned models of generative segmentation for STSG induction, and describe an algorithm for posterior inference under this model that is tailored to the task of extractive sentence compression.
Sentence compression
Sentence compression is the task of summarizing a sentence while retaining most of the informational content and remaining grammatical (Jing, 2000).
Sentence compression
In extractive sentence compression, which we focus on in this paper, an order-preserving subset of the words in the sentence is selected to form the summary; that is, we summarize by deleting words (Knight and Marcu, 2002).
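The deletion-based formulation above can be illustrated with a minimal sketch: a compression is just the source words at some chosen indices, in source order. The example sentence and kept indices here are invented for illustration.

```python
# A minimal sketch of extractive compression: summarize a sentence by
# selecting an order-preserving subset of its words (i.e. by deletion).
# The sentence and the kept indices below are invented for illustration.

def extract_compression(words, keep_indices):
    """Return the words at keep_indices, preserving source order."""
    keep = set(keep_indices)
    return [w for i, w in enumerate(words) if i in keep]

sentence = "the prime minister said on Tuesday that talks had resumed".split()
compressed = extract_compression(sentence, [0, 1, 2, 3, 7, 8, 9])
print(" ".join(compressed))  # the prime minister said talks had resumed
```

Any compression produced this way is by construction a subsequence of the source, which is exactly the search space the extractive setting restricts models to.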
Sentence compression
In supervised sentence compression , the goal is to generalize from a parallel training corpus of sentences (source) and their compressions (target) to unseen sentences in a test set to predict their compressions.
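Because the target in this setting is an order-preserving subset of the source, each parallel (sentence, compression) pair can be reduced to per-word keep/delete labels for supervised training. A hedged sketch using greedy leftmost subsequence alignment follows; the example strings are invented, and real systems would align tokens after parsing or normalization.

```python
# Sketch: derive per-word keep (1) / delete (0) labels from a parallel
# (source, compression) pair, assuming the compression is an
# order-preserving subset of the source. Greedy leftmost matching
# succeeds whenever such a subsequence alignment exists.

def keep_labels(source, compression):
    labels, j = [], 0
    for w in source:
        if j < len(compression) and w == compression[j]:
            labels.append(1)  # word survives in the compression
            j += 1
        else:
            labels.append(0)  # word was deleted
    if j != len(compression):
        raise ValueError("compression is not a subsequence of source")
    return labels

src = "the cat sat on the mat".split()
tgt = "the cat sat".split()
print(keep_labels(src, tgt))  # [1, 1, 1, 0, 0, 0]
```

These binary labels are the usual supervision signal for word-deletion models: the system is trained on labeled source words and, at test time, predicts a keep/delete decision for each word of an unseen sentence.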
The STSG Model
Our sampling updates are extensions of those used by Cohn and Blunsom (2009) in MT, but are tailored to our task of extractive sentence compression.
sentence compression is mentioned in 15 sentences in this paper.
Woodsend, Kristian and Lapata, Mirella
Experimental Setup
There are no sentence length or grammaticality constraints, as there is no sentence compression.
Introduction
Sentence compression is often regarded as a promising first step towards ameliorating some of the problems associated with extractive summarization.
Introduction
Interfacing extractive summarization with a sentence compression module could improve the conciseness of the generated summaries and render them more informative (Jing, 2000; Lin, 2003; Zajic et al., 2007).
Introduction
Despite the bulk of work on sentence compression and summarization (see Clarke and Lapata 2008 and Mani 2001 for overviews) only a handful of approaches attempt to do both in a joint model (Daume III and Marcu, 2002; Daume III, 2006; Lin, 2003; Martins and Smith, 2009).
Related work
A few previous approaches have attempted to interface sentence compression with summarization.
Related work
The latter optimizes an objective function consisting of two parts: an extraction component, essentially a non-greedy variant of maximal marginal relevance (McDonald, 2007), and a sentence compression component, a more compact reformulation of Clarke and Lapata (2008) based on the output of a dependency parser.
Results
Furthermore, as a standalone sentence compression system it yields state-of-the-art performance, comparable to McDonald’s (2006) discriminative model and superior to Hedge Trimmer (Zajic et al., 2007), a less sophisticated deterministic system.
sentence compression is mentioned in 8 sentences in this paper.
Topics mentioned in this paper: