Index of papers in Proc. ACL 2014 that mention
  • sentence compression
Thadani, Kapil
Abstract
Sentence compression has been shown to benefit from joint inference involving both n-gram and dependency-factored objectives, but this typically requires expensive integer programming.
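For illustration only, the sketch below casts deletion-based compression as a far simpler integer program than the joint n-gram and dependency-factored objective described in the paper: binary keep/drop variables, an assumed per-token relevance score, and a hard length budget, solved with the PuLP library. It is a minimal sketch under those assumptions, not the paper's formulation.

```python
# Minimal sketch (assumed, using PuLP; NOT the paper's joint ILP):
# binary keep/drop variables, unigram relevance objective, length budget.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

def ilp_compress(tokens, token_score, max_tokens):
    """Select at most `max_tokens` tokens maximizing the sum of their scores.
    `token_score` is an assumed per-token relevance function."""
    prob = LpProblem("sentence_compression", LpMaximize)
    x = [LpVariable(f"keep_{i}", cat=LpBinary) for i in range(len(tokens))]
    # Objective: total relevance of the kept tokens.
    prob += lpSum(token_score(t) * x[i] for i, t in enumerate(tokens))
    # Constraint: the compression may keep at most `max_tokens` tokens.
    prob += lpSum(x) <= max_tokens
    prob.solve()
    return [t for i, t in enumerate(tokens) if x[i].value() > 0.5]
```

A real joint system would add variables for kept bigrams and dependency arcs plus consistency constraints tying them to the token variables, which is what makes exact inference expensive.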
Abstract
While dynamic programming is viable for bigram-based sentence compression, finding optimal compressed trees within graphs is NP-hard.
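To show why the bigram case is tractable, the sketch below gives a standard dynamic program (not the paper's code) that picks a fixed-length token subsequence maximizing a sum of scores over adjacent kept bigrams; `bigram_score` is an assumed scoring function.

```python
def compress_bigram_dp(tokens, bigram_score, target_len):
    """Return the highest-scoring subsequence of `tokens` with exactly
    `target_len` tokens, under an objective that factorizes over adjacent
    kept-token bigrams. Runs in O(n^2 * target_len) time."""
    n = len(tokens)
    assert 1 <= target_len <= n
    NEG = float("-inf")
    # best[i][k]: best score of a compression keeping k tokens, ending at token i.
    best = [[NEG] * (target_len + 1) for _ in range(n)]
    back = [[None] * (target_len + 1) for _ in range(n)]
    for i in range(n):
        best[i][1] = 0.0  # a single kept token contributes no bigram score
    for k in range(2, target_len + 1):
        for i in range(n):
            for j in range(i):
                if best[j][k - 1] == NEG:
                    continue
                s = best[j][k - 1] + bigram_score(tokens[j], tokens[i])
                if s > best[i][k]:
                    best[i][k] = s
                    back[i][k] = j
    # Recover the best end position and backtrack through the kept tokens.
    end = max(range(n), key=lambda i: best[i][target_len])
    kept, k, i = [], target_len, end
    while i is not None:
        kept.append(i)
        i = back[i][k]
        k -= 1
    return [tokens[i] for i in reversed(kept)]
```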
Experiments
Following evaluations in machine translation as well as previous work in sentence compression (Unno et al., 2006; Clarke and Lapata, 2008; Martins and Smith, 2009; Napoles et al., 2011b; Thadani and McKeown, 2013), we evaluate system performance using F1 metrics over n-grams and dependency edges produced by parsing system output with RASP (Briscoe et al., 2006) and the Stanford parser.
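A minimal sketch of the n-gram F1 metric described here, assuming tokenized system and reference compressions (the cited papers' exact scoring scripts may differ):

```python
from collections import Counter

def ngram_f1(system_tokens, reference_tokens, n=2):
    """F1 over the n-gram multisets of a system output and a reference."""
    sys_ngrams = Counter(zip(*[system_tokens[i:] for i in range(n)]))
    ref_ngrams = Counter(zip(*[reference_tokens[i:] for i in range(n)]))
    overlap = sum((sys_ngrams & ref_ngrams).values())  # clipped matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sys_ngrams.values())
    recall = overlap / sum(ref_ngrams.values())
    return 2 * precision * recall / (precision + recall)
```

Dependency-edge F1 is computed analogously, with parser-produced head-modifier edges in place of n-grams.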
Introduction
Sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed.
Introduction
Following an assumption often used in compression systems, the compressed output in this corpus is constructed by dropping tokens from the input sentence without any paraphrasing or reordering. A number of diverse approaches have been proposed for deletion-based sentence compression, including techniques that assemble the output text under an n-gram factorization over the input text (McDonald, 2006; Clarke and Lapata, 2008) or an arc factorization over input dependency parses (Filippova and Strube, 2008; Galanis and Androutsopoulos, 2010; Filippova and Altun, 2013).
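Under this deletion-based assumption, a compression is simply a binary keep/drop decision per input token; the toy helper below (hypothetical, not from the paper) makes that explicit.

```python
def apply_deletion_mask(tokens, keep):
    """Apply a keep/drop mask to the input tokens, preserving their order."""
    return [tok for tok, k in zip(tokens, keep) if k]

# apply_deletion_mask(["The", "very", "old", "dog", "barked"],
#                     [1, 0, 0, 1, 1])  ->  ["The", "dog", "barked"]
```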
Introduction
Our proposed approximation strategies are evaluated using automated metrics in order to address the question: under what conditions should a real-world sentence compression system choose exact inference with an ILP rather than approximate inference?
Related Work
Sentence compression is one of the better-studied text-to-text generation problems and has been observed to play a significant role in human summarization (Jing, 2000; Jing and McKeown, 2000).
Related Work
Most approaches to sentence compression are supervised (Knight and Marcu, 2002; Riezler et al., 2003; Turner and Charniak, 2005; McDonald, 2006; Unno et al., 2006; Galley and McKeown, 2007; Nomoto, 2007; Cohn and Lapata, 2009; Galanis and Androutsopoulos, 2010; Ganitkevitch et al., 2011; Napoles et al., 2011a; Filippova and Altun, 2013) following the release of datasets such as the Ziff-Davis corpus (Knight and Marcu, 2000) and the Edinburgh compression corpora (Clarke and Lapata, 2006; Clarke and Lapata, 2008), although unsupervised approaches—largely based on ILPs—have also received consideration (Clarke and Lapata, 2007; Clarke and Lapata, 2008; Filippova and Strube, 2008).
sentence compression is mentioned in 8 sentences in this paper.
Kikuchi, Yuta and Hirao, Tsutomu and Takamura, Hiroya and Okumura, Manabu and Nagata, Masaaki
Abstract
Many methods of text summarization combining sentence selection and sentence compression have recently been proposed.
Conclusion
Hence, utilizing these for sentence compression has been left for future work.
Experiment
However, introducing sentence compression to the system greatly improved the ROUGE score (0.354).
Introduction
There has recently been increasing attention focused on approaches that jointly optimize sentence extraction and sentence compression (Tomita et al., 2009; …).
Related work
Extracting a subtree from the dependency tree of words is one approach to sentence compression (Tomita et al., 2009; Qian and Liu, 2013; Morita et al., 2013; Gillick and Favre, 2009).
Related work
The method of Filippova and Strube (2008) allows the model to extract non-rooted subtrees in sentence compression tasks that compress a single sentence with a given compression ratio.
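A minimal sketch of this tree-based view (an assumed formulation, not any cited system's code): a candidate compression is a set of kept tokens that must stay connected in the dependency tree, i.e. every kept token's head is also kept unless the token is the chosen root. Allowing the chosen root to be any token, rather than only the sentence root, corresponds to the non-rooted subtrees mentioned above.

```python
def is_valid_subtree(kept, heads, root):
    """`heads[i]` is the index of token i's head (-1 for the sentence root);
    `kept` is the set of token indices retained in the compression."""
    return all(i == root or heads[i] in kept for i in kept)

def subtree_compression(tokens, heads, kept, root):
    """Return the compressed sentence if `kept` forms a subtree rooted at `root`."""
    if not is_valid_subtree(kept, heads, root):
        raise ValueError("kept tokens do not form a subtree rooted at `root`")
    return [tokens[i] for i in sorted(kept)]
```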
sentence compression is mentioned in 6 sentences in this paper.
Pighin, Daniele and Cornolti, Marco and Alfonseca, Enrique and Filippova, Katja
Introduction
The first, compression-based method uses a robust sentence compressor with an aggressive compression rate to get to the core of the sentence (Sec. …).
Introduction
To the best of our knowledge, this is the first time that this task has been proposed; it can be considered abstractive sentence compression, in contrast to most existing sentence compression systems, which are based on selecting words from the original sentence or rewriting with simpler paraphrase tables.
Pattern extraction by sentence compression
Sentence compression is a summarization technique that shortens input sentences while preserving the most important content (Grefenstette, 1998; McDonald, 2006; Clarke and Lapata, 2008, inter alia).
Pattern extraction by sentence compression
To our knowledge, this application of sentence compressors is novel.
Pattern extraction by sentence compression
Sentence compression methods are abundant, but very few can be configured to produce output satisfying certain constraints.
sentence compression is mentioned in 6 sentences in this paper.