Index of papers in Proc. ACL 2009 that mention
  • sentence compression
Hirao, Tsutomu and Suzuki, Jun and Isozaki, Hideki
A Syntax Free Sequence-oriented Sentence Compression Method
As an alternative to syntactic parsing, we propose two novel features, intra-sentence positional term weighting (IPTW) and the patched language model (PLM), for our syntax-free sentence compressor.
3.1 Sentence Compression as a Combinatorial Optimization Problem
Abstract
Conventional sentence compression methods employ a syntactic parser to compress a sentence without changing its meaning.
Moreover, for the goal of on-demand sentence compression, the time spent in the parsing stage is not negligible.
Analysis of reference compressions
This statistic supports the view that sentence compression that strongly depends on syntax is not useful in reproducing reference compressions.
We need a sentence compression method that can drop intermediate nodes in the syntactic tree aggressively beyond the tree-scoped boundary.
In addition, sentence compression methods that strongly depend on syntactic parsers have two problems: ‘parse error’ and ‘decoding speed.’ 44% of sentences output by a state-of-the-art Japanese dependency parser contain at least one error (Kudo and Matsumoto, 2005).
Introduction
In accordance with this idea, conventional sentence compression methods employ syntactic parsers.
Moreover, on-demand sentence compression is made problematic by the time spent in the parsing stage.
This paper proposes a syntax-free sequence-oriented sentence compression method.
sentence compression is mentioned in 25 sentences in this paper.
Topics mentioned in this paper:
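The section heading excerpted above, "Sentence Compression as a Combinatorial Optimization Problem," frames compression as selecting a subset of words that maximizes a relevance score under a length budget while preserving word order. The following is a minimal illustrative sketch of that framing only — it is not the paper's actual method (which relies on IPTW and PLM features), and the word weights are made up:

```python
def compress(words, weights, budget):
    # Word-deletion compression as a 0/1 knapsack: each word costs one
    # unit of the length budget; maximize the total weight of kept words.
    # best[k] = (best score, chosen word indices) using at most k words.
    best = [(0.0, [])] * (budget + 1)
    for i, w in enumerate(weights):
        # Iterate budgets in descending order so each word is used at most once.
        for k in range(budget, 0, -1):
            score, idx = best[k - 1]
            if score + w > best[k][0]:
                best[k] = (score + w, idx + [i])
    score, idx = max(best, key=lambda t: t[0])
    # Indices are accumulated in increasing order, so word order is preserved.
    return [words[i] for i in idx]

words = "the quick brown fox jumps over the lazy dog".split()
weights = [0.1, 0.6, 0.5, 0.9, 0.8, 0.2, 0.1, 0.4, 0.9]
print(compress(words, weights, 4))
```

With unit word costs this dynamic program simply keeps the highest-weighted words in their original order; a real compressor would score candidates with learned features rather than fixed weights.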
Zhao, Shiqi and Lan, Xiang and Liu, Ting and Li, Sheng
Application-driven Statistical Paraphrase Generation
Introduction
We consider three paraphrase applications in our experiments, including sentence compression, sentence simplification, and sentence similarity computation.
Results and Analysis
Results show that the percentages of test sentences that can be paraphrased are 97.2%, 95.4%, and 56.8% for the applications of sentence compression, simplification, and similarity computation, respectively.
Further results show that the average number of unit replacements in each sentence is 5.36, 4.47, and 1.87 for sentence compression, simplification, and similarity computation.
A source sentence s is paraphrased in each application and we can see that: (1) for sentence compression, the paraphrase t is 8 bytes shorter than s; (2) for sentence simplification, the words wealth and part in t are easier than their sources asset and proportion, especially for nonnative speakers; (3) for sentence similarity computation, the reference sentence s’ is listed below t, in which the words appearing in t but not in s are highlighted in blue.
Statistical Paraphrase Generation
On the contrary, SPG has distinct purposes in different applications, such as sentence compression, sentence simplification, etc.
The application in this example is sentence compression.
Paraphrase application: sentence compression
sentence compression is mentioned in 14 sentences in this paper.
Topics mentioned in this paper: