Budgeted Submodular Maximization with Cost Function | These requirements enable us to represent sentence compression as the extraction of subtrees from a sentence.
Experimental Settings | Since KNP internally has a flag that indicates either an “obligatory case” or an “adjacent case”, we regarded dependency relations flagged by KNP as obligatory in sentence compression.
Introduction | Text summarization is often addressed as a task of simultaneously performing sentence extraction and sentence compression (Berg-Kirkpatrick et al., 2011; Martins and Smith, 2009). |
Joint Model of Extraction and Compression | We will formalize the unified task of sentence compression and extraction as a budgeted monotone nondecreasing submodular function maximization with a cost function. |
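As an illustrative sketch of this formulation (our notation, not necessarily the paper's): let V be the set of candidate textual units, c_u >= 0 the cost of unit u, and B the length budget. The problem is

    \max_{S \subseteq V} f(S) \quad \text{subject to} \quad \sum_{u \in S} c_u \le B,

where f is monotone nondecreasing (f(A) \le f(B) whenever A \subseteq B) and submodular.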
Joint Model of Extraction and Compression | In this paper, we address the task of summarizing Japanese text by means of sentence compression and extraction.
Joint Model of Extraction and Compression | Therefore, sentence compression can be represented as edge pruning. |
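A minimal Python sketch of compression as edge pruning: cutting a dependency edge removes the dependent's whole subtree, so the surviving words always form a subtree of the original parse. The data structures and the hand-picked pruned edge below are illustrative assumptions, not the papers' implementations.

    # Sketch: sentence compression as edge pruning on a dependency tree.

    def collect_subtree(children, node):
        """Return the set of token indices in the subtree rooted at `node`."""
        nodes = {node}
        for child in children.get(node, []):
            nodes |= collect_subtree(children, child)
        return nodes

    def compress(tokens, heads, pruned_edges):
        """tokens: list of words; heads[i]: head index of token i (-1 = root);
        pruned_edges: set of (head, dependent) edges to cut. Cutting an edge
        drops the dependent's entire subtree, keeping the result connected."""
        children = {}
        for dep, head in enumerate(heads):
            children.setdefault(head, []).append(dep)
        dropped = set()
        for _head, dep in pruned_edges:
            dropped |= collect_subtree(children, dep)
        return " ".join(t for i, t in enumerate(tokens) if i not in dropped)

    # Toy example: prune the relative clause modifying "wolves".
    tokens = ["wolves", "that", "howled", "loudly", "chased", "the", "deer"]
    heads  = [4, 2, 0, 2, -1, 6, 4]
    print(compress(tokens, heads, {(0, 2)}))  # -> "wolves chased the deer"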
Related Work | Berg-Kirkpatrick et al. (2011) formulated a unified task of sentence extraction and sentence compression as an ILP.
Abstract | We consider the problem of using sentence compression techniques to facilitate query-focused multi-document summarization. |
Introduction | Sentence compression techniques (Knight and Marcu, 2000; Clarke and Lapata, 2008) are the standard for producing a compact and grammatical version of a sentence while preserving relevance, and prior research (e.g. |
Introduction | Similarly, strides have been made to incorporate sentence compression into query-focused MDS systems (Zajic et al., 2006). |
Introduction | Most attempts, however, fail to produce better results than those of the best systems built on pure extraction-based approaches that use no sentence compression.
Related Work | Our work is more closely related to the less-studied area of sentence compression as applied to (single-)document summarization.
The Framework | We now present our query-focused MDS framework consisting of three steps: Sentence Ranking, Sentence Compression and Postprocessing. |
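A toy Python sketch of the three-step pipeline; all three components below are deliberately simplistic stand-ins (query-term overlap, word truncation, greedy budget filling), not the framework's actual models.

    # Hypothetical stand-ins for the three steps of the framework.

    def rank_sentences(sentences, query):
        """Step 1 (Sentence Ranking): rank by query-term overlap."""
        q = set(query.lower().split())
        return sorted(sentences, key=lambda s: -len(q & set(s.lower().split())))

    def compress_sentence(sentence, max_words=12):
        """Step 2 (Sentence Compression): placeholder truncation;
        real compressors prune parse subtrees instead."""
        return " ".join(sentence.split()[:max_words])

    def postprocess(candidates, budget):
        """Step 3 (Postprocessing): greedily fill a word budget."""
        summary, used = [], 0
        for s in candidates:
            n = len(s.split())
            if used + n <= budget:
                summary.append(s)
                used += n
        return " ".join(summary)

    def summarize(sentences, query, budget):
        ranked = rank_sentences(sentences, query)
        return postprocess([compress_sentence(s) for s in ranked], budget)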
Abstract | In addition, we propose a multitask learning framework to take advantage of existing data for extractive summarization and sentence compression.
Compressive Summarization | The compressive summarizer is built in a manner similar to that described in §2, but with an additional component for the sentence compressor and slight modifications to the other components.
Compressive Summarization | In addition, we included hard constraints to prevent the deletion of certain arcs, following previous work in sentence compression (Clarke and Lapata, 2008). |
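As an illustrative sketch of such a hard constraint (our notation): with a binary variable z_w indicating that word w is kept, protecting the arc from head h to modifier m can be encoded as

    z_m \ge z_h,

so that m may not be deleted while h survives.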
Experiments | (2011), but we augmented the training data with extractive summarization and sentence compression datasets, to help train the |
Experiments | For sentence compression , we adapted the Simple English Wikipedia dataset of Woodsend and Lapata (2011), containing aligned sentences for 15,000 articles from the English and Simple English Wikipedias. |
Extractive Summarization | However, extending these models to allow for sentence compression (as will be detailed in §3) breaks the diminishing returns property, making submodular optimization no longer applicable. |
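The diminishing returns property in question is the standard one: for a set function f over ground set V,

    f(A \cup \{v\}) - f(A) \;\ge\; f(B \cup \{v\}) - f(B) \quad \text{for all } A \subseteq B \subseteq V,\; v \notin B.

Once a sentence's contribution depends on which of its words a compressor later deletes, the marginal gain of adding it need not shrink as the summary grows, so this inequality can fail.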
Introduction | For example, such solvers are unable to take advantage of efficient dynamic programming routines for sentence compression (McDonald, 2006). |
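A minimal Python sketch of the kind of dynamic program meant here, in the spirit of McDonald (2006): choose a subsequence of words that maximizes a sum of pairwise adjacency scores. The score function is a hypothetical stand-in for a learned model, and boundary scoring is simplified.

    # Viterbi-style decoding: best[j] = best score of a compression
    # whose last kept word is j; runs in O(n^2) score evaluations.

    def dp_compress(words, score):
        """score(i, j): benefit of keeping word j immediately after
        word i in the compression (i = -1 marks the sentence start)."""
        if not words:
            return []
        n = len(words)
        best = [float("-inf")] * n
        back = [-1] * n
        for j in range(n):
            best[j] = score(-1, j)          # word j opens the compression
            for i in range(j):
                cand = best[i] + score(i, j)
                if cand > best[j]:
                    best[j], back[j] = cand, i
        j = max(range(n), key=lambda k: best[k])
        kept = []
        while j != -1:                       # backtrack to recover words
            kept.append(j)
            j = back[j]
        return [words[k] for k in reversed(kept)]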
Introduction | We propose multitask learning (§4) as a principled way to train compressive summarizers, using auxiliary data for extractive summarization and sentence compression.
MultiTask Learning | The goal is to take advantage of existing data for related tasks, such as extractive summarization (task #2) and sentence compression (task #3).
MultiTask Learning | For the sentence compression task, the parts correspond to arc-deletion features only.
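A small Python sketch of this sharing scheme (an assumed setup, not the paper's code): all tasks read from one weight vector, but each task scores only the part types it owns, so the compression task touches arc-deletion weights only.

    # One shared weight vector; each task scores only its own part types.

    TASK_PARTS = {
        "compressive_summarization": ["extraction", "arc_deletion"],
        "extractive_summarization":  ["extraction"],
        "sentence_compression":      ["arc_deletion"],  # arc-deletion only
    }

    def score(weights, features_by_part, task):
        """features_by_part: {part_type: {feature_name: value}};
        weights: {(part_type, feature_name): weight}."""
        total = 0.0
        for part in TASK_PARTS[task]:
            for feat, val in features_by_part.get(part, {}).items():
                total += weights.get((part, feat), 0.0) * val
        return total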
Abstract | We address this challenge with two contributions: first, we introduce the new task of image caption generalization, formulated as visually-guided sentence compression, and present an efficient algorithm based on dynamic beam search with dependency-based constraints.
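A bare-bones Python sketch of beam search over keep/drop decisions with constraint filtering; the scoring and constraint functions are hypothetical stand-ins, and the paper's dynamic beam and visual guidance are not reproduced here.

    # Each hypothesis is a tuple of kept token indices; at each position
    # we branch on keep vs. drop, filter by the dependency constraints,
    # and retain the top-scoring hypotheses.

    def beam_compress(tokens, score_fn, allowed_fn, beam_size=8):
        """score_fn(hyp): score of a partial compression;
        allowed_fn(hyp, i): whether hyp is legal after deciding token i."""
        beam = [()]
        for i in range(len(tokens)):
            candidates = []
            for kept in beam:
                for hyp in (kept + (i,), kept):   # keep vs. drop token i
                    if allowed_fn(hyp, i):
                        candidates.append(hyp)
            beam = sorted(candidates, key=score_fn, reverse=True)[:beam_size]
        best = max(beam, key=score_fn)
        return [tokens[j] for j in best]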
Related Work | In comparison to prior work on sentence compression, our approach falls somewhere between unsupervised and distantly supervised approaches (e.g., Turner and Charniak (2005), Filippova and Strube (2008)) in that there is no in-domain training corpus from which to learn generalization patterns directly.
Sentence Generalization as Constraint Optimization | Casting the generalization task as visually-guided sentence compression with lightweight revisions, we formulate a constraint optimization problem that aims to maximize content selection and local linguistic fluency while satisfying constraints derived from dependency parse trees.
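Schematically (our notation, not the paper's), the optimization has the shape

    \max_{y \in \{0,1\}^n} \; \lambda_1 \, f_{\text{content}}(y) + \lambda_2 \, f_{\text{fluency}}(y) \quad \text{subject to} \quad y \in \mathcal{C}_{\text{dep}},

where y marks which words survive and \mathcal{C}_{\text{dep}} is the set of compressions licensed by the dependency-based constraints.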