Experimental Setup | We solved an ILP for each document.
Experimental Setup | The ILP model (see Equation (1)) was parametrized as follows: the maximum number of highlights NS was 4, the overall limit on length LT was 75 tokens, the length of each highlight was in the range [8, 28] tokens, and the topic coverage set T contained the top 5 tf.idf words.
Experimental Setup | These parameters were chosen to capture the properties seen in the majority of the training set; they were also relaxed enough to allow a feasible solution of the ILP model (with hard constraints) for all the documents in the test set. |
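Experimental Setup | The parametrized model above can be sketched on a toy instance. The candidate highlights, their scores, and the topic words below are hypothetical, and exhaustive search over the binary selection variables stands in for a real ILP solver; the constraints (at most NS highlights, an LT-token budget, a per-highlight length range, and full topic coverage) mirror the parameters listed above.

```python
from itertools import combinations

# Hypothetical candidate highlights: (score, length_in_tokens, topic_words_covered)
candidates = [
    (0.9, 12, {"court", "ruling"}),
    (0.8, 20, {"appeal"}),
    (0.7, 9,  {"judge", "verdict"}),
    (0.6, 30, {"ruling"}),        # too long: violates the [8, 28] length range
    (0.5, 15, {"verdict"}),
]
NS, LT = 4, 75                    # max number of highlights, overall token budget
L_MIN, L_MAX = 8, 28              # per-highlight length range
TOPICS = {"court", "ruling", "appeal", "judge", "verdict"}  # top-5 tf.idf words

def solve(cands):
    """Exhaustive search over the 0/1 selection variables (stands in for an ILP solver)."""
    best, best_score = None, -1.0
    feasible = [c for c in cands if L_MIN <= c[1] <= L_MAX]
    for k in range(1, NS + 1):
        for subset in combinations(feasible, k):
            if sum(c[1] for c in subset) > LT:
                continue                        # hard length-budget constraint
            covered = set().union(*(c[2] for c in subset))
            if not TOPICS <= covered:
                continue                        # hard topic-coverage constraint
            score = sum(c[0] for c in subset)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

selection, score = solve(candidates)
```

On this instance the over-long candidate is pruned by the length-range constraint, and the remaining four candidates jointly satisfy the budget and coverage constraints.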
Introduction | We encode these constraints through the use of integer linear programming (ILP), a well-studied optimization framework that is able to search the entire solution space efficiently.
Modeling | Our approach therefore uses an ILP formulation which will provide a globally optimal solution, and which can be efficiently solved using standard optimization tools. |
Modeling | These edges are important to our formulation, as they will be represented by binary decision variables in the ILP.
Modeling | ILP model. The merged phrase structure tree, as shown in Figure 2(b), is the actual input to our model.
Related work | Martins and Smith (2009) formulate a joint sentence extraction and summarization model as an ILP . |
Related work | Headline generation models typically extract individual words from a document to produce a very short summary, whereas we extract phrases and ensure that they are combined into grammatical sentences through our ILP constraints. |
Background | When the variables are constrained to be integers, the problem is termed an Integer Linear Program (ILP).
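Background | As a concrete (hypothetical) instance of the definition above, the sketch below solves a two-variable ILP by plain enumeration; a real solver would use branch-and-bound or cutting planes rather than brute force, but the problem shape is the same.

```python
# A tiny ILP solved by enumeration:
#   maximize    3x + 2y
#   subject to  x + y <= 4,  0 <= x <= 2,  y >= 0,  x and y integers.
best = max((3 * x + 2 * y, x, y)
           for x in range(0, 3)       # 0 <= x <= 2
           for y in range(0, 5)       # y <= 4 is implied by x + y <= 4
           if x + y <= 4)
obj, x, y = best                       # optimum at x = 2, y = 2
```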
Experimental Evaluation | Global algorithms We experimented with all 6 combinations of the following two dimensions: (1) Target functions: score-based, probabilistic, and Snow et al.'s; (2) Optimization algorithms: Snow et al.'s greedy algorithm and a standard ILP solver.
Experimental Evaluation | This is the type of global consideration that is addressed in an ILP formulation, but is ignored in a local approach and often overlooked when employing a greedy algorithm. |
Experimental Evaluation | Comparing our use of an ILP algorithm to the greedy one reveals that tuned-LP significantly outperforms its greedy counterpart on both measures (p < .01).
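Experimental Evaluation | The gap between greedy and exact optimization can be seen on a toy budgeted-selection instance (all numbers hypothetical): a greedy pick by score exhausts the budget on a single item, while exhaustive search, standing in for the ILP solver, finds the better global combination.

```python
from itertools import combinations

# Toy budgeted-selection instance: (name, score, cost); numbers are hypothetical.
items = [("a", 10, 10), ("b", 6, 5), ("c", 6, 5)]
BUDGET = 10

def greedy(items, budget):
    """Add items in descending score order while the budget allows."""
    total, spent = 0, 0
    for name, score, cost in sorted(items, key=lambda t: -t[1]):
        if spent + cost <= budget:
            spent += cost
            total += score
    return total

def exact(items, budget):
    """Enumerate all feasible subsets (stands in for an ILP solver)."""
    best = 0
    for k in range(len(items) + 1):
        for subset in combinations(items, k):
            if sum(c for _, _, c in subset) <= budget:
                best = max(best, sum(s for _, s, _ in subset))
    return best
```

Here greedy commits to item "a" and scores 10, whereas the exact search selects "b" and "c" for a total of 12, illustrating the global consideration that a greedy algorithm can miss.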
Introduction | The optimization problem is formulated as an Integer Linear Program (ILP) and solved with an ILP solver. |
Introduction | We show that this leads to an optimal solution with respect to the global function, and demonstrate that the algorithm outperforms methods that utilize only local information by more than 10%, as well as methods that employ a greedy optimization algorithm rather than an ILP solver (Section 6). |
Learning Entailment Graph Edges | Since the variables are binary, both formulations are integer linear programs with O(|V|²) variables and O(|V|³) transitivity constraints that can be solved using standard ILP packages.
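Learning Entailment Graph Edges | The transitivity constraints can be enumerated explicitly. The sketch below does so for a hypothetical 4-node graph: each constraint x_ij + x_jk - x_ik <= 1 over binary edge variables forbids having edges (i, j) and (j, k) without the edge (i, k).

```python
from itertools import permutations

# Enumerate the O(|V|^3) transitivity constraints x_ij + x_jk - x_ik <= 1
# for a hypothetical 4-node entailment graph; each x_uv is a binary edge variable.
V = range(4)
constraints = [((i, j), (j, k), (i, k))
               for i, j, k in permutations(V, 3)]

def satisfies_transitivity(edges):
    """Check a candidate edge set against every transitivity constraint."""
    return all(not (a in edges and b in edges) or c in edges
               for a, b, c in constraints)
```

For |V| = 4 this yields 24 constraints; an edge set such as {(0, 1), (1, 2)} violates transitivity until (0, 2) is added.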