Index of papers in Proc. ACL 2013 that mention
  • ILP
Almeida, Miguel and Martins, Andre
Compressive Summarization
Eq. 4 was converted into an ILP and fed to an off-the-shelf solver (Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012).
Experiments
All these systems require ILP solvers.
Experiments
We conducted another set of experiments to compare the runtime of our compressive summarizer based on AD3 with the runtimes achieved by GLPK, the ILP solver used by Berg-Kirkpatrick et al. (2011).
Experiments
[Table excerpt: ROUGE-2 results comparing the exact ILP solution with the LP relaxation.]
Extractive Summarization
This can be converted into an ILP and addressed with off-the-shelf solvers (Gillick et al., 2008).
Extractive Summarization
A drawback of this approach is that solving an ILP exactly is NP-hard.
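As a concrete illustration of handing such a formulation to an off-the-shelf solver, the following is a minimal editorial sketch (not code from any of the indexed papers) of a coverage-style extractive summarization ILP modelled with the PuLP library; the sentences, concepts, weights, and length budget are toy values.

    import pulp  # off-the-shelf ILP modelling layer (ships with the CBC solver)

    # Toy input: (length, contained concepts) per sentence, per-concept weights, length budget.
    sentences = {0: (12, {"a", "b"}), 1: (9, {"b", "c"}), 2: (15, {"a", "c", "d"})}
    weights = {"a": 3.0, "b": 2.0, "c": 1.5, "d": 1.0}
    budget = 20

    prob = pulp.LpProblem("extractive_summarization", pulp.LpMaximize)
    s = {j: pulp.LpVariable(f"s_{j}", cat="Binary") for j in sentences}  # sentence j selected?
    c = {i: pulp.LpVariable(f"c_{i}", cat="Binary") for i in weights}    # concept i covered?

    prob += pulp.lpSum(weights[i] * c[i] for i in weights)                   # maximize covered concept weight
    prob += pulp.lpSum(sentences[j][0] * s[j] for j in sentences) <= budget  # respect the length budget
    for i in weights:  # a concept counts as covered only if some selected sentence contains it
        prob += c[i] <= pulp.lpSum(s[j] for j in sentences if i in sentences[j][1])

    prob.solve()
    print("selected sentences:", [j for j in sentences if s[j].value() > 0.5])

Exact solvers such as CBC or GLPK handle toy instances like this instantly; the NP-hardness noted above only bites as the number of sentences and concepts grows.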
Introduction
All approaches above are based on integer linear programming ( ILP ) and suffer from slow runtimes compared to extractive systems.
Introduction
A second inconvenience of ILP-based approaches is that they do not exploit the modularity of the problem, since the declarative specification required by ILP solvers discards important structural information.
ILP is mentioned in 11 sentences in this paper.
Feng, Song and Kang, Jun Seok and Kuznetsova, Polina and Choi, Yejin
Connotation Induction Algorithms
Addressing limitations of graph-based algorithms (§2.2), we propose an induction algorithm based on Integer Linear Programming ( ILP ).
Connotation Induction Algorithms
We formulate insights in Figure 2 using ILP as follows:
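The excerpt above is cut off before the formulation. As a loose editorial illustration of the general shape such a lexicon-induction ILP can take (not the paper's actual objective or constraints), one can introduce binary polarity variables per word:

    \[ \max \sum_{w} \big( s^{+}_{w} x^{+}_{w} + s^{-}_{w} x^{-}_{w} \big)
       \quad \text{s.t.} \quad x^{+}_{w} + x^{-}_{w} \le 1 \;\; \forall w, \qquad
       x^{+}_{u} = x^{+}_{v} \;\; \forall (u, v) \in R, \qquad
       x^{+}_{w}, x^{-}_{w} \in \{0, 1\} \]

Here $s^{\pm}_{w}$ are association scores, $x^{\pm}_{w}$ assign word $w$ a positive or negative connotation, and $R$ is a set of word pairs expected to share polarity (e.g. coordinated words).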
Experimental Result I
Note that a direct comparison against ILP for top N words is tricky, as ILP does not rank results.
Experimental Result I
... ranks based on the frequency of words for ILP.
Experimental Result I
Because of this issue, the performance of the top N words of ILP should be considered only as a conservative measure.
Precision, Coverage, and Efficiency
Efficiency: One practical problem with ILP is efficiency and scalability.
Precision, Coverage, and Efficiency
In particular, we found that it becomes nearly impractical to run the ILP formulation including all words in WordNet plus all words in the argument position in Google Web 1T.
Precision, Coverage, and Efficiency
Interpretation: Unlike ILP , some of the variables result in fractional values.
ILP is mentioned in 13 sentences in this paper.
Kundu, Gourab and Srikumar, Vivek and Roth, Dan
Decomposed Amortized Inference
If the problem cannot be solved using the procedure, then we can either solve the subproblem using a different approach (effectively giving us the standard Lagrangian relaxation algorithm for inference), or we can treat the full instance as a cache miss and make a call to an ILP solver.
Experiments and Results
We used a database engine to cache ILPs and their solutions, along with identifiers for the equivalence class and the value of δ.
Experiments and Results
For the margin-based algorithm and Theorem 1 from (Srikumar et al., 2012), for a new inference problem p ∈ [P], we retrieve all inference problems from the database that belong to the same equivalence class [P] as the test problem p and find the cached assignment y that has the highest score according to the coefficients of p. We only consider cached ILPs whose solution is y for checking the conditions of the theorem.
Experiments and Results
We compare our approach to a state-of-the-art ILP solver and also to Theorem 1 from (Srikumar et al., 2012).
Introduction
In these problems, the inference problem has been framed as an integer linear program ( ILP ).
Margin-based Amortization
If no such problem exists, then we make a call to an ILP solver.
Problem Definition and Notation
The language of 0-1 integer linear programs ( ILP ) provides a convenient analytical tool for representing structured prediction problems.
Problem Definition and Notation
One approach to deal with the computational complexity of inference is to use an off-the-shelf ILP solver for solving the inference problem.
Problem Definition and Notation
Let the set P = {p1, p2, ...} denote previously solved inference problems, along with their respective solutions {y_p1, y_p2, ...}. An equivalence class of integer linear programs, denoted by [P], consists of ILPs which have the same number of inference variables and the same feasible set.
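As an editorial sketch of the caching idea described here (not the authors' implementation), previously solved ILPs can be stored keyed by their equivalence class so that a new problem first tries to reuse a cached solution and only falls back to a full ILP call on a cache miss; feasible_set_id, reusable, and ilp_solve below are hypothetical callables standing in for the paper's machinery.

    from collections import defaultdict

    # equivalence class key -> list of (objective coefficients, cached solution)
    cache = defaultdict(list)

    def amortized_solve(problem, feasible_set_id, reusable, ilp_solve):
        # Same number of variables and same feasible set => same equivalence class.
        key = (len(problem.coefficients), feasible_set_id(problem))
        for cached_coeffs, cached_sol in cache[key]:
            # Reuse test, e.g. a margin-based condition or Theorem 1 of Srikumar et al. (2012).
            if reusable(problem.coefficients, cached_coeffs, cached_sol):
                return cached_sol                 # cache hit: no ILP call needed
        sol = ilp_solve(problem)                  # cache miss: call the ILP solver
        cache[key].append((problem.coefficients, sol))
        return sol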
ILP is mentioned in 18 sentences in this paper.
Li, Chen and Qian, Xian and Liu, Yang
Abstract
In this paper, we propose a bigram based supervised method for extractive document summarization in the integer linear programming ( ILP ) framework.
Abstract
During testing, the sentence selection problem is formulated as an ILP problem to maximize the bigram gains.
Abstract
We demonstrate that our system consistently outperforms the previous ILP method on different TAC data sets, and performs competitively compared to the best results in the TAC evaluations.
Introduction
Many methods have been developed for this problem, including supervised approaches that use classifiers to predict summary sentences, graph based approaches to rank the sentences, and recent global optimization methods such as integer linear programming ( ILP ) and submodular methods.
Introduction
Gillick and Favre (2009) introduced the concept-based ILP for summarization.
Introduction
This ILP method is formally represented as below (see (Gillick and Favre, 2009) for more details):
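The formulation is not reproduced in this index. For reference, the concept-based ILP of Gillick and Favre (2009) is usually written along these lines (notation paraphrased): $c_i$ marks concept $i$ as present in the summary, $s_j$ marks sentence $j$ as selected, $w_i$ is the concept weight, $l_j$ the sentence length, $L$ the length budget, and $\mathrm{Occ}_{ij}$ indicates that concept $i$ occurs in sentence $j$.

    \[ \max \sum_i w_i c_i \quad \text{s.t.} \quad \sum_j l_j s_j \le L, \qquad
       s_j \mathrm{Occ}_{ij} \le c_i \;\; \forall i, j, \qquad
       \sum_j s_j \mathrm{Occ}_{ij} \ge c_i \;\; \forall i, \qquad
       c_i, s_j \in \{0, 1\} \]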
ILP is mentioned in 67 sentences in this paper.
Nakashole, Ndapandula and Tylenda, Tomasz and Weikum, Gerhard
Candidate Types for Entities
Our solution is formalized as an Integer Linear Program ( ILP ).
Candidate Types for Entities
In the following we develop two variants of this approach: a “hard” ILP with rigorous disjointness constraints, and a “soft” ILP which considers type correlations.
Candidate Types for Entities
“Hard” ILP with Type Disjointness Constraints.
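As an editorial illustration of such hard type-disjointness constraints (not necessarily the paper's exact objective), one binary variable per candidate type with pairwise exclusion suffices:

    \[ \max \sum_{t} w_t x_t \quad \text{s.t.} \quad x_t + x_{t'} \le 1 \;\; \text{for every disjoint type pair } (t, t'), \qquad x_t \in \{0, 1\} \]

where $w_t$ scores how plausible type $t$ is for the entity; the “soft” variant replaces the hard exclusions with penalties derived from type correlations.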
ILP is mentioned in 18 sentences in this paper.
Wu, Yuanbin and Ng, Hwee Tou
Abstract
We use integer linear programming ( ILP ) to model the inference process, which can easily incorporate both the power of existing error classifiers and prior knowledge on grammatical error correction.
Inference with First Order Variables
The inference problem for grammatical error correction can be stated as follows: “Given an input sentence, choose a set of corrections which results in the best output sentence.” In this paper, this problem will be expressed and solved by integer linear programming ( ILP ).
Introduction
... integer linear programming ( ILP ).
Introduction
Variables of ILP are indicators of possible grammatical error corrections, the objective function aims to select the best set of corrections, and the constraints help to enforce a valid and grammatical output.
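For illustration (an editorial sketch, not necessarily the paper's exact constraints), the usual way to encode one decision per correction site in such a formulation is a multiple-choice constraint over the indicator variables:

    \[ \max \sum_{s} \sum_{k} \lambda_{s,k} x_{s,k} \quad \text{s.t.} \quad \sum_{k} x_{s,k} = 1 \;\; \forall s, \qquad x_{s,k} \in \{0, 1\} \]

where $x_{s,k} = 1$ selects correction option $k$ (including “leave unchanged”) at site $s$, $\lambda_{s,k}$ is the classifier score for that option, and further grammatical constraints are added as extra linear inequalities.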
Introduction
Furthermore, ILP not only provides a method to solve the inference problem, but also allows for a natural integration of grammatical constraints into a machine learning approach.
Related Work
The difference between their work and our ILP approach is that the beam-search decoder returns an approximate solution to the original inference problem, while ILP returns an exact solution to an approximate inference problem.
ILP is mentioned in 33 sentences in this paper.
Yang, Bishan and Cardie, Claire
Experiments
For joint inference, we used GLPK to provide the optimal ILP solution.
Introduction
... Choi et al. (2006), which proposed an ILP approach to jointly identify opinion holders, opinion expressions and their IS-FROM linking relations, and demonstrated the effectiveness of joint inference.
Introduction
Their ILP formulation, however, does not handle implicit linking relations, i.e.
Model
Note that in our ILP formulation, the label assignment for a candidate span involves one multiple-choice decision among different opinion entity labels and the “NONE” entity label.
Model
This makes our ILP formulation advantageous over the ILP formulation proposed in Choi et al. (2006).
Related Work
... Choi et al. (2006), which jointly extracts opinion expressions, holders and their IS-FROM relations using an ILP approach.
Related Work
In contrast, our approach (1) also considers the IS-ABOUT relation which is arguably more complex due to the larger variety in the syntactic structure exhibited by opinion expressions and their targets, (2) handles implicit opinion relations (opinion expressions without any associated argument), and (3) uses a simpler ILP formulation.
Results
To demonstrate the effectiveness of different potentials in our joint inference model, we consider three variants of our ILP formulation that omit some potentials in the joint inference: one is ILP-W/O-ENTITY, which extracts opinion relations without integrating information from opinion entity identification; one is ILP-W-SINGLE-RE, which focuses on extracting a single opinion relation and ignores the information from the other relation; the third one is ILP-W/O-IMPLICIT-RE, which omits the potential for the opinion-implicit-arg relation and assumes every opinion expression is linked to an explicit argument.
Results
It can be viewed as an extension to the ILP approach in Choi et al. (2006) that includes opinion targets and uses a simpler ILP formulation with only one parameter and fewer binary variables and constraints to represent entity label assignments.
ILP is mentioned in 14 sentences in this paper.
Cai, Shu and Knight, Kevin
Computing the Metric
ILP method.
Computing the Metric
We can get an optimal solution using integer linear programming ( ILP ).
Computing the Metric
Finally, we ask the ILP solver to maximize:
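The objective is truncated in this index. Roughly, the Smatch ILP maximizes the number of matched triples under a one-to-one mapping between the variables of the two AMRs; a paraphrased (editorial, approximate) form is:

    \[ \max \sum_{t} z_t \quad \text{s.t.} \quad \sum_{j} v_{ij} \le 1 \;\; \forall i, \qquad \sum_{i} v_{ij} \le 1 \;\; \forall j, \qquad
       z_t \le v_{ij} \;\; \text{for each variable pair } (i, j) \text{ that match } t \text{ requires}, \qquad v_{ij}, z_t \in \{0, 1\} \]

where $v_{ij} = 1$ maps variable $i$ of one AMR to variable $j$ of the other and $z_t = 1$ records that a pair of triples matches under that mapping.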
Introduction
We investigate how to compute this metric and provide several practical and replicable computing methods by using Integer Linear Programming ( ILP ) and a hill-climbing method.
Using Smatch
• ILP: Integer Linear Programming
Using Smatch
Each individual smatch score is a document-level score of 4 AMR pairs. ILP scores are optimal, so lower scores (in bold) indicate search errors.
Using Smatch
Table 2 summarizes search accuracy as a percentage of smatch scores that equal that of ILP .
ILP is mentioned in 9 sentences in this paper.
Morita, Hajime and Sasano, Ryohei and Takamura, Hiroya and Okumura, Manabu
Joint Model of Extraction and Compression
An optimization problem with this objective function cannot be regarded as an ILP problem because it contains nonlinear terms.
Related Work
Integer linear programming ( ILP ) formulations can represent such flexible constraints, and they are commonly used to model text summarization (McDonald, 2007).
Related Work
Berg-Kirkpatrick et al. (2011) formulated a unified task of sentence extraction and sentence compression as an ILP.
Related Work
However, it is hard to solve large-scale ILP problems exactly in a practical amount of time.
ILP is mentioned in 4 sentences in this paper.
Goldwasser, Dan and Roth, Dan
Semantic Interpretation Model
We follow (Goldwasser et al., 2011; Clarke et al., 2010) and formalize semantic inference as an Integer Linear Program ( ILP ).
Semantic Interpretation Model
We then proceed to augment this model with domain-independent information, and connect the two models by constraining the ILP model.
Semantic Interpretation Model
We take advantage of the flexible ILP framework and encode these restrictions as global constraints.
ILP is mentioned in 3 sentences in this paper.
Li, Peifeng and Zhu, Qiaoming and Zhou, Guodong
Experimentation
To achieve an optimal solution, we formulate the global inference problem as an Integer Linear Program ( ILP ), which maximizes the objective function.
Experimentation
ILP is a mathematical method for constraint-based inference that finds the optimal values for a set of variables, maximizing an objective function while satisfying a number of constraints.
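In standard form, such a 0-1 ILP is

    \[ \max_{x \in \{0,1\}^{n}} c^{\top} x \quad \text{s.t.} \quad A x \le b \]

with the objective coefficients $c$ encoding the model scores and the constraints $(A, b)$ encoding the structural restrictions of the inference problem.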
Experimentation
In the literature, ILP has been widely used in many NLP applications (e.g., Barzilay and Lapata, 2006; Do et al., 2012; Li et al., 2012b).
ILP is mentioned in 3 sentences in this paper.