Index of papers in Proc. ACL 2012 that mention
  • ILP
Berant, Jonathan and Dagan, Ido and Adler, Meni and Goldberger, Jacob
Background
proved that this optimization problem, which we term Max-Trans-Graph, is NP-hard, and so described it as an Integer Linear Program (ILP).
Background
Let x_ij be a binary variable indicating the existence of an edge i → j in E. Then, X = {x_ij : i ≠ j} are the variables of the following ILP for Max-Trans-Graph:
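The program itself is cut off in this excerpt; a sketch of its standard form, reconstructed from the definitions above (the weight notation w_ij for the local score of edge i → j is an assumption, not a verbatim quote of the paper):

```latex
% Sketch of the Max-Trans-Graph ILP, reconstructed from the text above;
% w_{ij} denotes the local entailment score of edge i -> j (assumed notation).
\[
\max_{X} \sum_{i \neq j} w_{ij}\, x_{ij}
\quad \text{s.t.} \quad
x_{ij} + x_{jk} - x_{ik} \le 1 \;\; \forall\, i, j, k \;\;\text{(transitivity)},
\qquad x_{ij} \in \{0, 1\}.
\]
```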
Background
Since ILP is NP-hard, applying an ILP solver directly does not scale well because the number of variables is O(|V|²) and the number of constraints is O(|V|³).
Forest-reducible Graphs
In these experiments we show that exactly solving Max-Trans-Graph and Max-Trans-Forest (with an ILP solver) results in nearly identical performance.
Forest-reducible Graphs
An ILP formulation for Max-Trans-Forest is simple: a transitive graph is an FRG if all nodes in its reduced graph have no more than one parent.
Forest-reducible Graphs
Therefore, the ILP is formulated by adding this linear constraint to ILP (1):
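The constraint itself is cut off in the excerpt; one plausible linear encoding of the single-parent condition, stated as an assumption rather than a quote of the paper:

```latex
% If node i has edges to two distinct nodes j and k, then j and k must be
% connected (and hence merge in the reduced graph), so i keeps at most one
% parent; this is an assumed encoding consistent with the FRG definition.
\[
x_{ij} + x_{ik} - x_{jk} - x_{kj} \le 1 \qquad \forall\, i \neq j \neq k.
\]
```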
Introduction
Since finding the optimal set of edges respecting transitivity is NP-hard, they employed Integer Linear Programming ( ILP ) to find the exact solution.
Introduction
Berant et al. (2011) introduced a more efficient exact algorithm, which decomposes the graph into connected components and then applies an ILP solver over each component.
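A minimal sketch of this decomposition strategy, assuming networkx for the graph handling and a hypothetical solve_component_ilp helper standing in for the per-component ILP solver call:

```python
import networkx as nx

def solve_by_components(G, solve_component_ilp):
    """Solve a global edge-selection ILP by decomposing the directed
    graph into connected components (ignoring edge direction) and
    solving each component independently."""
    chosen_edges = set()
    for nodes in nx.connected_components(G.to_undirected()):
        subgraph = G.subgraph(nodes)
        # solve_component_ilp is a hypothetical stand-in for an ILP
        # solver call; it returns the selected edges for this component.
        chosen_edges |= solve_component_ilp(subgraph)
    return chosen_edges
```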
Sequential Approximation Algorithms
This is dramatically more efficient and scalable than applying an ILP solver.
ILP is mentioned in 18 sentences in this paper.
Kuznetsova, Polina and Ordonez, Vicente and Berg, Alexander and Berg, Tamara and Choi, Yejin
Introduction
We employ Integer Linear Programming (ILP) as an optimization framework that has been used successfully in other generation tasks (e.g., Clarke and Lapata (2006), Martins and Smith (2009), Woodsend and Lapata (2010)).
Introduction
Our ILP formulation encodes a rich set of linguistically motivated constraints and weights that incorporate multiple aspects of the generation process.
Introduction
For a query image, we first retrieve candidate descriptive phrases from a large image-caption database using measures of visual similarity. We then generate a coherent description from these candidates using ILP formulations for content planning (§4) and surface realization (§5).
Overview of ILP Formulation
The ILP formulation of §4 addresses T1 & T2, i.e., content planning, and the ILP of §5 addresses T3 & T4, i.e., surface realization.
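For a concrete flavor of such a formulation, here is a minimal PuLP sketch of phrase selection under a budget; the phrases, scores, and budget are invented for the example, and the paper's actual ILPs carry a much richer constraint set:

```python
import pulp

# Toy instance; all phrases and scores are invented for illustration.
phrases = ["a brown dog", "running on the beach",
           "under a blue sky", "wearing a red collar"]
score = {"a brown dog": 0.9, "running on the beach": 0.7,
         "under a blue sky": 0.5, "wearing a red collar": 0.4}
max_phrases = 3

prob = pulp.LpProblem("content_planning", pulp.LpMaximize)
x = {p: pulp.LpVariable(f"x{i}", cat="Binary") for i, p in enumerate(phrases)}
# Objective: total score of the selected phrases.
prob += pulp.lpSum(score[p] * x[p] for p in phrases)
# Budget constraint: select at most max_phrases candidates.
prob += pulp.lpSum(x.values()) <= max_phrases

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in phrases if x[p].value() == 1])
```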
Overview of ILP Formulation
It is possible to create one conjoined ILP formulation to address all four operations T1–T4 at once.
Surface Realization
This trick helps the ILP solver generate sentences with a varying number of phrases, rather than always selecting the maximum number of phrases allowed.
Surface Realization
Baselines: We compare our ILP approaches with two nontrivial baselines: the first is an HMM approach (comparable to Yang et al.
Surface Realization
cognitive phrases: HMM (with) 0.111 | HMM (w/o) 0.114 | ILP (with) 0.114 | ILP (w/o) 0.116
ILP is mentioned in 31 sentences in this paper.
Zhao, Qiuye and Marcus, Mitch
Abstract
However, they are better applied to a word-based model; thus, an integer linear programming (ILP) formulation is proposed.
Abstract
In recent work, interesting results are reported for applications of integer linear programming (ILP) such as semantic role labeling (SRL) (Roth and Yih, 2005), dependency parsing (Martins et al., 2009), and so on.
Abstract
In an ILP formulation, 'nonlocal' deterministic constraints on output structures can be naturally incorporated, such as "a verb cannot take two subject arguments" for SRL, and the projectivity constraint for dependency parsing.
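To make the nonlocal-constraint idea concrete, here is a small PuLP sketch encoding the quoted SRL constraint; all data and variable names are assumptions for illustration, not the cited papers' formulations:

```python
import pulp

# Toy data, invented for illustration.
verbs = ["bought", "sold"]
spans = ["John", "the company", "the shares"]
score = {(v, s): 1.0 for v in verbs for s in spans}
score[("bought", "John")] = 2.0

prob = pulp.LpProblem("srl_subjects", pulp.LpMaximize)
# a[v, s] = 1 iff span s is chosen as the subject argument of verb v.
a = {(v, s): pulp.LpVariable(f"a_{i}_{j}", cat="Binary")
     for i, v in enumerate(verbs) for j, s in enumerate(spans)}
prob += pulp.lpSum(score[v, s] * a[v, s] for v in verbs for s in spans)
# Nonlocal constraint: a verb cannot take two subject arguments.
for v in verbs:
    prob += pulp.lpSum(a[v, s] for s in spans) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
```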
ILP is mentioned in 20 sentences in this paper.
Raghavan, Sindhu and Mooney, Raymond and Ku, Hyeonseo
Introduction
We present an experimental evaluation of our resulting system on a realistic test corpus from DARPA's Machine Reading project, and demonstrate improved performance compared to a purely logical approach based on Inductive Logic Programming (ILP) (Lavrač and Džeroski, 1994), and an alternative SRL approach based on Markov Logic Networks (MLNs) (Domingos and Lowd, 2009).
Learning BLPs to Infer Implicit Facts
We then learn first-order rules from these extracted facts using LIME (McCreath and Sharma, 1998), an ILP system designed for noisy training data.
Learning BLPs to Infer Implicit Facts
Typically, an ILP system takes a set of positive and negative instances for a target relation, along with a background knowledge base (in our case, other facts extracted from the same document) from which the positive instances are potentially inferable.
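(Here ILP stands for Inductive Logic Programming, unlike the integer-programming papers above.) As a hypothetical illustration of the kind of first-order rule such a system induces from positive instances and background facts, with predicate names invented for the example and not taken from the paper's corpus:

```latex
% Hypothetical learned rule; predicates are invented for illustration.
\[
\mathit{nationState}(y) \land \mathit{hasCitizenship}(x, y)
\;\Rightarrow\; \mathit{hasBirthplace}(x, y)
\]
```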
Learning BLPs to Infer Implicit Facts
We initially tried using the popular ALEPH ILP system (Srinivasan, 2001), but it did not produce useful rules, probably due to the high level of noise in our training data.
Related Work
(2010) modify an ILP system similar to FOIL (Quinlan, 1990) to learn rules with probabilistic conclusions.
Related Work
(2010) use FARMER (Nijssen and Kok, 2003), an existing ILP system, to learn first-order rules.
Results and Discussion
However, in contrast to MLNs, BLPs that use first-order rules learned by an off-the-shelf ILP system, with simple intuitive hand-coded weights, are able to provide fairly high-precision inferences that augment the output of an IE system and allow it to effectively "read between the lines."
ILP is mentioned in 7 sentences in this paper.