Index of papers in Proc. ACL that mention
  • optimization problem
Anzaroot, Sam and Passos, Alexandre and Belanger, David and McCallum, Andrew
Abstract
Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training.
Background
The MAP inference task in a CRF can be expressed as an optimization problem with a lin-
Background
Furthermore, a subgradient of D(λ) is Ay* − b, for a y* which maximizes this inner optimization problem.
Soft Constraints in Dual Decomposition
Consider optimization problems of the form: max.
Soft Constraints in Dual Decomposition
This optimization problem can still be solved with projected subgradient descent and is depicted in Algorithm 2.
Soft Constraints in Dual Decomposition
Each penalty Ci has to be nonnegative; otherwise, the optimization problem in equation (5) is ill-defined.
optimization problem is mentioned in 6 sentences in this paper.
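The snippets above describe MAP inference via dual decomposition with soft constraints, solved by projected subgradient (Algorithm 2 in the paper). The following is a minimal illustrative sketch, not the authors' code: it assumes a hypothetical map_oracle(lam) that returns the maximizing assignment y* of the base model with the dual penalties folded into its scores, constraints of the form Ay <= b, and per-constraint violation costs c_i; following the paper's soft-constraint result, the dual variables are projected onto the box [0, c_i].

```python
import numpy as np

def soft_dd_subgradient(map_oracle, A, b, c, step=0.1, iters=100):
    """Minimal sketch of projected subgradient for dual decomposition with
    soft constraints A @ y <= b, each violable at nonnegative cost c[i].
    `map_oracle` is a hypothetical callback: given dual variables lam, it
    returns the maximizing assignment y* of the penalized base model."""
    lam = np.zeros(len(b))
    for _ in range(iters):
        y_star = map_oracle(lam)          # inner maximization (MAP with penalties)
        subgrad = A @ y_star - b          # subgradient of the dual D(lam)
        lam = lam - step * subgrad        # descent step on the dual objective
        lam = np.clip(lam, 0.0, c)        # project onto the box [0, c_i]
    return lam
```

The only change relative to hard-constraint dual decomposition is the upper clipping at c_i, which is what makes the constraints soft.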
Eidelman, Vladimir and Marton, Yuval and Resnik, Philip
Learning in SMT
The usual presentation of MIRA’s optimization problem is given as a quadratic program:
Learning in SMT
While solving the optimization problem relies on computing the margin between the correct output yi and y’, in SMT our decoder is often incapable of producing the reference translation, i.e.
Learning in SMT
In this setting, the optimization problem becomes:
The Relative Margin Machine in SMT
The online latent structured soft relative margin optimization problem is then:
The Relative Margin Machine in SMT
The dual in Equation (5) can be optimized using a cutting plane algorithm, an effective method for solving a relaxed optimization problem in the dual, used in Structured SVM, MIRA, and RMM (Tsochantaridis et al., 2004; Chiang, 2012; Shivaswamy and Jebara, 2009a).
optimization problem is mentioned in 5 sentences in this paper.
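For reference, the quadratic program in the usual MIRA presentation has a closed-form solution in the 1-best case. The sketch below shows that generic passive-aggressive-style update; it is not the paper's latent relative-margin variant, and the feature vectors, loss value, and regularization constant C are illustrative placeholders.

```python
import numpy as np

def mira_1best_update(w, feat_gold, feat_hyp, loss, C=0.01):
    """Generic 1-best MIRA / passive-aggressive update (illustrative sketch,
    not the paper's exact formulation). Moves w just enough that the gold
    output outscores the hypothesis by a margin equal to its loss, with the
    step size capped at C."""
    delta = feat_gold - feat_hyp              # feature-vector difference
    violation = loss - w.dot(delta)           # margin violation (hinge argument)
    norm_sq = delta.dot(delta)
    if violation > 0 and norm_sq > 0:
        tau = min(C, violation / norm_sq)     # clipped dual step size
        w = w + tau * delta
    return w
```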
Zhou, Guangyou and Liu, Fang and Liu, Yang and He, Shizhu and Zhao, Jun
Our Approach
By solving the optimization problem in equation (4), we can get the reduced representation of terms and questions.
Our Approach
With the other factor matrices fixed, the update of Up amounts to the following optimization problem:
Our Approach
Thus, the optimization of equation (5) can be decomposed into Mp optimization problems that can be solved independently, with each corresponding to one row of Up:
optimization problem is mentioned in 5 sentences in this paper.
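The row-wise decomposition described above is the familiar pattern of alternating least-squares updates for matrix factorization. The sketch below is a generic illustration under that assumption and is not the paper's exact objective: with the other factor V held fixed, each row of U solves an independent ridge-regression problem with a closed-form solution.

```python
import numpy as np

def update_factor_rows(X, V, reg=0.1):
    """Illustrative alternating-least-squares step (not the paper's model):
    solve min_U ||X - U V^T||_F^2 + reg * ||U||_F^2 with V fixed.
    The problem separates over rows of U; row m depends only on row m of X."""
    k = V.shape[1]
    gram = V.T @ V + reg * np.eye(k)        # shared k x k system matrix
    U = np.zeros((X.shape[0], k))
    for m in range(X.shape[0]):             # one independent subproblem per row
        U[m] = np.linalg.solve(gram, V.T @ X[m])
    return U
```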
Cohen, Shay B. and Collins, Michael
Introduction
We show that once the matrix decomposition step has been applied, parameter estimation of the L-PCFG can be reduced to a convex optimization problem that is easily solved by EM.
The Learning Algorithm for L-PCFGS
Crucially, this is a convex optimization problem, and the EM algorithm will converge to the global maximum of this likelihood function.
The Learning Algorithm for L-PCFGS
Now consider the optimization problem in Eq.
The Learning Algorithm for L-PCFGS
al., 2012) gives one set of guarantees; the remaining optimization problems we solve are convex maximum-likelihood problems, which are also relatively easy to analyze.
The Matrix Decomposition Algorithm
This is again a convex optimization problem.
optimization problem is mentioned in 5 sentences in this paper.
Berant, Jonathan and Dagan, Ido and Goldberger, Jacob
Abstract
We define a graph structure over predicates that represents entailment relations as directed edges, and use a global transitivity constraint on the graph to learn the optimal set of edges, by formulating the optimization problem as an Integer Linear Program.
Background
A Linear Program (LP) is an optimization problem, where a linear function is minimized (or maximized) under linear constraints.
Conclusion
We used Integer Linear Programming to solve the optimization problem, which is theoretically sound, and demonstrated empirically that this method outperforms local algorithms as well as a greedy optimization algorithm on the graph learning task.
Introduction
The optimization problem is formulated as an Integer Linear Program (ILP) and solved with an ILP solver.
optimization problem is mentioned in 4 sentences in this paper.
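To make the ILP formulation concrete, here is a small sketch using the PuLP modeling library; it is an illustration, not the authors' implementation. Binary variables x[i, j] select directed entailment edges scored by local weights w[i][j], and transitivity is enforced with the standard constraint x_ij + x_jk − x_ik ≤ 1.

```python
import pulp

def entailment_ilp(nodes, w):
    """Sketch of a transitivity-constrained edge-selection ILP (illustrative,
    not the authors' code). `w[i][j]` is the local score for edge i -> j."""
    prob = pulp.LpProblem("entailment_graph", pulp.LpMaximize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in nodes for j in nodes if i != j}
    # Objective: total score of selected edges.
    prob += pulp.lpSum(w[i][j] * x[i, j] for (i, j) in x)
    # Transitivity: if i -> j and j -> k are selected, i -> k must be too.
    for i in nodes:
        for j in nodes:
            for k in nodes:
                if len({i, j, k}) == 3:
                    prob += x[i, j] + x[j, k] - x[i, k] <= 1
    prob.solve()
    return {e for e, var in x.items() if var.value() == 1}
```

The number of transitivity constraints grows cubically in the number of nodes, which is why exact ILP solving becomes expensive on large graphs.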
Berant, Jonathan and Dagan, Ido and Goldberger, Jacob
Background
Last, we formulate our optimization problem as an Integer Linear Program (ILP).
Background
ILP is an optimization problem where a linear objective function over a set of integer variables is maximized under a set of linear constraints.
Learning Typed Entailment Graphs
Section 4.2 gives an ILP formulation for the optimization problem.
Learning Typed Entailment Graphs
The edges returned by the solver provide an optimal (not approximate) solution to the optimization problem .
optimization problem is mentioned in 4 sentences in this paper.
Almeida, Miguel and Martins, Andre
Compressive Summarization
In previous work, the optimization problem in Eq.
Extractive Summarization
By designing a quality score function g : {0, 1}^N → R, this can be cast as a global optimization problem with a knapsack constraint:
Extractive Summarization
1, one obtains the following Boolean optimization problem:
Introduction
models (maximizing relevance and penalizing redundancy) lead to submodular optimization problems (Lin and Bilmes, 2010), which are NP-hard but approximable through greedy algorithms; learning is possible with standard structured prediction algorithms (Sipos et al., 2012; Lin and Bilmes, 2012).
optimization problem is mentioned in 4 sentences in this paper.
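The greedy approximation mentioned above for submodular objectives with a knapsack (budget) constraint is commonly implemented as a cost-benefit greedy. The following sketch is generic, assumes strictly positive costs, and uses hypothetical gain and cost callbacks rather than the paper's scoring function g.

```python
def budgeted_greedy(candidates, gain, cost, budget):
    """Cost-benefit greedy for budgeted submodular maximization; an
    illustrative sketch, not the paper's exact algorithm. `gain(selected, c)`
    returns the marginal gain of adding c; `cost(c)` must be positive."""
    selected, spent = [], 0.0
    remaining = set(candidates)
    while remaining:
        best, best_ratio = None, 0.0
        for c in remaining:
            if spent + cost(c) <= budget:
                ratio = gain(selected, c) / cost(c)   # gain per unit of budget
                if ratio > best_ratio:
                    best, best_ratio = c, ratio
        if best is None:
            break
        selected.append(best)
        spent += cost(best)
        remaining.remove(best)
    return selected
```

The standard analysis also compares the greedy solution against the best single affordable item to obtain a constant-factor approximation guarantee.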
Zhu, Jun and Zheng, Xun and Zhang, Bo
Introduction
Technically, instead of doing standard Bayesian inference via Bayes’ rule, which requires a normalized likelihood model, we propose to do regularized Bayesian inference (Zhu et al., 2011; Zhu et al., 2013b) via solving an optimization problem, where the posterior regularization is defined as an expectation of a logistic loss, a surrogate loss of the expected misclassification error; and a regularization parameter is introduced to balance the surrogate classification loss (i.e., the response log-likelihood) and the word likelihood.
Introduction
Section 2 introduces logistic supervised topic models as a general optimization problem.
Logistic Supervised Topic Models
As noticed in (Jiang et al., 2012), the posterior distribution by Bayes’ rule is equivalent to the solution of an information theoretical optimization problem
Logistic Supervised Topic Models
supervised topic model (MedLDA) (Jiang et al., 2012), which has the same form of optimization problems.
optimization problem is mentioned in 4 sentences in this paper.
DeNero, John and Macherey, Klaus
Model Inference
The Lagrangian relaxation of this optimization problem is L(a, b, c(a), c(b), u) =
Model Inference
We can form a dual problem that is an upper bound on the original optimization problem by swapping the order of min and max.
Model Inference
Hence, it is also a solution to our original optimization problem: Equation 3.
optimization problem is mentioned in 3 sentences in this paper.
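The "swap the order of min and max" step is weak duality; stated generically for a Lagrangian over primal variables y and dual variables u (a restatement for clarity, not the paper's exact derivation):

```latex
% Weak duality: for any Lagrangian L(y, u) over primal variables y
% and dual variables u,
\max_{y}\; \min_{u}\, L(y, u) \;\le\; \min_{u}\; \max_{y}\, L(y, u)
% so the dual on the right upper-bounds the original maximization, and any
% fixed u certifies an upper bound on the primal optimum.
```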
Berant, Jonathan and Dagan, Ido and Adler, Meni and Goldberger, Jacob
Background
proved that this optimization problem, which we term Max-Trans-Graph, is NP-hard, and so described it as an Integer Linear Program (ILP).
Forest-reducible Graphs
Consequently, a natural variant of the Max-Trans-Graph problem is to restrict the required output graph of the optimization problem (1) to an FRG.
Introduction
(2010) formulated the problem of learning entailment rules as a graph optimization problem, where nodes are predicates and edges represent entailment rules that respect transitivity.
optimization problem is mentioned in 3 sentences in this paper.
Vaswani, Ashish and Huang, Liang and Chiang, David
Method
Substituting back into (4) and dropping constant terms, we get the following optimization problem: minimize
Method
This optimization problem is non-convex, and we do not know of a closed-form solution.
Method
Gradient projection methods are attractive solutions to constrained optimization problems, particularly when the constraints on the parameters are simple (Bertsekas, 1999).
optimization problem is mentioned in 3 sentences in this paper.
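As a concrete illustration of gradient projection with "simple" constraints: assuming the parameters live on probability simplices (an assumption here, chosen because it is the typical case for translation probabilities), each iteration takes a gradient step and projects back onto the feasible set. The projection below is the standard sort-based Euclidean projection onto the simplex, not code from the paper.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} via the standard sort-based algorithm."""
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def projected_gradient_step(theta, grad, step=0.05):
    """One projected-gradient step for minimization over the probability
    simplex (an illustrative assumption about the constraint set)."""
    return project_to_simplex(theta - step * grad)
```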
Yamaguchi, Hiroshi and Tanaka-Ishii, Kumiko
Calculation of Cross-Entropy
In this section, we briefly introduce two methods previously studied by (Juola, 1997) and (Teahan, 2000) as representative of the two types, and we further explain a modification that we integrate into the final optimization problem.
Conclusion
The segmentation task was modeled as an optimization problem of finding the best segment and language sequences to minimize the description length of a given text.
Introduction
This article presents one way to formulate the segmentation and identification problem as a combinatorial optimization problem; specifically, to find the set of segments and their languages that minimizes the description length of a given multilingual text.
optimization problem is mentioned in 3 sentences in this paper.
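The description-length-minimizing segmentation described above can be searched with a dynamic program over segment boundaries. The sketch below is a generic formulation under that assumption, not the paper's exact procedure; dl(text, i, j) is a hypothetical callback returning the code length (in bits) of span text[i:j] under its best-fitting language model plus the cost of encoding the language label.

```python
def mdl_segmentation(text, dl, max_len=200):
    """Generic dynamic program over segment boundaries (illustrative sketch):
    minimize the total description length of `text`, where dl(text, i, j)
    is the code length of the span text[i:j]."""
    n = len(text)
    best = [float("inf")] * (n + 1)   # best[j] = minimal code length of text[:j]
    back = [0] * (n + 1)              # back[j] = start of the last segment ending at j
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            cand = best[i] + dl(text, i, j)
            if cand < best[j]:
                best[j], back[j] = cand, i
    segments, j = [], n
    while j > 0:                      # recover (start, end) spans from back pointers
        segments.append((back[j], j))
        j = back[j]
    return list(reversed(segments))
```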
Morita, Hajime and Sasano, Ryohei and Takamura, Hiroya and Okumura, Manabu
Budgeted Submodular Maximization with Cost Function
We argue that our optimization problem can be regarded as an extraction of subtrees rooted at a given node from a directed graph, instead of from a tree.
Conclusions and Future Work
We formalized a query-oriented summarization, which is a task in which one simultaneously performs sentence compression and extraction, as a new optimization problem: budgeted monotone nondecreasing submodular function maximization with a cost function.
Joint Model of Extraction and Compression
An optimization problem with this objective function cannot be regarded as an ILP problem because it contains nonlinear terms.
optimization problem is mentioned in 3 sentences in this paper.
Chen, Liwei and Feng, Yansong and Huang, Songfang and Qin, Yong and Zhao, Dongyan
Introduction
We formalize this procedure as a constrained optimization problem, which can be solved by many optimization frameworks.
The Framework
This is an NP-hard optimization problem.
The Framework
After the optimization problem is solved, we will obtain a list of selected candidate relations for each tuple, which will be our final output.
optimization problem is mentioned in 3 sentences in this paper.
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Abstract
The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints.
Related Work
Extensions allow for higher-order (non-edge-local) features, often making use of relaxations to solve the NP-hard optimization problem.
Relation Identification
We frame the task as a constrained combinatorial optimization problem.
optimization problem is mentioned in 3 sentences in this paper.
Parikh, Ankur P. and Cohen, Shay B. and Xing, Eric P.
Abstract
Unfortunately, finding the global maximum for these objective functions is usually intractable (Cohen and Smith, 2012) which often leads to severe local optima problems (but see Gormley and Eisner, 2013).
Abstract
Directly attempting to maximize the likelihood unfortunately results in an intractable optimization problem and greedy heuristics are often employed (Harmeling and Williams, 2011).
Abstract
Therefore, the procedure to find a bracketing for a given POS sequence x is to first estimate the distance matrix sub-block from raw text data (see §3.4), and then solve the optimization problem arg min over u ∈ U using a variant of the Eisner-Satta algorithm.
optimization problem is mentioned in 3 sentences in this paper.