Index of papers in Proc. ACL 2013 that mention
  • optimization problem
Eidelman, Vladimir and Marton, Yuval and Resnik, Philip
Learning in SMT
The usual presentation of MIRA’s optimization problem is given as a quadratic program:
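For orientation, a standard form of this quadratic program (the structured-MIRA update of Crammer et al.) is sketched below in LaTeX; the notation (previous weights w_t, feature map f, cost Δ, slacks ξ, hypothesis set Y(x_i)) is generic and need not match the paper's own equation.

    \begin{aligned}
    \mathbf{w}_{t+1} \;=\; \operatorname*{argmin}_{\mathbf{w}}\;\;
        & \tfrac{1}{2}\,\lVert \mathbf{w} - \mathbf{w}_t \rVert^2 \;+\; C \sum_{y'} \xi_{y'} \\
    \text{s.t.}\;\;
        & \mathbf{w}^{\top}\bigl(\mathbf{f}(x_i, y_i) - \mathbf{f}(x_i, y')\bigr)
          \;\ge\; \Delta(y_i, y') - \xi_{y'},
          \qquad \xi_{y'} \ge 0 \quad \forall\, y' \in \mathcal{Y}(x_i)
    \end{aligned}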
Learning in SMT
While solving the optimization problem relies on computing the margin between the correct output y_i and y′, in SMT our decoder is often incapable of producing the reference translation, i.e.
Learning in SMT
In this setting, the optimization problem becomes:
The Relative Margin Machine in SMT
The online latent structured soft relative margin optimization problem is then:
The Relative Margin Machine in SMT
The dual in Equation (5) can be optimized using a cutting plane algorithm, an effective method for solving a relaxed optimization problem in the dual, used in Structured SVM, MIRA, and RMM (Tsochantaridis et al., 2004; Chiang, 2012; Shivaswamy and Jebara, 2009a).
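As background on the method named here: a cutting-plane solver alternates between solving the QP restricted to a working set of constraints and calling a separation oracle to find the most violated constraint, stopping once no constraint is violated by more than a tolerance. The Python sketch below is a generic illustration; solve_restricted_qp and find_most_violated are hypothetical callables supplied by the caller, not functions from the paper or any library.

    def cutting_plane(examples, solve_restricted_qp, find_most_violated,
                      epsilon=1e-3, max_iters=100):
        """Generic cutting-plane loop in the style of Tsochantaridis et al. (2004).

        solve_restricted_qp(working_set) -> model fitted to the constraints
            collected so far (hypothetical placeholder).
        find_most_violated(model, example) -> (constraint, violation) for the
            most violated constraint of that example (hypothetical placeholder).
        """
        working_set = []                      # constraints (cuts) added so far
        model = solve_restricted_qp(working_set)
        for _ in range(max_iters):
            added = False
            for ex in examples:
                constraint, violation = find_most_violated(model, ex)
                if violation > epsilon:       # violated beyond tolerance: add a cut
                    working_set.append(constraint)
                    added = True
            if not added:                     # all constraints satisfied to epsilon
                break
            model = solve_restricted_qp(working_set)   # re-solve the restricted QP
        return model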
optimization problem is mentioned in 5 sentences in this paper.
Zhou, Guangyou and Liu, Fang and Liu, Yang and He, Shizhu and Zhao, Jun
Our Approach
By solving the optimization problem in equation (4), we can get the reduced representation of terms and questions.
Our Approach
With the other factor matrices fixed, the update of U_p amounts to the following optimization problem:
Our Approach
Thus, the optimization of equation (5) can be decomposed into M_p optimization problems that can be solved independently, with each corresponding to one row of U_p:
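The excerpt does not reproduce equation (5) itself, but the row-wise decomposition it describes is the standard separability property of regularized least-squares factorization objectives. The form below is an illustrative stand-in (the symbols X, V, λ and the rows x_i, u_i are assumptions, not the paper's notation), showing why each of the M_p row sub-problems can be solved independently, here with a closed-form solution in the unconstrained case:

    \min_{U_p}\; \lVert X - U_p V^{\top} \rVert_F^2 + \lambda\,\lVert U_p \rVert_F^2
      \;=\; \sum_{i=1}^{M_p} \Bigl( \lVert x_i - V u_i \rVert^2 + \lambda\,\lVert u_i \rVert^2 \Bigr),
    \qquad
    u_i^{*} = \bigl(V^{\top} V + \lambda I\bigr)^{-1} V^{\top} x_i .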
optimization problem is mentioned in 5 sentences in this paper.
Almeida, Miguel and Martins, Andre
Compressive Summarization
In previous work, the optimization problem in Eq.
Extractive Summarization
By designing a quality score function g : {0, 1}^N → R, this can be cast as a global optimization problem with a knapsack constraint:
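The general shape of such a knapsack-constrained selection problem is written out below; the cost symbols c_n and the budget B are illustrative stand-ins (e.g., sentence lengths and a word limit), not notation taken from the paper:

    \max_{x \in \{0,1\}^N}\; g(x)
    \qquad \text{s.t.} \qquad \sum_{n=1}^{N} c_n\, x_n \;\le\; B,

where x_n = 1 indicates that the n-th sentence is selected for the summary.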
Extractive Summarization
1, one obtains the following Boolean optimization problem:
Introduction
models (maximizing relevance, and penalizing redundancy) lead to submodular optimization problems (Lin and Bilmes, 2010), which are NP-hard but approximable through greedy algorithms; learning is possible with standard structured prediction algorithms (Sipos et al., 2012; Lin and Bilmes, 2012).
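For concreteness, the greedy approximation alluded to here, in its cost-scaled form (Lin and Bilmes, 2010), repeatedly adds the element with the best marginal gain per unit of cost while the budget allows. The Python sketch below is a generic illustration under assumed inputs (a monotone submodular score f, a cost function, a budget); it is not code from either paper.

    def greedy_budgeted(items, f, cost, budget, r=1.0):
        """Cost-scaled greedy for budgeted monotone submodular maximization.

        f(selected_list) -> float : monotone submodular objective (assumed).
        cost(item)       -> float : positive cost of an item.
        budget           : total cost allowed.
        r                : cost-scaling exponent (r = 1 is the usual choice).
        """
        selected, spent = [], 0.0
        remaining = list(items)
        while remaining:
            base = f(selected)
            best, best_ratio = None, float("-inf")
            for it in remaining:
                if spent + cost(it) > budget:        # skip items that break the budget
                    continue
                gain = f(selected + [it]) - base     # marginal gain of adding `it`
                ratio = gain / (cost(it) ** r)       # gain per (scaled) unit of cost
                if ratio > best_ratio:
                    best, best_ratio = it, ratio
            if best is None or best_ratio <= 0:      # nothing affordable or helpful left
                break
            selected.append(best)
            spent += cost(best)
            remaining.remove(best)
        return selected

The full procedure of Lin and Bilmes (2010) additionally compares the greedy solution against the best single affordable element to obtain its approximation guarantee; that step is omitted here for brevity.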
optimization problem is mentioned in 4 sentences in this paper.
Zhu, Jun and Zheng, Xun and Zhang, Bo
Introduction
Technically, instead of doing standard Bayesian inference via Bayes’ rule, which requires a normalized likelihood model, we propose to do regularized Bayesian inference (Zhu et al., 2011; Zhu et al., 2013b) via solving an optimization problem, where the posterior regularization is defined as an expectation of a logistic loss, a surrogate loss of the expected misclassification error; and a regularization parameter is introduced to balance the surrogate classification loss (i.e., the response log-likelihood) and the word likelihood.
Introduction
Section 2 introduces logistic supervised topic models as a general optimization problem.
Logistic Supervised Topic Models
As noticed in (Jiang et al., 2012), the posterior distribution by Bayes’ rule is equivalent to the solution of an information theoretical optimization problem.
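The equivalence referred to here is the variational representation of Bayes' rule, which regularized Bayesian inference then augments with a posterior-regularization term; the form below follows Zhu et al. (2011) in spirit, with illustrative notation (prior π, data D, regularizer Ω, trade-off parameter c) rather than the paper's own symbols:

    q^{*}(\Theta) \;=\; \operatorname*{argmin}_{q}\;
      \mathrm{KL}\bigl(q(\Theta)\,\Vert\,\pi(\Theta)\bigr)
      \;-\; \mathbb{E}_{q}\bigl[\log p(D \mid \Theta)\bigr],

whose minimizer is the usual posterior q*(Θ) ∝ π(Θ) p(D | Θ); adding a term c·Ω(q) to this objective gives the regularized Bayesian inference problem described in the excerpts above.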
Logistic Supervised Topic Models
supervised topic model (MedLDA) (Jiang et al., 2012), which has the same form of optimization problem.
optimization problem is mentioned in 4 sentences in this paper.
Morita, Hajime and Sasano, Ryohei and Takamura, Hiroya and Okumura, Manabu
Budgeted Submodular Maximization with Cost Function
We argue that our optimization problem can be regarded as an extraction of subtrees rooted at a given node from a directed graph, instead of from a tree.
Conclusions and Future Work
We formalized a query-oriented summarization, which is a task in which one simultaneously performs sentence compression and extraction, as a new optimization problem: budgeted monotone nondecreasing submodular function maximization with a cost function.
Joint Model of Extraction and Compression
An optimization problem with this objective function cannot be regarded as an ILP problem because it contains nonlinear terms.
optimization problem is mentioned in 3 sentences in this paper.