Index of papers in Proc. ACL 2012 that mention
  • objective function
Kuznetsova, Polina and Ordonez, Vicente and Berg, Alexander and Berg, Tamara and Choi, Yejin
Image-level Content Planning
4.1 Variables and Objective Function The following set of indicator variables encodes the selection of objects and ordering:
Image-level Content Planning
The objective function, F, that we will maximize is a weighted linear combination of these indicator variables and can be optimized using integer linear programming:
Image-level Content Planning
We use IBM CPLEX to optimize this objective function subject to the constraints introduced next in §4.2.
Surface Realization
5.1 Variables and Objective Function The following set of variables encodes the selection of phrases and their ordering in constructing sentences.
Surface Realization
Finally, we define the objective function F as:
Surface Realization
the objective function (Eq.
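Taken together, the snippets describe an objective F that is a weighted linear combination of binary indicator variables, optimized with an ILP solver (IBM CPLEX). Below is a minimal sketch of an ILP of that shape, using the open-source PuLP solver in place of CPLEX; the objects, weights, and budget constraint are illustrative, not the paper's actual formulation.

```python
# Minimal sketch of an ILP whose objective is a weighted linear
# combination of binary indicator variables. Uses the open-source
# PuLP solver in place of IBM CPLEX; objects, weights, and the
# budget constraint are illustrative only.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

objects = ["dog", "frisbee", "grass"]
weights = {"dog": 0.9, "frisbee": 0.7, "grass": 0.2}  # hypothetical scores

prob = LpProblem("content_planning", LpMaximize)
x = {o: LpVariable(f"x_{o}", cat=LpBinary) for o in objects}  # selection indicators

# Objective F: weighted linear combination of the indicators.
prob += lpSum(weights[o] * x[o] for o in objects)

# Example constraint: mention at most two objects.
prob += lpSum(x.values()) <= 2

prob.solve()
print({o: value(x[o]) for o in objects})  # e.g. {'dog': 1.0, 'frisbee': 1.0, 'grass': 0.0}
```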
objective function is mentioned in 9 sentences in this paper.
He, Xiaodong and Deng, Li
Abstract
Objective function We denote by θ the set of all the parameters to be optimized, including forward phrase and lexicon translation probabilities and their backward counterparts.
Abstract
Therefore, we design the objective function to be maximized as:
Abstract
First, we propose a new objective function (Eq.
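The snippets describe maximizing an objective over translation model parameters θ. As a toy illustration of maximizing an expected-score objective U(θ) = Σ_y p_θ(y)·score(y) over an n-best list by gradient ascent, with made-up features and sentence scores rather than the paper's actual model:

```python
# Toy sketch of maximizing an expected-score objective
# U(theta) = sum_y p_theta(y) * score(y) over an n-best list,
# where p_theta is a softmax over linear hypothesis scores.
# Features and sentence-level scores are made up for illustration.
import numpy as np

feats = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.3]])  # one row per hypothesis
score = np.array([0.8, 0.6, 0.1])                        # e.g. sentence-level quality
theta = np.zeros(2)

for _ in range(200):
    p = np.exp(feats @ theta)
    p /= p.sum()                          # p_theta(y)
    # grad U = sum_y p(y) * score(y) * (f(y) - E_p[f])
    grad = (p * score) @ (feats - p @ feats)
    theta += 0.5 * grad                   # gradient ascent step

print(theta, p @ score)  # expected score rises as mass shifts to good hypotheses
```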
objective function is mentioned in 6 sentences in this paper.
Liu, Chang and Ng, Hwee Tou
Discussion and Future Work
Accordingly, our objective function is replaced by:
The Algorithm
3.4 The Objective Function
The Algorithm
We now define our objective function in terms of the variables.
The Algorithm
We are also constrained by the linear programming framework, hence we set the objective function as
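The snippets set an objective within a linear programming framework. A minimal sketch of one LP of that flavor, a relaxed bipartite matching that maximizes total similarity, using SciPy's linprog; the similarity matrix is illustrative and this is not the paper's actual formulation.

```python
# Relaxed bipartite matching as an LP: maximize sum_ij s_ij * m_ij
# with 0 <= m_ij <= 1 and each row/column matched at most once.
# The similarity matrix is illustrative.
import numpy as np
from scipy.optimize import linprog

s = np.array([[0.9, 0.1],
              [0.4, 0.8]])          # similarity between items i and j
n, m = s.shape

c = -s.flatten()                    # linprog minimizes, so negate to maximize
A, b = [], []
for i in range(n):                  # each row matched at most once
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
    A.append(row); b.append(1.0)
for j in range(m):                  # each column matched at most once
    col = np.zeros(n * m); col[j::m] = 1
    A.append(col); b.append(1.0)

res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1), method="highs")
print(res.x.reshape(n, m))          # near-integral matching: (0,0) and (1,1)
```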
objective function is mentioned in 6 sentences in this paper.
Berant, Jonathan and Dagan, Ido and Adler, Meni and Goldberger, Jacob
Background
The objective function is the sum of weights over the edges of g, and the constraint x_ij + x_jk − x_ik ≤ 1 on the binary variables enforces that whenever x_ij = x_jk = 1, then also x_ik = 1 (transitivity).
Sequential Approximation Algorithms
Then, at each iteration a single node v is reattached (see below) to the FRG in a way that improves the objective function.
Sequential Approximation Algorithms
This is repeated until the value of the objective function cannot be improved anymore by reattaching a node.
Sequential Approximation Algorithms
Clearly, at each reattachment the value of the objective function cannot decrease, since the optimization algorithm considers the previous graph as one of its candidate solutions.
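The reattachment procedure these snippets describe is a local search: improve the objective one node at a time until no reattachment helps. A generic sketch of that loop, with the graph representation and objective as illustrative stand-ins for the paper's FRG machinery:

```python
# Sketch of the improve-until-convergence loop described above:
# repeatedly reattach a node wherever it most improves the objective,
# stopping when no reattachment helps. Representation and objective
# are illustrative stand-ins.
def local_search(nodes, attachments, objective, candidate_attachments):
    best = objective(attachments)
    improved = True
    while improved:
        improved = False
        for v in nodes:
            for a in candidate_attachments(v, attachments):
                trial = dict(attachments, **{v: a})
                val = objective(trial)
                # The current attachment is always among the candidates,
                # so the objective can never decrease at a reattachment.
                if val > best:
                    best, attachments, improved = val, trial, True
    return attachments, best
```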
objective function is mentioned in 5 sentences in this paper.
Whitney, Max and Sarkar, Anoop
Abstract
It is a bootstrapping learning method which uses a graph propagation algorithm with a well-defined objective function.
Existing algorithms 3.1 Yarowsky
Haffari and Sarkar (2007) provide an objective function for this algorithm using a generalized definition of cross-entropy in terms of Bregman distance, which motivates our objective in section 4.
Graph propagation
6.5 Objective function
Introduction
Variants of this algorithm have been formalized as optimizing an objective function in previous work by Abney (2004) and Haffari and Sarkar (2007), but it is not clear that any perform as well as the Yarowsky algorithm itself.
Introduction
well-understood as minimizing an objective function at each iteration, and it obtains state-of-the-art performance on several different NLP data sets.
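As a rough illustration of the kind of bootstrapping loop being analyzed (seed rules label the data, confident new rules are added, repeat until a fixed point), here is a minimal Yarowsky-style sketch; the scoring rule and threshold are illustrative, not the algorithm's exact form:

```python
# Minimal Yarowsky-style bootstrapping sketch: start from seed rules,
# label the data, add the most confident new rules, and repeat.
# Scoring rule and threshold are illustrative.
from collections import Counter, defaultdict

def bootstrap(examples, seed_rules, threshold=0.8, iters=10):
    rules = dict(seed_rules)             # feature -> label
    labels = {}
    for _ in range(iters):
        labels = {}
        for i, feats in enumerate(examples):
            votes = Counter(rules[f] for f in feats if f in rules)
            if votes:
                labels[i] = votes.most_common(1)[0][0]
        counts = defaultdict(Counter)    # feature -> label counts
        for i, y in labels.items():
            for f in examples[i]:
                counts[f][y] += 1
        new_rules = dict(rules)
        for f, c in counts.items():
            label, n = c.most_common(1)[0]
            if n / sum(c.values()) >= threshold:
                new_rules[f] = label
        if new_rules == rules:           # fixed point: no rule changes
            break
        rules = new_rules
    return rules, labels
```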
objective function is mentioned in 5 sentences in this paper.
Sun, Xu and Wang, Houfeng and Li, Wenjie
Related Work
The SGD uses a small, randomly selected subset of the training samples to approximate the gradient of an objective function.
System Architecture
…, n, parameter estimation is performed by maximizing the objective function,
System Architecture
The final objective function is as follows:
System Architecture
E(w_t) = w* + ∏_{m=1}^{t} (I − γ_0 β_m H(w*)) (w_0 − w*), where w* is the optimal weight vector, and H is the Hessian matrix of the objective function.
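The snippets describe stochastic gradient descent: estimate the gradient of the objective from a small random subset of the training samples. A minimal sketch on an illustrative least-squares objective:

```python
# Minimal SGD sketch: approximate the gradient of the objective from
# a small randomly selected minibatch of the training samples.
# The least-squares objective and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
for t in range(500):
    idx = rng.choice(len(X), size=32, replace=False)    # random minibatch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # gradient estimate
    w -= 0.05 * grad                                    # descent step
print(w)  # close to w_true
```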
objective function is mentioned in 4 sentences in this paper.
Guo, Weiwei and Diab, Mona
Limitations of Topic Models and LSA for Modeling Sentences
In effect, LSA allows missing and observed words to equally impact the objective function.
Limitations of Topic Models and LSA for Modeling Sentences
Moreover, the true semantics of the concept definitions is actually related to some missing words, but such true semantics will not be favored by the objective function, since Equation 2 allows for too strong an impact by X_ij = 0 for any missing word.
The Proposed Approach
The model parameters (vectors in P and Q) are optimized by minimizing the objective function:
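The snippets motivate down-weighting missing words in the objective. A sketch of a weighted matrix factorization in that spirit, where missing cells get a small weight w_m and the factors are fit by alternating least squares; sizes, weights, and regularizer are illustrative, not the paper's settings:

```python
# Weighted matrix factorization sketch: missing cells (X_ij = 0) get
# a small weight w_m so they influence the objective far less than
# observed words. Fit by alternating least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 80)) * (rng.random((50, 80)) < 0.1)  # mostly "missing"
W = np.where(X > 0, 1.0, 0.01)     # w_m = 0.01 for missing words
k, lam = 10, 0.1
P = rng.normal(scale=0.1, size=(50, k))
Q = rng.normal(scale=0.1, size=(80, k))

for _ in range(20):
    for i in range(X.shape[0]):    # update row vectors P_i
        Wi = np.diag(W[i])
        P[i] = np.linalg.solve(Q.T @ Wi @ Q + lam * np.eye(k), Q.T @ Wi @ X[i])
    for j in range(X.shape[1]):    # update column vectors Q_j
        Wj = np.diag(W[:, j])
        Q[j] = np.linalg.solve(P.T @ Wj @ P + lam * np.eye(k), P.T @ Wj @ X[:, j])

loss = (W * (P @ Q.T - X) ** 2).sum() + lam * ((P**2).sum() + (Q**2).sum())
print(loss)  # each full ALS sweep should not increase this objective
```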
objective function is mentioned in 3 sentences in this paper.
Vaswani, Ashish and Huang, Liang and Chiang, David
Method
With the addition of the ℓ0 prior, the MAP (maximum a posteriori) objective function is
Method
Let F(θ) be the objective function in
Method
(Note that we don’t allow m = 0 because this can cause θ′ + θ_m to land on the boundary of the probability simplex, where the objective function is undefined.)
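The snippets describe a MAP objective combining a likelihood with an ℓ0 prior, undefined on the simplex boundary. Below is a sketch of evaluating such an objective with one common smoothing of the ℓ0 norm, the Gaussian approximation of Mohimani et al. (2009), which may differ from the paper's exact form; the likelihood and parameters are toy values:

```python
# Sketch of a MAP objective with a smoothed l0 prior: log-likelihood
# minus alpha * (smoothed count of nonzero parameters), using the
# Gaussian approximation ||theta||_0 ~= n - sum_i exp(-theta_i^2 / (2*sigma^2))
# (Mohimani et al., 2009). One common smoothing, not necessarily the
# paper's exact form.
import numpy as np

def map_objective(theta, loglik, alpha=1.0, sigma=0.05):
    smoothed_l0 = len(theta) - np.exp(-theta**2 / (2 * sigma**2)).sum()
    return loglik(theta) - alpha * smoothed_l0  # maximize: likelihood minus sparsity penalty

# Toy likelihood over a probability simplex; interior points only,
# since the objective is undefined on the boundary (log 0).
counts = np.array([5.0, 3.0, 0.0, 0.0])
loglik = lambda th: counts @ np.log(th)

theta = np.array([0.5, 0.3, 0.1, 0.1])
print(map_objective(theta, loglik))
```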
objective function is mentioned in 3 sentences in this paper.