Index of papers in Proc. ACL 2014 that mention
  • objective function
Hermann, Karl Moritz and Blunsom, Phil
Approach
Further, these approaches typically depend on specific semantic signals such as sentiment- or topic-labels for their objective functions.
Approach
This results in the following objective function:
Approach
The objective function in Equation 2 could be coupled with any two given vector composition functions f, g from the literature.
Conclusion
To summarize, we have presented a novel method for learning multilingual word embeddings using parallel data in conjunction with a multilingual objective function for compositional vector models.
Overview
We describe a multilingual objective function that uses a noise-contrastive update between semantic representations of different languages to learn these word embeddings.
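A rough sketch of a noise-contrastive objective of this kind (an illustration in our own notation, not necessarily the paper's exact formulation), with composition functions f and g, a margin m, and sampled noise sentences n:
    J(\theta) = \sum_{(a, b)} \sum_{n} \max\bigl(0,\; m + \|f(a) - g(b)\|^{2} - \|f(a) - g(n)\|^{2}\bigr)
Aligned sentence pairs (a, b) are pulled together while noise sentences n are pushed at least a margin m away.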
objective function is mentioned in 6 sentences in this paper.
Christensen, Janara and Soderland, Stephen and Bansal, Gagan and Mausam
Introduction
SUMMA hierarchically clusters the sentences by time, and then summarizes the clusters using an objective function that optimizes salience and coherence.
Summarizing Within the Hierarchy
4.4 Objective Function
Summarizing Within the Hierarchy
Having estimated salience, redundancy, and two forms of coherence, we can now put this information together into a single objective function that measures the quality of a candidate hierarchical summary.
Summarizing Within the Hierarchy
Intuitively, the objective function should balance salience and coherence.
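One simple way to realize such a balance (an illustrative sketch only; the actual objective in Section 4.4 may differ), with tradeoff weights λ and μ of our own choosing over candidate hierarchical summaries S:
    S^{*} = \arg\max_{S}\; \lambda\, \mathrm{Salience}(S) + (1 - \lambda)\, \mathrm{Coherence}(S) - \mu\, \mathrm{Redundancy}(S)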
objective function is mentioned in 5 sentences in this paper.
Cohen, Shay B. and Collins, Michael
Additional Details of the Algorithm
Next, we modify the objective function in Eq.
Additional Details of the Algorithm
Thus the new objective function consists of a sum of L × M² terms, each corresponding to a different combination of inside and outside features.
Introduction
2) Optimization of a convex objective function using EM.
The Learning Algorithm for L-PCFGs
Step 2: Use the EM algorithm to find parameter values that maximize the objective function in Eq.
objective function is mentioned in 4 sentences in this paper.
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Experimental Results
For a given value of the hyperparameter, we solve the NNSE(Text) and JNNSE(Brain+Text) objective functions as detailed in Equations 1 and 4, respectively.
Joint NonNegative Sparse Embedding
The new objective function is:
Joint NonNegative Sparse Embedding
With A or D fixed, the objective function for NNSE(Text) and JNNSE(Brain+Text) is convex.
NonNegative Sparse Embedding
NNSE solves the following objective function:
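For orientation, the NNSE objective takes roughly the following form (notation ours; X is the word-by-feature matrix, A the non-negative sparse codes, D the dictionary):
    \min_{A, D}\; \sum_{i} \|X_{i,:} - A_{i,:} D\|^{2} + \lambda \|A_{i,:}\|_{1}
    \quad \text{s.t. } A_{ij} \ge 0, \; \|D_{j,:}\|_{2} \le 1
Holding either A or D fixed leaves a convex problem in the other, which is the property the excerpt above attributes to both NNSE(Text) and JNNSE(Brain+Text).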
objective function is mentioned in 4 sentences in this paper.
Lazaridou, Angeliki and Bruni, Elia and Baroni, Marco
Experimental Setup
The weights are estimated by minimizing the objective function
Results
(2013), however, our objective function yielded consistently better results in all experimental settings.
Results
For this post-hoc analysis, we include a sparsity parameter in the objective function of Equation 5 in order to get more interpretable results; hidden units are therefore maximally activated by only a few concepts.
Results
The adaptation of NN is straightforward; the new objective function is derived as
objective function is mentioned in 4 sentences in this paper.
Ma, Xuezhe and Xia, Fei
Experiments
For projective parsing, several algorithms (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012) have been proposed to solve the model training problems (calculation of objective function and gradient) for different factorizations.
Our Approach
We introduce a multiplier as a tradeoff between the two contributions (parallel and unsupervised) of the objective function K, and the final objective function K' has the following form:
Our Approach
To train our parsing model, we need to find the parameters that minimize the objective function K' in Equation (11).
Our Approach
objective function and the gradient of the objective function.
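Read literally, the combined objective has the shape sketched below, with γ standing in for the tradeoff multiplier (our symbol; the paper's Equation (11) may differ in detail):
    K' = K_{\mathrm{parallel}} + \gamma\, K_{\mathrm{unsupervised}}
Training then minimizes K' over the model parameters, which requires evaluating both the objective value and its gradient.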
objective function is mentioned in 4 sentences in this paper.
Nguyen, Thang and Hu, Yuening and Boyd-Graber, Jordan
Adding Regularization
In this section, we briefly review regularizers and then add two regularizers, inspired by Gaussian (L2, Section 3.1) and Dirichlet priors (Beta, Section 3.2), to the anchor objective function (Equation 3).
Adding Regularization
Instead of optimizing a function just of the data x and parameters θ, f(x, θ), one optimizes an objective function that includes a regularizer that is only a function of parameters: f(x, θ) + r(θ).
Adding Regularization
This requires including the topic matrix as part of the objective function.
Anchor Words: Scalable Topic Models
Once we have established the anchor objective function, in the next section we regularize the objective function.
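Concretely, for the Gaussian-inspired (L2) case the regularized anchor objective has the generic shape below, with λ a regularization strength of our own naming:
    \min_{\theta}\; f(x, \theta) + \lambda \|\theta\|_{2}^{2}
The Beta (Dirichlet-inspired) variant swaps in a different parameter-only regularizer r(θ), but the structure of a data term plus regularizer is the same.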
objective function is mentioned in 4 sentences in this paper.
Chang, Yin-Wen and Rush, Alexander M. and DeNero, John and Collins, Michael
Background
Given a sentence e of length |e| = I and a sentence f of length |f| = J, our goal is to find the best bidirectional alignment between the two sentences under a given objective function.
Background
The HMM objective function f : X → R can be written as a linear function of x
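Spelled out, a linear objective over alignments means encoding an alignment as a vector of indicator variables x, one per alignment decision, and scoring it with an inner product against a weight vector θ (this encoding and the symbol θ are our reading, not quoted):
    f(x) = \theta^{\top} x = \sum_{k} \theta_k\, x_k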
Background
Similarly define the objective function
objective function is mentioned in 3 sentences in this paper.
Chaturvedi, Snigdha and Goldwasser, Dan and Daumé III, Hal
Intervention Prediction Models
Similar to the traditional maximum margin based Support Vector Machine (SVM) formulation, our model’s objective function is defined as:
Intervention Prediction Models
Replacing the term f_w(·) with the contents of Equation 1 in the minimization objective above reveals the key difference from the traditional SVM formulation: the objective function has a maximum term inside the global minimization problem, making it non-convex.
Intervention Prediction Models
The algorithm then performs two steps iteratively: first it determines the structural assignments for the negative examples, and then it optimizes the fixed objective function using a cutting plane algorithm.
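A minimal sketch of this kind of objective (ours, not the paper's exact equation): a max-margin problem whose per-example score maximizes over latent structural assignments h,
    \min_{w}\; \tfrac{1}{2}\|w\|^{2} + C \sum_{i} \max\bigl(0,\; 1 - y_i \max_{h} w^{\top} \phi(x_i, h)\bigr)
The max over h inside the outer minimization is what makes the problem non-convex; fixing the structural assignments for the negative examples and then solving the remaining problem with a cutting plane algorithm is the alternation described in the last excerpt.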
objective function is mentioned in 3 sentences in this paper.
Devlin, Jacob and Zbib, Rabih and Huang, Zhongqiang and Lamar, Thomas and Schwartz, Richard and Makhoul, John
Model Variations
For MT feature weight optimization, we use iterative k-best optimization with an Expected-BLEU objective function (Rosti et al., 2010).
Neural Network Joint Model (NNJM)
While we cannot train a neural network with this guarantee, we can explicitly encourage the log-softmax normalizer to be as close to 0 as possible by augmenting our training objective function:
Neural Network Joint Model (NNJM)
Note that α = 0 is equivalent to the standard neural network objective function.
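The augmented objective adds a penalty on the log of the softmax normalizer Z(x); a sketch of the general form, with α the weight referred to in the excerpt above:
    L = \sum_{i} \Bigl[ \log P(y_i \mid x_i) - \alpha \bigl(\log Z(x_i)\bigr)^{2} \Bigr]
Setting α = 0 recovers the standard log-likelihood objective, while larger α drives log Z(x) toward 0 so that unnormalized scores can stand in for probabilities.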
objective function is mentioned in 3 sentences in this paper.
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Relation Identification
The score of graph G (encoded as z) can be written as the objective function ψᵀz, where ψ_e = θᵀg(e).
Relation Identification
To handle the constraint Az ≤ b, we introduce multipliers μ ≥ 0 to get the Lagrangian relaxation of the objective function:
Relation Identification
L(z) is an upper bound on the unrelaxed objective function ψᵀz, and is equal to it if and only if the constraints Az ≤ b are satisfied.
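For concreteness, the standard Lagrangian relaxation construction described here is (symbols as in the excerpts above; μ are the multipliers, Az ≤ b the constraints):
    L(z, \mu) = \psi^{\top} z + \mu^{\top} (b - A z), \qquad \mu \ge 0
Because μ ≥ 0, the term μᵀ(b − Az) is non-negative for every z satisfying Az ≤ b, so the relaxed objective upper-bounds the constrained one and coincides with it when that term vanishes.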
objective function is mentioned in 3 sentences in this paper.
Iyyer, Mohit and Enns, Peter and Boyd-Graber, Jordan and Resnik, Philip
Datasets
Due to this discrepancy, the objective function in Eq.
Experiments
For this model, we also introduce a hyperparameter δ that weights the error at annotated nodes (1 − δ) higher than the error at unannotated nodes (δ); since we have more confidence in the annotated labels, we want them to contribute more towards the objective function.
Recursive Neural Networks
This induces a supervised objective function over all sentences: a regularized sum over all node losses normalized by the number of nodes N in the training set,
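Combining the two excerpts, the objective plausibly takes the shape below (a sketch; δ is the annotation-weighting hyperparameter, λ the regularization strength, ℓ(n; θ) the loss at node n):
    C(\theta) = \frac{1}{N} \sum_{n} w_n\, \ell(n; \theta) + \lambda \|\theta\|^{2}
with w_n = 1 − δ at annotated nodes and w_n = δ at unannotated nodes, so annotated labels contribute more whenever δ < 0.5.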
objective function is mentioned in 3 sentences in this paper.
Kang, Jun Seok and Feng, Song and Akoglu, Leman and Choi, Yejin
Pairwise Markov Random Fields and Loopy Belief Propagation
We next define our objective function.
Pairwise Markov Random Fields and Loopy Belief Propagation
and x to observed ones X (variables with known labels, if any), our objective function is associated with the following joint probability distribution
Pairwise Markov Random Fields and Loopy Belief Propagation
Finding the best assignments to unobserved variables in our objective function is the inference problem.
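The joint distribution in question is the usual pairwise-MRF factorization (generic form; the paper's specific node and edge potentials are its own):
    P(y \mid x) = \frac{1}{Z} \prod_{i \in V} \phi_i(y_i) \prod_{(i,j) \in E} \psi_{ij}(y_i, y_j)
Finding the most likely assignment to the unobserved variables under this distribution is the inference problem the excerpt refers to, which loopy belief propagation approximates.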
objective function is mentioned in 3 sentences in this paper.
Lo, Chi-kiu and Beloucif, Meriem and Saers, Markus and Wu, Dekai
Abstract
However, to go beyond tuning weights in the loglinear SMT model, a cross-lingual objective function that can deeply integrate semantic frame criteria into the MT training pipeline is needed.
Conclusion
While monolingual MEANT alone accurately reflects adequacy via semantic frames and optimizing SMT against MEANT improves translation, the new cross-lingual XMEANT semantic objective function moves closer toward deep integration of semantics into the MT training pipeline.
Introduction
In order to continue driving MT towards better translation adequacy by deeply integrating semantic frame criteria into the MT training pipeline, it is necessary to have a cross-lingual semantic objective function that assesses the semantic frame similarities of input and output sentences.
objective function is mentioned in 3 sentences in this paper.
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Bilingually-constrained Recursive Auto-encoders
After that, we introduce the BRAE in terms of its network structure, objective function and parameter inference.
Bilingually-constrained Recursive Auto-encoders
In the semi-supervised RAE for phrase embedding, the objective function over a (phrase, label) pair (x, t) includes the reconstruction error and the prediction error, as illustrated in Fig.
Bilingually-constrained Recursive Auto-encoders
3.3.1 The Objective Function
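A common way to combine the reconstruction and prediction errors mentioned in the excerpt above (a sketch, with α as an interpolation weight of our own naming):
    E(x, t; \theta) = \alpha\, E_{\mathrm{rec}}(x; \theta) + (1 - \alpha)\, E_{\mathrm{pred}}(x, t; \theta)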
objective function is mentioned in 3 sentences in this paper.