Experiments | of our system that approximates the submodular objective function proposed by Lin and Bilmes (2011). As shown in the results, our best system, which uses the $h_s$ dispersion function, achieves a better ROUGE-1 F-score than all other systems.
Experiments | (3) We also analyze the contributions of individual components of the new objective function towards summarization performance by selectively setting certain parameters to 0. |
Experiments | However, since individual components within our objective function are parametrized, it is easy to tune them for a specific task or genre.
Framework | We start by describing a generic objective function that can be widely applied to several summarization scenarios. |
Framework | This objective function is the sum of a monotone submodular coverage function and a non-submodular dispersion function. |
Framework | We then describe a simple greedy algorithm for optimizing this objective function with provable approximation guarantees for three natural dispersion functions. |
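Framework | A minimal sketch of such a greedy optimizer is given below. It assumes a caller-supplied monotone submodular coverage function and a dispersion function (for example, the minimum pairwise distance within the summary); all names are illustrative rather than the paper's.

```python
def greedy_summarize(candidates, coverage, dispersion, k, lam=1.0):
    """Greedily grow a summary S of size <= k to maximize
    coverage(S) + lam * dispersion(S). `dispersion` must handle
    empty and singleton sets (e.g., by returning 0)."""
    def score(S):
        return coverage(S) + lam * dispersion(S)

    summary = set()
    while len(summary) < k:
        gains = {u: score(summary | {u}) - score(summary)
                 for u in candidates - summary}
        if not gains:
            break
        summary.add(max(gains, key=gains.get))
    return summary
```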
Using the Framework | generate a graph and instantiate our summarization objective function with specific components that capture the desiderata of a given summarization task. |
Using the Framework | We model this property in our objective function as follows. |
Using the Framework | We then add this component to our objective function as $w(S) = \sum_{u \in S} w(u)$.
Budgeted Submodular Maximization with Cost Function | The algorithm iteratively adds to the current summary the element $s_i$ that has the largest ratio of objective function gain to additional cost, unless adding it violates the budget constraint.
Budgeted Submodular Maximization with Cost Function | After the loop, the algorithm compares $G_i$ with the singleton $\{s^*\}$ that has the largest objective function value among all subtrees within the budget, and it outputs the summary candidate with the larger value.
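Budgeted Submodular Maximization with Cost Function | A sketch of this procedure, assuming generic `objective` and `cost` callables (the candidate set would be the subtrees; all names are placeholders):

```python
def budgeted_greedy(candidates, objective, cost, budget):
    """Ratio-based greedy: repeatedly take the candidate with the largest
    objective gain per unit cost, skipping any whose cost would exceed the
    budget; finally compare against the best single candidate under budget."""
    G, spent = [], 0.0
    pool = list(candidates)
    while pool:
        s = max(pool, key=lambda u: (objective(G + [u]) - objective(G)) / cost(u))
        pool.remove(s)
        if spent + cost(s) <= budget:
            G.append(s)
            spent += cost(s)
    affordable = [u for u in candidates if cost(u) <= budget]
    s_star = max(affordable, key=lambda u: objective([u])) if affordable else None
    if s_star is not None and objective([s_star]) > objective(G):
        return [s_star]
    return G
```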
Joint Model of Extraction and Compression | 4.1 Objective Function |
Joint Model of Extraction and Compression | We designed our objective function by combining this relevance score with penalties for redundancy and overly compressed sentences.
Joint Model of Extraction and Compression | The behavior can be represented by a submodular objective function that reduces word scores depending on those already included in the summary. |
Inference with First Order Variables | Express the inference objective as a linear objective function; and
Inference with First Order Variables | For the grammatical error correction task, the variables in the ILP are indicators of the corrections that a word needs, the objective function measures how grammatical the whole sentence is if some corrections are accepted, and the constraints guarantee that the corrections do not conflict with each other.
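Inference with First Order Variables | To make this concrete, here is a small PuLP sketch of such an ILP; the correction candidates, scores, and the single conflict constraint are invented for illustration:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

# Hypothetical correction candidates: (position, replacement, grammaticality score).
corrections = [(0, "The", 0.9), (0, "A", 0.4), (3, "went", 0.7)]

prob = LpProblem("gec_ilp", LpMaximize)
x = [LpVariable(f"x_{i}", cat="Binary") for i in range(len(corrections))]

# Objective: overall grammaticality of the sentence under the accepted corrections.
prob += lpSum(s * x[i] for i, (_, _, s) in enumerate(corrections))

# Constraints: corrections at the same position conflict, so accept at most one.
for pos in {p for p, _, _ in corrections}:
    prob += lpSum(x[i] for i, (p, _, _) in enumerate(corrections) if p == pos) <= 1

prob.solve()
accepted = [c for i, c in enumerate(corrections) if x[i].value() == 1]
```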
Inference with First Order Variables | 3.2 The Objective Function |
Inference with Second Order Variables | A new objective function combines the weights from both first- and second-order variables:
Introduction | The variables of the ILP are indicators of possible grammatical error corrections, the objective function aims to select the best set of corrections, and the constraints help to enforce a valid and grammatical output.
Our Approach | When ignoring the coupling between $V_p$, it can be solved by minimizing the objective function as follows:
Our Approach | Combining equations (1) and (2), we get the following objective function: |
Our Approach | If we set a small value for $\lambda_p$, the objective function behaves like traditional NMF and the importance of data sparseness is emphasized; a large value of $\lambda_p$ indicates that $V_p$ should be very close to $V_1$, and equation (3) aims to remove the noise introduced by statistical machine translation.
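Our Approach | An illustrative form of this trade-off follows; the exact terms of equation (3) are not reproduced here, so this is an assumed reconstruction:

```python
import numpy as np

def combined_objective(Vp, V1, W, H, lam_p):
    """NMF reconstruction error plus a term tying Vp to V1. A small lam_p
    recovers traditional NMF behavior; a large lam_p pulls Vp toward V1,
    suppressing noise from the machine-translated side."""
    nmf_term = np.linalg.norm(Vp - W @ H, "fro") ** 2
    tie_term = lam_p * np.linalg.norm(Vp - V1, "fro") ** 2
    return nmf_term + tie_term
```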
Introduction | Gillick and Favre (Gillick and Favre, 2009) used bigrams as concepts, which are selected from a subset of the sentences, and their document frequency as the weight in the objective function . |
Proposed Method 2.1 Bigram Gain Maximization by ILP | where $\theta_b$ is an auxiliary variable we introduce that is equal to $|n_{b,ref} - \sum_{s} z(s) \cdot n_{b,s}|$, and $n_{b,ref}$ is a constant that can be dropped from the objective function.
Proposed Method 2.1 Bigram Gain Maximization by ILP | To train this regression model using the given reference abstractive summaries, rather than trying to minimize the squared error as typically done, we propose a new objective function.
Proposed Method 2.1 Bigram Gain Maximization by ILP | The objective function for training is thus to minimize the KL distance: |
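Proposed Method 2.1 Bigram Gain Maximization by ILP | A minimal sketch of such a KL training criterion, assuming it compares the normalized reference bigram distribution with the normalized predicted weights (the direction of the divergence is an assumption):

```python
import numpy as np

def kl_objective(pred_weights, ref_counts, eps=1e-12):
    """KL(ref || pred) over bigrams: both inputs are arrays indexed by bigram."""
    p = ref_counts / ref_counts.sum()
    q = np.clip(pred_weights, eps, None)
    q = q / q.sum()
    mask = p > 0  # 0 * log(0) terms contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```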
Related Work | They used a modified objective function in order to consider whether the selected sentence is globally optimal. |
Background | This objective function can be optimized by the stochastic gradient method or other numerical optimization methods. |
Method | The squared-loss criterion is used to formulate the objective function.
Method | Thus, the objective function can be optimized by L-BFGS-B (Zhu et al., 1997), a generic quasi-Newton gradient-based optimizer. |
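Method | For instance, a squared-loss objective with its gradient can be handed directly to an L-BFGS-B routine; this toy example uses SciPy rather than any code from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=100)

def loss_and_grad(w):
    """Squared loss 0.5 * ||Xw - y||^2 and its gradient X^T (Xw - y)."""
    resid = X @ w - y
    return 0.5 * resid @ resid, X.T @ resid

res = minimize(loss_and_grad, np.zeros(5), jac=True, method="L-BFGS-B")
w_opt = res.x  # fitted weights
```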
Method | The first term in Equation (5) is the same as Equation (2), which is the traditional CRF learning objective function on the labeled data.
Related Work | And third, the derived label information from the graph is smoothed into the model by optimizing a modified objective function.
Connotation Induction Algorithms | Objective function: We aim to maximize $F = \Phi_{prosody} + \Phi_{coord} + \Phi_{neu}$.
Connotation Induction Algorithms | $\Phi_{neu}$ is defined analogously as an $\alpha$-weighted sum over the edge weights $w^{ed}_{ij}$. Soft constraints (edge weights): The weights in the objective function are set as follows:
Precision, Coverage, and Efficiency | Objective function: We aim to maximize:
Precision, Coverage, and Efficiency | Hard constraints: We add penalties to the objective function if the polarity of a pair of words is not consistent with its corresponding semantic relations.
Precision, Coverage, and Efficiency | Notice that $ds^+_{ij}$, $ds^-_{ij}$ satisfying the above inequalities will always be negative; hence, in order to maximize the objective function, the LP solver will try to minimize the absolute values of $ds^+_{ij}$, $ds^-_{ij}$, effectively pushing $i$ and $j$ toward the same polarity.
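Precision, Coverage, and Efficiency | A toy PuLP reconstruction of this soft-constraint device (variable names and bounds are assumptions): the two slack variables enter the maximization objective, so the solver drives their magnitudes toward zero and thereby pulls the two polarities together.

```python
from pulp import LpProblem, LpMaximize, LpVariable

prob = LpProblem("polarity_lp", LpMaximize)
pol_i = LpVariable("pol_i", lowBound=-1, upBound=1)
pol_j = LpVariable("pol_j", lowBound=-1, upBound=1)
ds_pos = LpVariable("ds_pos", upBound=0)  # kept <= 0 by the constraints below
ds_neg = LpVariable("ds_neg", upBound=0)

prob += pol_i + pol_j + ds_pos + ds_neg   # objective with penalty terms
prob += ds_pos <= pol_i - pol_j           # together these make the penalty
prob += ds_neg <= pol_j - pol_i           # equal -|pol_i - pol_j| at the optimum
prob.solve()
```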
Bilingually-Guided Dependency Grammar Induction | In that case, we can use a single parameter $\alpha$ to control both weights for the different objective functions.
Bilingually-Guided Dependency Grammar Induction | When $\alpha = 1$, it is the unsupervised objective function in Formula (6).
Bilingually-Guided Dependency Grammar Induction | Conversely, if $\alpha = 0$, it is the projection objective function (Formula (7)) for projected instances.
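Bilingually-Guided Dependency Grammar Induction | The interpolation itself is one line; a sketch with placeholder callables for the two objectives:

```python
def bilingual_objective(theta, unsup_obj, proj_obj, alpha):
    """alpha = 1 recovers the unsupervised objective (Formula (6));
    alpha = 0 recovers the projection objective (Formula (7))."""
    return alpha * unsup_obj(theta) + (1.0 - alpha) * proj_obj(theta)
```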
Unsupervised Dependency Grammar Induction | We select a simple classifier objective function as the unsupervised objective function, which is intuitively in accordance with the parsing objective:
Introduction | Finally, the evolution of the objective function of the adapted K-means is modeled to automatically define the “best” number of clusters.
Polythetic Post-Retrieval Clustering | To ensure convergence, an objective function $Q$ is defined that decreases at each processing step.
Polythetic Post-Retrieval Clustering | The classical objective function is defined in Equation (1), where $\omega_k$ is a cluster labeled $k$, $x_i \in \omega_k$ is an object in the cluster, $m_k$ is the centroid of the cluster $\omega_k$, and $E(\cdot, \cdot)$ is the Euclidean distance.
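Polythetic Post-Retrieval Clustering | A direct NumPy rendering of this classical objective (using squared Euclidean distance, as is standard for K-means):

```python
import numpy as np

def kmeans_objective(X, labels, centroids):
    """Sum over clusters k of the squared distances between each object
    x_i in cluster k and that cluster's centroid m_k."""
    return sum(
        float(np.sum((X[labels == k] - centroids[k]) ** 2))
        for k in range(len(centroids))
    )
```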
Polythetic Post-Retrieval Clustering | A direct consequence of the change in similarity measure is the definition of a new objective function $Q^{SB}$ to ensure convergence.
Model | The objective function is defined as a linear combination of the potentials from different predictors with a parameter $\lambda$ to balance the contribution of the two components: opinion entity identification and opinion relation extraction.
Results | The objective function of ILP-W/O-ENTITY can be represented as |
Results | For ILP-W-SINGLE-RE, we simply remove the variables associated with one opinion relation in the objective function (1) and constraints. |
Results | The formulation of ILP-W/O-IMPLICIT-RE removes the variables associated with potential $r_i$ in the objective function and the corresponding constraints.
Learning | The gradient of the regularised objective function then becomes: |
Learning | We compute the gradient using backpropagation through structure (Goller and Kuchler, 1996), and minimize the objective function using L-BFGS.
Learning | $pred(l = 1 \mid v, \theta) = \mathrm{sigmoid}(W^{label} v + b^{label})$ (9) Given our corpus of CCG parses with label pairs $(N, l)$, the new objective function becomes:
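Learning | A sketch of this label predictor and the cross-entropy term it contributes to the objective; `w_label` and `b_label` are assumed parameter names for a binary label:

```python
import numpy as np

def pred_label(v, w_label, b_label):
    """Eq. (9) sketch: sigmoid probability of the label given node vector v."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w_label, v) + b_label)))

def label_loss(pairs, w_label, b_label, eps=1e-12):
    """Cross-entropy over (vector, label) pairs added to the training objective."""
    total = 0.0
    for v, l in pairs:
        p = pred_label(v, w_label, b_label)
        total += -(l * np.log(p + eps) + (1 - l) * np.log(1 - p + eps))
    return total
```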
Deceptive Answer Prediction with User Preference Graph | The best parameters w* can be found by minimizing the following objective function: |
Deceptive Answer Prediction with User Preference Graph | The new objective function becomes:
Deceptive Answer Prediction with User Preference Graph | In the above objective function, we impose a user-graph regularization term
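Deceptive Answer Prediction with User Preference Graph | A minimal sketch of such a graph-regularized objective; the quadratic smoothness form over user edges is an assumption, not the paper's exact term:

```python
def regularized_objective(w, base_loss, predict, user_edges, lam):
    """base_loss(w) plus lam * sum over linked users (i, j) of (f_i - f_j)^2,
    which encourages connected users to receive similar predicted scores."""
    f = predict(w)  # one score per user/answer
    smoothness = sum((f[i] - f[j]) ** 2 for i, j in user_edges)
    return base_loss(w) + lam * smoothness
```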
Our Proposed Approach | The objective function can be transformed |
Our Proposed Approach | Given the low-dimensional semantic representation of the test data, an objective function can be defined as follows: |
Our Proposed Approach | The story boundaries which minimize the objective function $\mathcal{E}$ in Eq.
WTMF on Graphs | To implement this, we add a regularization term to the objective function of WTMF (equation 2) for each linked pair $Q_{\cdot,j_1}$, $Q_{\cdot,j_2}$:
WTMF on Graphs | Therefore, we approximate the objective function by treating the vector lengths $|Q_{\cdot,j}|$ as fixed values during the ALS iterations:
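WTMF on Graphs | A sketch of the linked-pair regularizer under that approximation, with the column norms $|Q_{\cdot,j}|$ computed once and held fixed (the exact form of the term is an assumption):

```python
import numpy as np

def link_regularizer(Q, links, delta):
    """Penalize the distance between length-normalized latent vectors of each
    linked pair (j1, j2); the norms are treated as constants during ALS."""
    total = 0.0
    for j1, j2 in links:
        q1 = Q[:, j1] / np.linalg.norm(Q[:, j1])
        q2 = Q[:, j2] / np.linalg.norm(Q[:, j2])
        total += delta * float(np.sum((q1 - q2) ** 2))
    return total
```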
Weighted Textual Matrix Factorization | P and Q are optimized by minimizing the objective function:
HMM alignment | Let us rewrite the objective function as follows: |
HMM alignment | Note how this recovers the original objective function when matching variables are found. |
Introduction | This captures the positional information in the IBM models in a framework that admits exact inference for parameter estimation, though the objective function is not concave: local maxima are a concern.
Introduction | We will first briefly introduce single word vector representations and then describe the CVG objective function , tree scoring and inference. |
Introduction | The main objective function in Eq. |
Introduction | The objective function is not differentiable due to the hinge loss. |
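Introduction | The usual remedy is to optimize with a subgradient; a minimal sketch for a max-margin hinge over scores (names are placeholders):

```python
import numpy as np

def hinge_subgradient(s_gold, s_pred, grad_gold, grad_pred):
    """Subgradient of max(0, 1 - (s_gold - s_pred)): zero when the margin
    is satisfied, otherwise the difference of the score gradients."""
    if s_gold - s_pred >= 1.0:
        return np.zeros_like(grad_gold)
    return grad_pred - grad_gold
```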
Surface Realization | Input: relation instances $R = \{(ind_i, arg_i)\}$, generated abstracts $A = \{abs_i\}$, objective function $f$, cost function $G$
Surface Realization | We employ the following objective function: |
Surface Realization | Algorithm 1 sequentially finds the abstract with the greatest ratio of objective function gain to length, and adds it to the summary if the gain is nonnegative.
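Surface Realization | A compact sketch of Algorithm 1's selection loop, with `f` the objective and abstract length as the cost:

```python
def greedy_abstract_selection(abstracts, f, max_len):
    """Pick the abstract with the best gain-to-length ratio each round,
    adding it only if it fits the budget and its gain is nonnegative."""
    summary, pool, used = [], list(abstracts), 0
    while pool:
        a = max(pool, key=lambda x: (f(summary + [x]) - f(summary)) / len(x))
        pool.remove(a)
        if used + len(a) <= max_len and f(summary + [a]) - f(summary) >= 0:
            summary.append(a)
            used += len(a)
    return summary
```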
Abstract | Our analysis shows that its objective function can be efficiently approximated using the negative empirical pointwise mutual information. |
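Abstract | For reference, the negative empirical pointwise mutual information can be computed directly from co-occurrence counts, as in this sketch:

```python
import math

def neg_pmi(pair_counts, x_counts, y_counts, total):
    """-PMI(x, y) = -log( p(x, y) / (p(x) p(y)) ), estimated from counts."""
    return {
        (x, y): -math.log((c / total) /
                          ((x_counts[x] / total) * (y_counts[y] / total)))
        for (x, y), c in pair_counts.items()
    }
```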
Concluding Remarks | In this paper, we derive a new lower-bound approximation to the objective function used in the regularized compression algorithm.
Proposed Method | The new objective function is written out as Equation (4). |