Index of papers in Proc. ACL 2013 that mention
  • score function
Almeida, Miguel and Martins, Andre
Compressive Summarization
Here, we follow the latter work, combining a coverage score function g with sentence-level compression score functions h1, …
Compressive Summarization
For the compression score function, we follow Martins and Smith (2009) and decompose it as a sum of local score functions pmg defined on dependency arcs:
Extractive Summarization
By designing a quality score function g : {0, 1}^N → R, this can be cast as a global optimization problem with a knapsack constraint:
Extractive Summarization
Then, the following quality score function is defined:
score function is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
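The excerpt above casts extractive summarization as optimization under a knapsack constraint. A minimal sketch of that idea, using the common greedy score-per-length heuristic rather than the paper's exact solver; the sentence ids, scores, and lengths are invented for illustration:

```python
def greedy_knapsack_summary(sentences, budget):
    """Pick sentences maximizing total quality score under a length budget.

    `sentences` is a list of (id, score, length) triples; ranking by
    score-per-length is a standard greedy stand-in for an exact ILP.
    """
    ranked = sorted(sentences, key=lambda s: s[1] / s[2], reverse=True)
    chosen, used = [], 0
    for sid, score, length in ranked:
        if used + length <= budget:
            chosen.append(sid)
            used += length
    return chosen

summary = greedy_knapsack_summary(
    [("s1", 3.0, 10), ("s2", 2.0, 5), ("s3", 1.0, 20)], budget=15
)
print(summary)  # ['s2', 's1']
```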
Morita, Hajime and Sasano, Ryohei and Takamura, Hiroya and Okumura, Manabu
Conclusions and Future Work
Since our algorithm requires that the objective function be a sum of word score functions, our proposed method has a restriction: we cannot use an arbitrary monotone submodular function as the objective function for the summary.
Introduction
By formalizing the subtree extraction problem as this new maximization problem, we can treat the constraints regarding the grammaticality of the compressed sentences in a straightforward way and use an arbitrary monotone submodular word score function, including our own word score function (shown later).
Joint Model of Extraction and Compression
The score function is supermodular as a score function of subtree extraction, because the union of two subtrees can have extra word pairs that are not included in either subtree.
Joint Model of Extraction and Compression
Our score function for a summary s is as follows:
score function is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
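The excerpts above restrict the objective to a sum of per-word score functions. A minimal sketch of such an objective, assuming invented word scores and a square-root count discount (diminishing returns, hence monotone submodular):

```python
import math

def word_sum_score(summary_words, word_scores):
    """Score a summary as a sum over word types.

    Each word type contributes score * sqrt(count), so repeated words
    help less and less -- a monotone submodular objective.
    """
    counts = {}
    for w in summary_words:
        counts[w] = counts.get(w, 0) + 1
    return sum(word_scores.get(w, 0.0) * math.sqrt(c) for w, c in counts.items())

scores = {"budget": 2.0, "vote": 1.5}
print(word_sum_score(["budget", "vote", "budget"], scores))
```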
Sartorio, Francesco and Satta, Giorgio and Nivre, Joakim
Dependency Parser
We present here an algorithm that runs the parser in pseudo-deterministic mode, greedily choosing at each configuration the transition that maximizes some score function.
Dependency Parser
Algorithm 1 takes as input a string w and a scoring function score() defined over parser transitions and parser configurations.
Dependency Parser
The scoring function will be the subject of §4 and is not discussed here.
Model and Training
We use a linear model for the score function in Algorithm 1, and define score(t, c) = w · φ(t, c).
score function is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
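The excerpts above describe greedy, pseudo-deterministic decoding with a linear score w · φ(t, c) over transitions and configurations. A minimal sketch under toy assumptions (the feature function, weights, and string-building "transitions" below are invented):

```python
def score(t, c, weights, phi):
    """Linear model: dot product of weights with features of (t, c)."""
    return sum(weights.get(f, 0.0) for f in phi(t, c))

def greedy_decode(config, transitions, apply_fn, is_final, weights, phi):
    """Pseudo-deterministic mode: at each configuration, greedily apply
    the transition that maximizes the linear score."""
    while not is_final(config):
        best = max(transitions(config), key=lambda t: score(t, config, weights, phi))
        config = apply_fn(best, config)
    return config

# Toy demo: configurations are strings, transitions append a symbol,
# and the weights prefer "a".
weights = {"t=a": 1.0, "t=b": 0.2}
phi = lambda t, c: [f"t={t}"]
result = greedy_decode(
    "",
    transitions=lambda c: ["a", "b"],
    apply_fn=lambda t, c: c + t,
    is_final=lambda c: len(c) >= 3,
    weights=weights,
    phi=phi,
)
print(result)  # aaa
```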
Wang, Lu and Raghavan, Hema and Castelli, Vittorio and Florian, Radu and Cardie, Claire
Abstract
Under this framework, we show how to integrate various indicative metrics such as linguistic motivation and query relevance into the compression process by deriving a novel formulation of a compression scoring function.
Introduction
Our tree-based methods rely on a scoring function that allows for easy and flexible tailoring of sentence compression to the summarization task, ultimately resulting in significant improvements for MDS, while at the same time remaining competitive with existing methods in terms of sentence compression, as discussed next.
Sentence Compression
postorder) as a sequence of nodes in T, the set L of possible node labels, a scoring function S for evaluating each sentence compression hypothesis, and a beam size N. Specifically, O is a permutation on the set {0, 1, …
Sentence Compression
Thus, the decoder is quite flexible — its learned scoring function allows us to incorporate features salient for sentence compression while its language model guarantees the linguistic quality of the compressed string.
score function is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
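The excerpts above describe a beam decoder that labels nodes and ranks compression hypotheses with a pluggable scoring function. A minimal sketch of that pattern; the node sequence, label set, and toy scorer are invented:

```python
def beam_decode(nodes, labels, score_fn, beam_size):
    """Return the best-scoring label sequence over `nodes`, keeping at
    most `beam_size` partial hypotheses after each expansion step."""
    beam = [((), 0.0)]
    for node in nodes:
        expanded = [
            (hyp + ((node, lab),), s + score_fn(node, lab))
            for hyp, s in beam
            for lab in labels
        ]
        expanded.sort(key=lambda x: x[1], reverse=True)
        beam = expanded[:beam_size]
    return beam[0]

# Toy scorer: reward keeping content words, penalize keeping "the".
keep_bonus = {"the": -1.0, "big": 0.5, "dog": 1.0}
score_fn = lambda node, lab: keep_bonus[node] if lab == "KEEP" else 0.0
best_hyp, best_score = beam_decode(
    ["the", "big", "dog"], ["KEEP", "DROP"], score_fn, beam_size=4
)
print([n for n, lab in best_hyp if lab == "KEEP"])  # ['big', 'dog']
```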
Zeng, Xiaodong and Wong, Derek F. and Chao, Lidia S. and Trancoso, Isabel
Abstract
The segmentation for an input sentence is decoded by using a joint scoring function combining the two induced models.
Introduction
Moreover, in order to better combine the strengths of the two models, the proposed approach uses a joint scoring function in a log-linear combination form for the decoding in the segmentation phase.
Semi-supervised Learning via Co-regularizing Both Models
3.4 The Joint Score Function for Decoding
Semi-supervised Learning via Co-regularizing Both Models
This paper employs a log-linear interpolation combination (Bishop, 2006) to formulate a joint scoring function based on character-based and word-based models in the decoding:
score function is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
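The excerpts above combine character-based and word-based models with a log-linear interpolation for decoding. A minimal sketch of that combination; the candidate probabilities and interpolation weight are invented:

```python
import math

def joint_score(p_char, p_word, alpha):
    """Log-linear interpolation of two model probabilities:
    log P = alpha * log p_char + (1 - alpha) * log p_word."""
    return alpha * math.log(p_char) + (1 - alpha) * math.log(p_word)

# Toy segmentation candidates: (char-model prob, word-model prob).
candidates = {"seg_a": (0.6, 0.3), "seg_b": (0.2, 0.8)}
best = max(candidates, key=lambda s: joint_score(*candidates[s], alpha=0.5))
print(best)  # seg_a
```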
Özbal, Gözde and Pighin, Daniele and Strapparava, Carlo
Architecture of BRAINSUP
Each partially lexicalized solution is scored by a battery of scoring functions that compete to generate creative sentences respecting the user specification U, as explained in Section 3.3.
Architecture of BRAINSUP
Concerning the scoring of partial solutions and complete sentences, we adopt a simple linear combination of scoring functions.
Architecture of BRAINSUP
…, fk] be the vector of scoring functions and w = [w0, …
score function is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
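The excerpts above score candidates with a simple linear combination of a vector of scoring functions f weighted by w. A minimal sketch; the component functions and weights are invented toys:

```python
def combined_score(candidate, fs, ws):
    """Weighted sum over a battery of scoring functions:
    sum_i ws[i] * fs[i](candidate)."""
    return sum(w * f(candidate) for f, w in zip(fs, ws))

# Toy components: sentence length and letter-"e" count as stand-ins
# for real fluency / content scorers.
fs = [len, lambda s: s.count("e")]
ws = [0.1, 1.0]
print(combined_score("creative sentence", fs, ws))  # 6.7
```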