Index of papers in Proc. ACL that mention "highest scoring"
McIntyre, Neil and Lapata, Mirella
Conclusions and Future Work
Story creation amounts to traversing the tree and selecting the nodes with the highest score.
Experimental Setup
To evaluate which system configuration was best, we asked two human evaluators to rate (on a 1–5 scale) stories produced in the following conditions: (a) score the candidate stories using the interest function first and then coherence (and vice versa), (b) score the stories simultaneously using both rankers and select the story with the highest score.
Experimental Setup
We also examined how best to prune the search space, i.e., by selecting the highest scoring stories, the lowest scoring ones, or stories chosen at random.
Experimental Setup
The results showed that the evaluators preferred the version of the system that applied both rankers simultaneously and maintained the highest scoring stories in the beam.
The Story Generator
Story generation amounts to traversing the tree and selecting the nodes with the highest score.
The Story Generator
Once we reach the required length, the highest scoring story is presented to the user.
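As a minimal sketch of the traversal these excerpts describe (all names here are assumed for illustration, not taken from the paper), a greedy generator could look like:

```python
# Hypothetical sketch: walk the story tree, picking the highest-scoring
# node at each step, and present the story once it is long enough.
# children() and score() are assumed interfaces, not the paper's code.

def generate_story(root, children, score, target_length):
    node, story = root, [root]
    while len(story) < target_length:
        options = children(node)
        if not options:
            break
        node = max(options, key=score)  # select the node with the highest score
        story.append(node)
    return story  # the highest-scoring story is presented to the user
```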
highest scoring is mentioned in 6 sentences in this paper.
Björkelund, Anders and Kuhn, Jonas
Incremental Search
works incrementally by making a left-to-right pass over the mentions, selecting for each mention the highest scoring antecedent.
Incremental Search
We sketch a proof that this decoder also returns the highest scoring tree.
Incremental Search
Second, this tree is the highest scoring tree.
Introducing Nonlocal Features
When only local features are used, greedy search (either with CLE or the best-first decoder) suffices to find the highest scoring tree.
Introducing Nonlocal Features
The subset of size k (the beam size) of the highest scoring expansions is retained and put back into the agenda for the next step.
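A minimal sketch of this beam step (function names assumed) might look like the following, where only the k highest-scoring expansions survive into the next agenda:

```python
import heapq

# Hypothetical sketch of one beam-search step: expand every item in the
# agenda, then retain only the k (beam size) highest-scoring expansions.
def beam_step(agenda, expand, score, k):
    expansions = [e for item in agenda for e in expand(item)]
    return heapq.nlargest(k, expansions, key=score)
```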
highest scoring is mentioned in 6 sentences in this paper.
Choi, Jinho D. and McCallum, Andrew
Selectional branching
While generating T1, the parser adds tuples (s1j, p2j), ..., (s1j, pkj) to a list λ for each low confidence prediction p1j given s1j. Then, new transition sequences are generated by using the b highest scoring predictions in λ, where b is the beam size.
Selectional branching
Once all transition sequences are generated, a parse tree is built from a sequence with the highest score.
Selectional branching
For each parsing state sij, a prediction is made by generating a feature vector xij ∈ X, feeding it into a classifier C1 that uses a feature map Φ(x, y) and a weight vector w to measure a score for each label y ∈ Y, and choosing a label with the highest score.
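Under the notation in the excerpt, a minimal sketch of this prediction step (with sparse feature dictionaries assumed for Φ and w) is:

```python
# Hypothetical sketch: score every label with a linear model
# w · Φ(x, y) and choose the label with the highest score.
def predict(state, labels, feature_map, w):
    def score(y):
        phi = feature_map(state, y)  # Φ(x, y) as a dict: feature -> value
        return sum(w.get(f, 0.0) * v for f, v in phi.items())
    return max(labels, key=score)
```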
highest scoring is mentioned in 5 sentences in this paper.
Socher, Richard and Bauer, John and Manning, Christopher D. and Ng, Andrew Y.
Introduction
This max-margin, structure-prediction objective (Taskar et al., 2004; Ratliff et al., 2007; Socher et al., 2011b) trains the CVG so that the highest scoring tree will be the correct tree, g_θ(x_i) = y_i, and its score will be larger, up to a margin, than that of other possible trees ŷ ∈ Y(x_i):
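The equation itself did not survive extraction; a standard statement of this kind of max-margin objective, under the excerpt's notation (s for the tree score, Δ for a structured margin) and not necessarily the paper's exact formula, is:

```latex
J(\theta) = \sum_i \Big[ \max_{\hat{y} \in Y(x_i)}
    \big( s(x_i, \hat{y}) + \Delta(y_i, \hat{y}) \big) - s(x_i, y_i) \Big]
```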
Introduction
Intuitively, to minimize this objective, the score of the correct tree y_i is increased and the score of the highest scoring incorrect tree ŷ is decreased.
Introduction
This score will be used to find the highest scoring tree.
highest scoring is mentioned in 5 sentences in this paper.
Xu, Wenduan and Clark, Stephen and Zhang, Yue
Experiments
Again, our parser achieves the highest scores across all metrics (for both the full and reduced test sets), except for precision and lexical category assignment, where Z&C performed better.
Shift-Reduce with Beam-Search
First we describe the deterministic process which a parser would follow when tracing out a single, correct derivation; then we describe how a model of normal-form derivations (or, more accurately, a sequence of shift-reduce actions leading to a normal-form derivation) can be used with beam-search to develop a nondeterministic parser which selects the highest scoring sequence of actions.
Shift-Reduce with Beam-Search
An item becomes a candidate output once it has an empty queue, and the parser keeps track of the highest scored candidate output and returns the best one as the final output.
The Dependency Model
In Algorithm 3 we abuse notation by using HG[0] to denote the highest scoring gold item in the set.
The Dependency Model
We choose to reward the highest scoring gold item, in line with the violation-fixing framework; and penalize the highest scoring incorrect item, using the standard perceptron update.
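A minimal sketch of this update (feature and scoring interfaces assumed) could be:

```python
# Hypothetical sketch: reward the highest-scoring gold item and penalize
# the highest-scoring incorrect item with a standard perceptron update.
def perceptron_update(w, gold_items, incorrect_items, features, score):
    best_gold = max(gold_items, key=score)        # HG[0] in the excerpt
    best_wrong = max(incorrect_items, key=score)
    for f, v in features(best_gold).items():
        w[f] = w.get(f, 0.0) + v                  # reward gold features
    for f, v in features(best_wrong).items():
        w[f] = w.get(f, 0.0) - v                  # penalize incorrect features
```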
highest scoring is mentioned in 5 sentences in this paper.
Rush, Alexander M. and Collins, Michael
A Simple Lagrangian Relaxation Algorithm
(2) find the highest scoring derivation using dynamic programming
A Simple Lagrangian Relaxation Algorithm
In the first step we compute the highest scoring incoming bigram for each leaf v.
The Full Algorithm
(i.e., for each v, compute the highest scoring trigram path ending in v.)
The Full Algorithm
The first step involves finding the highest scoring incoming trigram path for each leaf v.
highest scoring is mentioned in 4 sentences in this paper.
Pei, Wenzhe and Ge, Tao and Chang, Baobao
Max-Margin Tensor Neural Network
For a given training instance (x_i, y_i), we search for the tag sequence with the highest score:
Max-Margin Tensor Neural Network
The objective of max-margin training is that the highest scoring tag sequence is the correct one, y* = y_i, and its score will be larger, up to a margin, than that of other possible tag sequences ŷ ∈ Y(x_i):
Max-Margin Tensor Neural Network
By minimizing this objective, the score of the correct tag sequence y_i is increased and the score of the highest scoring incorrect tag sequence ŷ is decreased.
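As a sketch of the training signal these excerpts describe (enumeration over candidate sequences stands in for the paper's search; all names assumed):

```python
# Hypothetical structured hinge loss: positive whenever the
# highest-scoring (loss-augmented) tag sequence beats the correct one.
def margin_loss(score, hamming, x, y_gold, candidates):
    y_hat = max(candidates, key=lambda y: score(x, y) + hamming(y, y_gold))
    return max(0.0, score(x, y_hat) + hamming(y_hat, y_gold) - score(x, y_gold))
```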
highest scoring is mentioned in 4 sentences in this paper.
Raghavan, Preethi and Fosler-Lussier, Eric and Elhadad, Noémie and Lai, Albert M.
Problem Description
The best hypothesis corresponds to the highest scoring path, which can be obtained using shortest-path algorithms like Dijkstra's algorithm.
Problem Description
Backtracking starts at the highest scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest scoring local alignment.
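The second excerpt describes classic Smith-Waterman backtracking; a compact sketch (scoring parameters assumed) is:

```python
# Hypothetical Smith-Waterman sketch: fill the matrix, backtrack from the
# highest-scoring cell until a zero cell, yielding the best local alignment.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i-1][j-1] + sub, H[i-1][j] + gap, H[i][j-1] + gap)
    # backtracking starts at the highest-scoring matrix cell...
    i, j = max(((r, c) for r in range(n + 1) for c in range(m + 1)),
               key=lambda rc: H[rc[0]][rc[1]])
    out_a, out_b = [], []
    while H[i][j] > 0:  # ...and proceeds until a cell with score zero
        sub = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif H[i][j] == H[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))
```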
Problem Description
For each patient and each method (WFST or dynamic programming), the output timeline to evaluate is the highest scoring candidate hypothesis derived as described above.
highest scoring is mentioned in 4 sentences in this paper.
Szpektor, Idan and Dagan, Ido and Bar-Haim, Roy and Goldberger, Jacob
Contextual Preferences Models
The score of matching two CBC lists, denoted here s_CBC(·, ·), is the score of the highest scoring member that appears in both lists.
Contextual Preferences Models
However, it can have several preferred types for h. When matching h with t, j's match score is that of its highest scoring type, and the final score is the product of all variable scores: m_{v:n}(h, t) = ∏_{j ∈ CP_{v:n}(h)} max_{a ∈ CP_{v:n}(h)[j]} s(a, CP_{v:n}(t)[j]).
Results and Analysis
the highest score in the table.
highest scoring is mentioned in 3 sentences in this paper.
Galley, Michel and Manning, Christopher D.
Dependency parsing for machine translation
When there is no need to ensure projectivity, one can independently select the highest scoring edge (i, j) for each modifier xj, yet we generally want to ensure that the resulting structure is a tree, i.e., that it does not contain any circular dependencies.
Dependency parsing for machine translation
The main idea behind the CLE algorithm is to first greedily select for each word xj the incoming edge (i, j) with the highest score, then to successively repeat the following two steps: (a) identify a loop in the graph, and if there is none, halt; (b) contract the loop into a single vertex, and update scores for edges coming in and out of the loop.
Dependency parsing for machine translation
The greedy approach of selecting the highest scoring edge (i, j) for each modifier xj can easily be applied left-to-right during phrase-based decoding, which proceeds in the same order.
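A minimal sketch of the greedy first step of CLE (the loop detection and contraction steps are omitted; score() is an assumed interface):

```python
# Hypothetical sketch: each modifier j independently selects its
# highest-scoring incoming edge (i, j); index 0 plays the role of root.
def greedy_heads(n_words, score):
    heads = {}
    for j in range(1, n_words + 1):
        candidates = [i for i in range(n_words + 1) if i != j]
        heads[j] = max(candidates, key=lambda i: score(i, j))
    return heads  # may contain cycles; CLE then contracts any loop
```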
highest scoring is mentioned in 3 sentences in this paper.
Zhao, Qiuye and Marcus, Mitch
Abstract
A character-based CWS decoder is to find the highest scoring tag sequence t over the input character sequence c, i.e.
Abstract
A word-based CWS decoder finds the highest scoring segmentation sequence w that is composed by the input character sequence c, i.e.
Abstract
From this view, the highest scoring tagging sequence is computed subject to structural constraints, giving us an inference alternative to Viterbi decoding.
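For reference, a minimal Viterbi sketch for the character-based decoder these excerpts describe (emission and transition scores are assumed interfaces):

```python
# Hypothetical sketch: dynamic program for the highest-scoring tag
# sequence t over the input character sequence c.
def viterbi(chars, tags, emit, trans):
    delta = {t: emit(chars[0], t) for t in tags}   # scores of length-1 prefixes
    back = []
    for c in chars[1:]:
        prev = {t: max(tags, key=lambda p: delta[p] + trans(p, t)) for t in tags}
        delta = {t: delta[prev[t]] + trans(prev[t], t) + emit(c, t) for t in tags}
        back.append(prev)
    seq = [max(tags, key=lambda t: delta[t])]      # best final tag
    for prev in reversed(back):
        seq.append(prev[seq[-1]])                  # follow back-pointers
    return list(reversed(seq))
```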
highest scoring is mentioned in 3 sentences in this paper.
Fader, Anthony and Zettlemoyer, Luke and Etzioni, Oren
Learning
We use a hidden variable version of the perceptron algorithm (Collins, 2002), where the model parameters are updated using the highest scoring derivation y* that will generate the correct query z using the learned lexicon L.
Question Answering Model
For the end-to-end QA task, we return a ranked list of answers from the k highest scoring queries.
Question Answering Model
We score an answer a with the highest score of all derivations that generate a query with answer a.
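A small sketch of this scoring rule (names assumed):

```python
# Hypothetical sketch: each answer gets the highest score among the
# derivations that generate a query producing it; answers are then ranked.
def rank_answers(derivations, answer_of, score, k):
    best = {}
    for d in derivations:
        a = answer_of(d)
        best[a] = max(best.get(a, float('-inf')), score(d))
    return sorted(best, key=best.get, reverse=True)[:k]
```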
highest scoring is mentioned in 3 sentences in this paper.
Kundu, Gourab and Srikumar, Vivek and Roth, Dan
Experiments and Results
For the margin-based algorithm and Theorem 1 from Srikumar et al. (2012), for a new inference problem p in an equivalence class [P], we retrieve all inference problems from the database that belong to the same equivalence class [P] as the test problem p and find the cached assignment y that has the highest score according to the coefficients of p. We only consider cached ILPs whose solution is y when checking the conditions of the theorem.
Introduction
For a new inference problem, if this margin is larger than the sum of the decrease in the score of the previous prediction and any increase in the score of the second best one, then the previous solution will be the highest scoring one for the new problem.
Problem Definition and Notation
The goal of inference is to find the highest scoring global assignment of the variables from a feasible set of assignments, which is defined by linear inequalities.
highest scoring is mentioned in 3 sentences in this paper.
Sartorio, Francesco and Satta, Giorgio and Nivre, Joakim
Dependency Parser
Then the algorithm uses the function score() to evaluate all transitions that can be applied under the current configuration c = (σ, β, A), and it applies the transition with the highest score, updating the current configuration.
Model and Training
Algorithm 2 parses w following Algorithm 1 and using the current ω, until the transition bestT selected with the highest score is incorrect according to A_g.
Model and Training
When this happens, ω is updated by decreasing the weights of the features associated with the incorrect bestT and by increasing the weights of the features associated with the transition bestCorrectT having the highest score among all possible correct transitions.
highest scoring is mentioned in 3 sentences in this paper.
Bhat, Suma and Xue, Huichao and Yoon, Su-Youn
Experimental Setup
We used the ASR data set to train a POS-bigram VSM for the highest score class and generated the cos4 and cosmax measures reported in Yoon and Bhat (2012) for the SM data set, as outlined in Section 4.1.
Models for Measuring Grammatical Competence
• cos4: the cosine similarity score between the test response and the vector of POS bigrams for the highest score class (level 4); and,
Models for Measuring Grammatical Competence
The measure of syntactic complexity of a response, cos4, is its similarity to the highest score class.
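As a sketch of the cos4 measure these excerpts describe (with POS-bigram vectors assumed to be sparse count dictionaries):

```python
import math

# Hypothetical sketch: cosine similarity between a response's POS-bigram
# vector and the vector for the highest score class (level 4).
def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```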
highest scoring is mentioned in 3 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Discourse Dependency Parsing
Thus, the optimal dependency tree for T is a spanning tree with the highest score, obtained through the function DT(T, w): DT(T, w) = argmax_{G_T ⊆ V×R×V} score(T, G_T)
Discourse Dependency Parsing
A dynamic programming table E[i, j, d, c] is used to represent the highest scored subtree spanning e_i to e_j.
Discourse Dependency Parsing
Each node in the graph greedily selects the incoming arc with the highest score.
highest scoring is mentioned in 3 sentences in this paper.