Index of papers in Proc. ACL 2014 that mention
  • highest scoring
Björkelund, Anders and Kuhn, Jonas
Incremental Search
works incrementally by making a left-to-right pass over the mentions, selecting for each mention the highest scoring antecedent.
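The left-to-right selection step described in this sentence can be sketched as follows; the mention representation and the `score` callback are hypothetical stand-ins, not the paper's actual model:

```python
def greedy_antecedents(mentions, score):
    """Left-to-right pass: for each mention, select the highest scoring
    antecedent among the preceding mentions (None = start a new entity).
    `score(antecedent, mention)` is a hypothetical scoring function."""
    links = []
    for j, m in enumerate(mentions):
        # candidates: all earlier mentions plus the "no antecedent" option
        candidates = mentions[:j] + [None]
        links.append(max(candidates, key=lambda a: score(a, m)))
    return links

# Toy scorer: link a mention to an earlier mention with the same string,
# otherwise prefer starting a new entity (None).
def toy_score(a, m):
    if a is None:
        return 0.0
    return 1.0 if a == m else -1.0

print(greedy_antecedents(["Anna", "she", "Anna"], toy_score))
```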
Incremental Search
We sketch a proof that this decoder also returns the highest scoring tree.
Incremental Search
Second, this tree is the highest scoring tree.
Introducing Nonlocal Features
When only local features are used, greedy search (either with CLE or the best-first decoder) suffices to find the highest scoring tree.
Introducing Nonlocal Features
The subset of size k (the beam size) of the highest scoring expansions is retained and put back into the agenda for the next step.
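A single step of the beam search described above can be sketched generically; the `expand` and `score` callbacks are hypothetical placeholders for the problem-specific parts:

```python
import heapq

def beam_step(agenda, expand, score, k):
    """One beam-search step: expand every item in the agenda, then keep
    only the k highest scoring expansions (k is the beam size)."""
    expansions = [e for item in agenda for e in expand(item)]
    return heapq.nlargest(k, expansions, key=score)

# Toy example: items are partial bit sequences, expanded by appending
# 0 or 1; the score of a sequence is just its sum.
expand = lambda seq: [seq + (0,), seq + (1,)]
beam = [()]
for _ in range(3):
    beam = beam_step(beam, expand, sum, k=2)
print(beam)  # the two highest-sum length-3 sequences survive
```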
highest scoring is mentioned in 6 sentences in this paper.
Xu, Wenduan and Clark, Stephen and Zhang, Yue
Experiments
Again, our parser achieves the highest scores across all metrics (for both the full and reduced test sets), except for precision and lexical category assignment, where Z&C performed better.
Shift-Reduce with Beam-Search
First we describe the deterministic process which a parser would follow when tracing out a single, correct derivation; then we describe how a model of normal-form derivations — or, more accurately, a sequence of shift-reduce actions leading to a normal-form derivation — can be used with beam-search to develop a nondeterministic parser which selects the highest scoring sequence of actions.
Shift-Reduce with Beam-Search
An item becomes a candidate output once it has an empty queue, and the parser keeps track of the highest scored candidate output and returns the best one as the final output.
The Dependency Model
In Algorithm 3 we abuse notation by using HG[0] to denote the highest scoring gold item in the set.
The Dependency Model
We choose to reward the highest scoring gold item, in line with the violation-fixing framework; and penalize the highest scoring incorrect item, using the standard perceptron update.
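The update described in this sentence can be sketched as a standard perceptron step; the feature representation and item types below are hypothetical illustrations, not the paper's parser items:

```python
def perceptron_update(weights, gold_beam, all_beam, features, lr=1.0):
    """Violation-fixing-style perceptron step as described above: reward
    the highest scoring gold item and penalize the highest scoring
    incorrect item. `features` maps an item to a dict of feature counts
    (a hypothetical representation)."""
    score = lambda item: sum(weights.get(f, 0.0) * v
                             for f, v in features(item).items())
    best_gold = max(gold_beam, key=score)
    best_wrong = max((i for i in all_beam if i not in gold_beam), key=score)
    for f, v in features(best_gold).items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in features(best_wrong).items():
        weights[f] = weights.get(f, 0.0) - lr * v
    return weights

# Toy run: items are strings, features are character indicators.
print(perceptron_update({}, ["ab"], ["ab", "cd"],
                        lambda s: {c: 1.0 for c in s}))
```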
highest scoring is mentioned in 5 sentences in this paper.
Pei, Wenzhe and Ge, Tao and Chang, Baobao
Max-Margin Tensor Neural Network
For a given training instance (x_i, y_i), we search for the tag sequence with the highest score:
Max-Margin Tensor Neural Network
The objective of Max-Margin training is that the highest scoring tag sequence is the correct one: y* = y_i, and its score will be larger, up to a margin, than that of other possible tag sequences ŷ ∈ Y(x_i):
Max-Margin Tensor Neural Network
By minimizing this objective, the score of the correct tag sequence y_i is increased and the score of the highest scoring incorrect tag sequence ŷ is decreased.
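The max-margin objective for one instance can be sketched as a structured hinge loss; the fixed margin constant here is a simplifying assumption (in practice the margin often scales with the number of tagging errors):

```python
def structured_hinge(scores, gold, margin):
    """Structured hinge loss for one instance: zero only if the gold
    sequence outscores every other candidate by at least the margin.
    Minimizing it raises the gold score and lowers the score of the
    highest scoring incorrect candidate. `scores` maps each candidate
    tag sequence to its model score."""
    loss = 0.0
    for y, s in scores.items():
        if y != gold:
            loss = max(loss, s + margin - scores[gold])
    return loss

# Toy example: gold "BE" scores 2.0, best wrong "BB" scores 1.5,
# margin 1.0 -> the margin is violated by 0.5.
print(structured_hinge({"BE": 2.0, "BB": 1.5, "EB": 0.3}, "BE", 1.0))
```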
highest scoring is mentioned in 4 sentences in this paper.
Raghavan, Preethi and Fosler-Lussier, Eric and Elhadad, Noémie and Lai, Albert M.
Problem Description
The best hypothesis corresponds to the highest scoring path, which can be obtained using shortest path algorithms like Dijkstra's algorithm.
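A minimal Dijkstra sketch for this step follows; it assumes scores have already been converted to non-negative edge costs (e.g., negative log probabilities), since Dijkstra's algorithm requires non-negative weights:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over non-negative edge costs.
    `graph[u]` is a list of (v, cost) pairs; returns (path, cost)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the path by walking predecessors back from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy lattice: s -> a -> b -> t (cost 3) beats s -> b -> t (cost 5).
g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0)], "b": [("t", 1.0)]}
print(shortest_path(g, "s", "t"))
```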
Problem Description
Backtracking starts at the highest scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest scoring local alignment.
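The fill-and-backtrack procedure described in this sentence matches Smith-Waterman local alignment; the sketch below uses standard textbook match/mismatch/gap parameters, not necessarily the paper's scoring scheme:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment: fill the score matrix, then
    backtrack from the highest scoring cell until a zero cell is
    encountered, yielding the highest scoring local alignment."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best, best_ij = 0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_ij = H[i][j], (i, j)
    # Backtracking: start at the highest scoring cell, stop at score zero.
    i, j = best_ij
    out_a, out_b = [], []
    while H[i][j] > 0:
        diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
        if H[i][j] == diag:
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i-1, j-1
        elif H[i][j] == H[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), best

print(smith_waterman("xxABCyy", "zzABCww"))
```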
Problem Description
For each patient and each method (WFST or dynamic programming), the output timeline to evaluate is the highest scoring candidate hypothesis derived as described above.
highest scoring is mentioned in 4 sentences in this paper.
Bhat, Suma and Xue, Huichao and Yoon, Su-Youn
Experimental Setup
We used the ASR data set to train a POS-bigram VSM for the highest score class and generated cos4 and cosmax, reported in Yoon and Bhat (2012), for the SM data set as outlined in Section 4.1.
Models for Measuring Grammatical Competence
• cos4: the cosine similarity score between the test response and the vector of POS bigrams for the highest score class (level 4); and,
Models for Measuring Grammatical Competence
The measure of syntactic complexity of a response is its similarity to the highest score class.
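The similarity computation described here can be sketched as cosine similarity between sparse POS-bigram count vectors; the tag sequences and the "level 4" aggregate below are hypothetical illustrations, not the paper's data:

```python
import math
from collections import Counter

def pos_bigrams(tags):
    """POS bigram counts for a tagged response."""
    return Counter(zip(tags, tags[1:]))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical illustration: similarity of a test response to the
# aggregated POS-bigram vector of the highest score class (level 4).
level4_vector = pos_bigrams(["DT", "NN", "VBZ", "DT", "NN"])
response = pos_bigrams(["DT", "NN", "VBZ"])
print(round(cosine(response, level4_vector), 3))
```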
highest scoring is mentioned in 3 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Discourse Dependency Parsing
Thus, the optimal dependency tree for T is the spanning tree with the highest score, obtained through the function DT(T, w): DT(T, w) = argmax_{G_T ⊆ V×R×V} score(T, G_T)
Discourse Dependency Parsing
A dynamic programming table E[i, j, d, c] is used to represent the highest scored subtree spanning e_i to e_j.
Discourse Dependency Parsing
Each node in the graph greedily selects the incoming arc with the highest score.
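This greedy selection is the first step of a Chu-Liu-Edmonds-style maximum-spanning-tree decoder; the sketch below implements only that step (if the result contains a cycle, CLE would contract it and repeat), and `score(h, d)` is a hypothetical arc scorer:

```python
def greedy_incoming(nodes, root, score):
    """Every non-root node greedily selects its highest scoring incoming
    arc; returns a dict mapping each dependent to its chosen head.
    Cycle contraction, the remainder of Chu-Liu-Edmonds, is omitted."""
    heads = {}
    for d in nodes:
        if d == root:
            continue
        heads[d] = max((h for h in nodes if h != d),
                       key=lambda h: score(h, d))
    return heads

# Toy arc scores: the root wins node 1, node 1 wins node 2.
scores = {(0, 1): 5.0, (2, 1): 1.0, (0, 2): 2.0, (1, 2): 4.0}
print(greedy_incoming([0, 1, 2], 0, lambda h, d: scores.get((h, d), 0.0)))
```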
highest scoring is mentioned in 3 sentences in this paper.