Selectional branching | While generating T1, the parser adds tuples (s1j, p2j), ..., (s1j, pkj) to a list λ for each low-confidence prediction p1j given s1j. Then, new transition sequences are generated by using the b highest scoring predictions in λ, where b is the beam size.
Selectional branching | Once all transition sequences are generated, a parse tree is built from the sequence with the highest score.
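The branching step above can be sketched as follows; the helper name, the (label, score) prediction format, and the fixed score margin used to flag low-confidence states are assumptions for illustration, not the paper's exact formulation.

```python
import heapq

def branch_points(predictions, b, margin=1.0):
    """Sketch of selectional branching: collect alternative predictions
    at low-confidence states of the one-best sequence, keep the b best.

    predictions[j] is a score-sorted list of (label, score) pairs for
    the j-th state; b is the beam size."""
    lam = []  # the list of candidate branch points (the paper's lambda)
    for j, preds in enumerate(predictions):
        best_score = preds[0][1]
        for label, score in preds[1:]:
            # a prediction is "low confidence" if it is close to the best
            if best_score - score < margin:
                lam.append((score, j, label))
    # only the b highest-scoring branch points spawn new sequences
    return heapq.nlargest(b, lam)
```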
Selectional branching | For each parsing state sij, a prediction is made by generating a feature vector xij ∈ X, feeding it into a classifier C1 that uses a feature map Φ(x, y) and a weight vector w to measure a score for each label y ∈ Y, and choosing the label with the highest score.
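A minimal version of that scoring step, with a toy feature map standing in for the paper's Φ(x, y):

```python
def predict(x, labels, w, feature_map):
    """Score each label y as w . feature_map(x, y); return the argmax."""
    scores = {y: sum(wi * fi for wi, fi in zip(w, feature_map(x, y)))
              for y in labels}
    return max(scores, key=scores.get)
```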
Introduction | This max-margin, structure-prediction objective (Taskar et al., 2004; Ratliff et al., 2007; Socher et al., 2011b) trains the CVG so that the highest scoring tree will be the correct tree, g(xi) = yi, and so that its score is larger, up to a margin, than that of any other possible tree ŷ ∈ Y(xi):
Introduction | Intuitively, to minimize this objective, the score of the correct tree yi is increased and the score of the highest scoring incorrect tree ŷ is decreased.
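In code, the per-example loss that this objective minimizes can be sketched as a structured hinge; tree scores are given as a plain dict here, and the structured loss Δ is collapsed into a constant margin, both of which are simplifying assumptions.

```python
def margin_loss(tree_scores, gold, margin=1.0):
    """Structured hinge loss: the gold tree's score must exceed every
    other tree's score by at least `margin`, otherwise we pay the gap."""
    best_wrong = max(s for t, s in tree_scores.items() if t != gold)
    return max(0.0, best_wrong + margin - tree_scores[gold])
```

Minimizing this loss pushes the gold tree's score up and the best wrong tree's score down, exactly the intuition stated above.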
Introduction | This score will be used to find the highest scoring tree. |
Learning | We use a hidden-variable version of the perceptron algorithm (Collins, 2002), where the model parameters are updated using the highest scoring derivation y* that generates the correct query z using the learned lexicon L.
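The selection of y* can be sketched as follows; the (score, is_correct, derivation) triple format is an assumption made for illustration.

```python
def best_correct_derivation(derivations):
    """Return y*: the highest-scoring derivation among those that yield
    the correct query, or None if no derivation is correct."""
    correct = [(score, d) for score, ok, d in derivations if ok]
    return max(correct)[1] if correct else None
```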
Question Answering Model | For the end-to-end QA task, we return a ranked list of answers from the k highest scoring queries. |
Question Answering Model | We score an answer a with the highest score of all derivations that generate a query with answer a. |
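The two sentences above amount to a max-reduction over derivation scores followed by a top-k sort; the (answer, score) pair format is an assumption.

```python
from collections import defaultdict

def rank_answers(derivations, k):
    """Rank answers for the end-to-end QA task: an answer's score is
    the max over all derivations producing it; return the k best.

    derivations: iterable of (answer, score) pairs."""
    best = defaultdict(lambda: float("-inf"))
    for answer, score in derivations:
        best[answer] = max(best[answer], score)
    return sorted(best, key=best.get, reverse=True)[:k]
```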
Experiments and Results | For the margin-based algorithm and Theorem 1 from Srikumar et al. (2012), for a new inference problem p ∼ [P], we retrieve all inference problems from the database that belong to the same equivalence class [P] as the test problem p and find the cached assignment y that has the highest score according to the coefficients of p. We only consider cached ILPs whose solution is y when checking the conditions of the theorem.
Introduction | For a new inference problem, if this margin is larger than the sum of the decrease in the score of the previous prediction and any increase in the score of the second best one, then the previous solution will be the highest scoring one for the new problem. |
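The reuse condition just described amounts to a one-line check; the variable names below are ours, not the paper's.

```python
def cached_solution_still_optimal(old_margin, score_drop_of_best,
                                  score_rise_of_second):
    """The cached solution remains the highest-scoring one for the new
    problem if its old margin over the runner-up absorbs both how much
    its own score can drop and how much the runner-up's can rise."""
    return old_margin >= score_drop_of_best + score_rise_of_second
```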
Problem Definition and Notation | The goal of inference is to find the highest scoring global assignment of the variables from a feasible set of assignments, which is defined by linear inequalities. |
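For tiny problems, the inference just defined can be brute-forced over 0/1 assignments; real systems use an ILP solver, and the constraint encoding below is an assumption made for illustration.

```python
from itertools import product

def best_assignment(coeffs, constraints):
    """Highest-scoring boolean assignment subject to linear inequalities.

    coeffs: objective coefficient per variable.
    constraints: list of (row, bound) meaning row . y <= bound."""
    best, best_score = None, float("-inf")
    for y in product((0, 1), repeat=len(coeffs)):
        if all(sum(a * v for a, v in zip(row, y)) <= bound
               for row, bound in constraints):
            score = sum(c * v for c, v in zip(coeffs, y))
            if score > best_score:
                best, best_score = y, score
    return best, best_score
```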
Dependency Parser | Then the algorithm uses the function score() to evaluate all transitions that can be applied under the current configuration c = (σ, β, A), and it applies the transition with the highest score, updating the current configuration.
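That greedy loop can be sketched as follows; the helper signatures are assumptions, with the configuration, legal-transition oracle, and scorer all passed in as callables.

```python
def greedy_parse(config, legal_transitions, apply_transition, score):
    """Repeatedly apply the highest-scoring legal transition until no
    transition applies, then return the final configuration."""
    while legal_transitions(config):
        t = max(legal_transitions(config), key=lambda t: score(config, t))
        config = apply_transition(config, t)
    return config
```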
Model and Training | Algorithm 2 parses the input following Algorithm 1, using the current weight vector, until the highest scoring selected transition bestT is incorrect according to the gold annotation.
Model and Training | When this happens, the weight vector is updated by decreasing the weights of the features associated with the incorrect bestT and by increasing the weights of the features associated with bestCorrectT, the transition with the highest score among all possible correct transitions.
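This error-driven update is a standard perceptron step; a minimal sketch, assuming features are hashable keys in a sparse weight dict:

```python
def perceptron_update(weights, feats_incorrect_best, feats_best_correct,
                      lr=1.0):
    """On an error, demote the features of the incorrect bestT and
    promote those of bestCorrectT (a sketch of the update above)."""
    for f in feats_incorrect_best:
        weights[f] = weights.get(f, 0.0) - lr
    for f in feats_best_correct:
        weights[f] = weights.get(f, 0.0) + lr
    return weights
```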