Abstract | Much of the recent work on dependency parsing has focused on solving the inherent combinatorial problems associated with rich scoring functions.
Abstract | In contrast, we demonstrate that highly expressive scoring functions can be used with substantially simpler inference procedures. |
Introduction | Dependency parsing is commonly cast as a maximization problem over a parameterized scoring function.
Introduction | In this view, the use of more expressive scoring functions makes the combinatorial problem of finding the maximizing parse more challenging.
Introduction | We depart from this view and instead focus on using highly expressive scoring functions with substantially simpler inference procedures. |
Boosting-style algorithm | The predictor returned by our boosting algorithm is based on a scoring function $h\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ which, as for standard ensemble algorithms such as AdaBoost, is a convex combination of base scoring functions $h_t$: $h = \sum_{t=1}^{T} \alpha_t h_t$, with $\alpha_t \geq 0$.
Boosting-style algorithm | The base scoring functions used in our algorithm have the form |
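Boosting-style algorithm | The display equation itself was lost in extraction; a plausible reconstruction from the description in the next sentence, in which the indicator notation and the sequence length $\ell$ are assumptions rather than the source's own symbols: $h_t(x, y) = \sum_{k=1}^{\ell} \mathbb{1}\big[y_k = h_t(x)_k\big]$, where $h_t(x)_k$ is the $k$-th position of the path expert's predicted sequence (overloading $h_t$ for both the path expert and its induced base scoring function, as the extracted text does).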
Boosting-style algorithm | Thus, the score assigned to $y$ by the base scoring function $h_t$ is the number of positions at which $y$ matches the prediction of path expert $h_t$ given input $x$. The predictor is defined as follows in terms of $h$ or the $h_t$'s:
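Boosting-style algorithm | The definition that should follow was also dropped; the standard predictor for a scoring-function approach, stated here as an assumption consistent with the surrounding text, is $\mathsf{H}(x) = \operatorname*{argmax}_{y \in \mathcal{Y}} h(x, y) = \operatorname*{argmax}_{y \in \mathcal{Y}} \sum_{t=1}^{T} \alpha_t h_t(x, y)$, with $\mathsf{H}$ standing in for the predictor whose original symbol was garbled.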
Online learning approach | A collection of distributions $\mathcal{P}$ can also be used to define a deterministic prediction rule based on the scoring function approach.
Online learning approach | The majority vote scoring function is defined by |
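Online learning approach | The defining equation was lost in extraction; a hedged sketch, assuming $\mathcal{P} = \{p_1, \dots, p_T\}$ is a collection of distributions over path experts and $\ell$ is the output length (both notational assumptions): $h_{\mathrm{MVote}}(x, y) = \sum_{k=1}^{\ell} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{h \sim p_t}\big[\mathbb{1}[h(x)_k = y_k]\big]$, i.e., the average probability mass placed on path experts that agree with $y$ at each position.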
Structured Taxonomy Induction | Each factor $F$ has an associated scoring function $\phi_F$, with the probability of a total assignment determined by the product of all these scores:
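Structured Taxonomy Induction | The product formula was dropped in extraction; the standard factor-graph form consistent with the sentence above (notation assumed) is $p(\mathbf{y}) \propto \prod_{F} \phi_F(\mathbf{y}_F)$, where $\mathbf{y}_F$ denotes the sub-assignment of variables attached to factor $F$.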
Structured Taxonomy Induction | We score each edge by extracting a set of features $f(x_i, x_j)$ and weighting them by the (learned) weight vector $w$. So, the factor scoring function is:
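Structured Taxonomy Induction | The formula itself is missing; a common log-linear form consistent with the description, offered as an assumption rather than the paper's exact equation: $\phi_{ij}(y_{ij}) = \exp\big(y_{ij} \cdot w^{\top} f(x_i, x_j)\big)$, so an edge that is off ($y_{ij} = 0$) contributes a score of $1$, while an edge that is on contributes $e^{w^{\top} f(x_i, x_j)}$.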
Structured Taxonomy Induction | The scoring function is similar to the one above: |
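Structured Taxonomy Induction | The referenced equation is also missing; by analogy with the edge factor above, it would plausibly take the same log-linear form over a different, factor-specific feature set, e.g. $\phi(y) = \exp\big(y \cdot w^{\top} f'(\cdot)\big)$; the feature function $f'$ and its arguments are assumptions, since the source does not specify them.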