Index of papers in Proc. ACL 2014 that mention

**scoring function**

Abstract | Much of the recent work on dependency parsing has been focused on solving inherent combinatorial problems associated with rich scoring functions. |

Abstract | In contrast, we demonstrate that highly expressive scoring functions can be used with substantially simpler inference procedures. |

Introduction | Dependency parsing is commonly cast as a maximization problem over a parameterized scoring function. |

Introduction | In this view, the use of more expressive scoring functions leads to more challenging combinatorial problems of finding the maximizing parse. |
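The view in the excerpts above can be sketched in a few lines: parsing as an argmax over a parameterized scoring function. This is a minimal illustration with hypothetical feature names and a toy candidate set, not code from the paper; a linear score w · f(y) is assumed for concreteness.

```python
# Minimal sketch: inference as argmax of a parameterized scoring function.
# Feature names and candidates are illustrative, not from the paper.

def score(parse_features, weights):
    """Linear scoring function: score(y) = w . f(y)."""
    return sum(weights.get(feat, 0.0) * val for feat, val in parse_features.items())

def predict(candidate_parses, weights):
    """Return the candidate parse that maximizes the scoring function."""
    return max(candidate_parses, key=lambda f: score(f, weights))

# Toy example: two candidate parses, each represented by its feature counts.
weights = {"head-left": 1.0, "head-right": -0.5}
candidates = [{"head-left": 2}, {"head-right": 3}]
best = predict(candidates, weights)
```

With richer, higher-order features this argmax becomes the hard combinatorial problem the abstract refers to; the enumeration here only works because the toy candidate set is tiny.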

Introduction | We depart from this view and instead focus on using highly expressive scoring functions with substantially simpler inference procedures. |

scoring function is mentioned in 29 sentences in this paper.

Topics mentioned in this paper:

- scoring function (29)
- POS tags (19)
- reranker (16)

Boosting-style algorithm | The predictor returned by our boosting algorithm is based on a scoring function h: X × Y → ℝ which, as for standard ensemble algorithms such as AdaBoost, is a convex combination of base scoring functions h_t: h = Σ_{t=1}^T α_t h_t, with α_t ≥ 0. |

Boosting-style algorithm | The base scoring functions used in our algorithm have the form |

Boosting-style algorithm | Thus, the score assigned to y by the base scoring function h_t is the number of positions at which y matches the prediction of path expert h_t given input x. The predictor is defined as follows in terms of h or the h_t's: |
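The two excerpts above can be combined into a short sketch: each base scorer h_t counts the positions where y agrees with expert t's prediction, and the ensemble score is a weighted sum with nonnegative coefficients α_t. The expert predictions and labels below are stubbed placeholders, not the paper's actual path experts.

```python
# Hedged sketch of the boosting-style scoring function from the excerpts:
# h(x, y) = sum_t alpha_t * h_t(x, y), alpha_t >= 0, where h_t(x, y) counts
# positions at which y matches the prediction of path expert t on input x.
# Expert outputs here are hard-coded stand-ins.

def base_score(expert_prediction, y):
    """h_t(x, y): number of positions where y matches the expert's output."""
    return sum(1 for a, b in zip(expert_prediction, y) if a == b)

def ensemble_score(expert_predictions, alphas, y):
    """h(x, y) = sum_t alpha_t * h_t(x, y): a convex combination of base scorers."""
    return sum(a * base_score(p, y) for a, p in zip(alphas, expert_predictions))

preds = [["DET", "NOUN", "VERB"], ["DET", "ADJ", "VERB"]]
alphas = [0.7, 0.3]
s = ensemble_score(preds, alphas, ["DET", "NOUN", "VERB"])  # 0.7*3 + 0.3*2
```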

Online learning approach | A collection of distributions 1P can also be used to define a deterministic prediction rule based on the scoring function approach. |

Online learning approach | The majority vote scoring function is defined by |
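Since the excerpt only names the majority-vote scoring function without giving its equation, here is one plausible form as a sketch: score a sequence y by summing, at each position, the total weight of experts whose prediction agrees with y there. The weighting scheme and names are assumptions, not the paper's definition.

```python
# Assumed form of a majority-vote scoring function over weighted experts:
# score(y) = sum over positions i of the total weight of experts whose
# prediction at position i equals y[i]. Illustrative only.

def majority_vote_score(expert_predictions, weights, y):
    """Sum per-position agreement weight between y and each expert's prediction."""
    total = 0.0
    for i, label in enumerate(y):
        total += sum(w for p, w in zip(expert_predictions, weights) if p[i] == label)
    return total

preds = [["a", "b"], ["a", "c"]]
weights = [0.6, 0.4]
s = majority_vote_score(preds, weights, ["a", "b"])  # 1.0 at pos 0, 0.6 at pos 1
```

A deterministic prediction rule then picks, at each position (or over whole sequences), the labeling with the highest such score.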

scoring function is mentioned in 8 sentences in this paper.

Topics mentioned in this paper:

- loss function (10)
- scoring function (8)
- machine learning (3)

Structured Taxonomy Induction | Each factor F has an associated scoring function W, with the probability of a total assignment determined by the product of all these scores: |

Structured Taxonomy Induction | We score each edge by extracting a set of features f(x_i, x_j) and weighting them by the (learned) weight vector w. So, the factor scoring function is: |
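The two excerpts above describe a factor-graph setup: each edge factor scores features f(x_i, x_j) under a learned weight vector w, and the probability of a total assignment is the product of all factor scores. This sketch assumes an exponentiated linear score exp(w · f) so that products stay positive; the feature names are hypothetical.

```python
import math

# Sketch of the edge-factor scoring described above: extract features
# f(x_i, x_j), weight them by learned vector w, and exponentiate so the
# product over factors gives an unnormalized probability. Illustrative only.

def factor_score(edge_features, w):
    """phi(x_i, x_j) = exp(w . f(x_i, x_j))."""
    dot = sum(w.get(k, 0.0) * v for k, v in edge_features.items())
    return math.exp(dot)

def unnormalized_prob(edges, w):
    """Product of all edge-factor scores for a total assignment."""
    p = 1.0
    for f in edges:
        p *= factor_score(f, w)
    return p

w = {"hypernym-suffix": 1.0}
edges = [{"hypernym-suffix": 1}, {"hypernym-suffix": 2}]
p = unnormalized_prob(edges, w)  # exp(1) * exp(2) = exp(3)
```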

Structured Taxonomy Induction | The scoring function is similar to the one above: |

scoring function is mentioned in 5 sentences in this paper.

Topics mentioned in this paper:

- WordNet (18)
- spanning tree (10)
- n-grams (8)