Abstract | One of the main obstacles to producing high-quality joint models is the lack of jointly annotated data.
Abstract | Joint modeling of multiple natural language processing tasks outperforms single-task models learned from the same data, but still underperforms single-task models trained on the more abundant single-task annotated data that is available.
Abstract | In this paper we present a novel model which makes use of additional single-task annotated data to improve the performance of a joint model.
Introduction | Joint models can be particularly useful for producing analyses of sentences which are used as input for higher-level, more semantically-oriented systems, such as question answering and machine translation. |
Introduction | However, designing joint models which actually improve performance has proven challenging. |
Introduction | There have been some recent successes with joint modeling.
A Distributional Model for Argument Classification | 3.2 A Joint Model for Argument Classification |
Related Work | It incorporates strong dependencies within a comprehensive statistical joint model with a rich set of features over multiple argument phrases. |
Related Work | First, local models are applied to produce role labels for individual arguments; then the joint model is used to select the entire argument sequence from among the n-best competing solutions.
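The two-stage pipeline described above (local per-argument scoring followed by joint reranking of the n-best complete sequences) can be sketched as follows. The label set, the toy local scores, and the duplicate-core-role penalty are illustrative assumptions for the sketch, not the cited paper's actual model or features.

```python
from itertools import product

# Toy role-label set (assumed for illustration).
LABELS = ["ARG0", "ARG1", "ARGM"]

def local_scores(argument):
    """Per-argument label scores from a hypothetical local classifier."""
    # Fixed toy scores keyed on a surface cue; a real model would be trained.
    if argument.endswith("dog"):
        return {"ARG0": 0.7, "ARG1": 0.2, "ARGM": 0.1}
    return {"ARG0": 0.2, "ARG1": 0.6, "ARGM": 0.2}

def n_best_sequences(arguments, n=5):
    """Enumerate complete label sequences, ranked by product of local scores."""
    seqs = []
    for labels in product(LABELS, repeat=len(arguments)):
        score = 1.0
        for arg, lab in zip(arguments, labels):
            score *= local_scores(arg)[lab]
        seqs.append((score, labels))
    seqs.sort(reverse=True)
    return seqs[:n]

def joint_rescore(labels):
    """Toy joint-model factor: penalize duplicate core roles in one sequence."""
    core = [l for l in labels if l in ("ARG0", "ARG1")]
    return 0.1 if len(core) != len(set(core)) else 1.0

def rerank(arguments, n=5):
    """Pick the best complete sequence after joint rescoring of the n-best."""
    candidates = n_best_sequences(arguments, n)
    return max(candidates, key=lambda sc: sc[0] * joint_rescore(sc[1]))[1]
```

For example, when the locally best labeling assigns the same core role to two arguments, the joint factor demotes it and a consistent sequence from the n-best list wins instead.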
Previous Work | It also focuses on jointly modeling the generation of both predicate and argument, and evaluation is performed on a set of human-plausibility judgments, obtaining impressive results against Keller and Lapata's (2003) Web hit-count-based system.
Topic Models for Selectional Prefs. | One weakness of IndependentLDA is that it does not model a1 and a2 jointly.
Topic Models for Selectional Prefs. | In contrast, JointLDA jointly models the generation of both arguments in an extracted tuple.
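The contrast between the two generative stories can be sketched as follows. The toy topics and word lists are assumptions for illustration; in the actual models the topics are learned distributions, not fixed lists. The point is structural: IndependentLDA draws a separate topic for each argument slot, while JointLDA draws one topic per tuple that governs both arguments.

```python
import random

# Toy topics over argument words (illustrative, not learned topics).
TOPICS = {
    "food":   ["pizza", "bread", "soup"],
    "animal": ["dog", "cat", "horse"],
}
TOPIC_NAMES = list(TOPICS)

def topic_of(word):
    """Recover which toy topic a word belongs to."""
    return next(t for t, words in TOPICS.items() if word in words)

def independent_lda(rng):
    """IndependentLDA-style story: each argument slot draws its own topic."""
    z1, z2 = rng.choice(TOPIC_NAMES), rng.choice(TOPIC_NAMES)
    return rng.choice(TOPICS[z1]), rng.choice(TOPICS[z2])

def joint_lda(rng):
    """JointLDA-style story: one topic draw governs both arguments."""
    z = rng.choice(TOPIC_NAMES)
    return rng.choice(TOPICS[z]), rng.choice(TOPICS[z])
```

Under joint_lda the two arguments of a tuple always share a topic, capturing the correlation between a1 and a2 that the independent model misses.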