Abstract | This model leverages the CCG combinatory operators to guide a nonlinear transformation of meaning within a sentence. |
Background | In this paper we focus on CCG, a linguistically expressive yet computationally efficient grammar formalism. |
Background | CCG relies on combinatory logic (as opposed to lambda calculus) to build its expressions. |
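As an illustration of this combinatory-logic view (a minimal sketch, not code from any of the cited papers; the toy lexicon is invented), CCG's core combinators can be modelled directly as higher-order functions:

```python
# Sketch of CCG's core combinators as higher-order functions.
# The toy "sleeps" semantics below is invented for illustration.

def forward_apply(f, a):
    """Forward application (>): X/Y  Y  =>  X."""
    return f(a)

def backward_apply(a, f):
    """Backward application (<): Y  X\\Y  =>  X."""
    return f(a)

def forward_compose(f, g):
    """Forward composition (>B): X/Y  Y/Z  =>  X/Z."""
    return lambda x: f(g(x))

# Toy semantics: an intransitive verb maps an entity to a proposition.
sleeps = lambda x: f"sleep({x})"
print(backward_apply("john", sleeps))  # sleep(john)
```

Because every combinator is itself a function, composed categories stay first-class values that can be combined again later in the derivation.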
Background | CCG has been described as having a transparent surface between the syntactic and the semantic representations. |
Introduction | Work in this field includes the Combinatory Categorial Grammar (CCG), which also places increased emphasis on syntactic coverage (Szabolcsi, 1989). |
Introduction | We achieve this goal by employing the CCG formalism to consider compositional structures at any point in a parse tree. |
Introduction | CCG is attractive both for its transparent interface between syntax and semantics, and for its small but powerful set of combinatory operators, with which we can parametrise our nonlinear transformations of compositional meaning. |
Abstract | This paper describes a method of inducing wide-coverage CCG resources for Japanese. |
Abstract | Our method first integrates multiple dependency-based corpora into phrase structure trees and then converts the trees into CCG derivations. |
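The first of those two steps can be caricatured in a few lines. The sketch below (a hypothetical illustration; the attachment policy and representation are invented, not the paper's actual conversion procedure) attaches each dependent under its head in surface order, binarising as it goes:

```python
# Hypothetical sketch: turn a word-level dependency analysis into a
# binary phrase-structure tree. The left-before-right attachment policy
# is invented for illustration, not taken from the paper.

def deps_to_tree(words, heads):
    """words: list of tokens; heads[i] = index of token i's head, -1 for root."""
    children = {i: [] for i in range(len(words))}
    root = None
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def build(i):
        node = words[i]
        # Attach dependents in surface order, binarising as we go.
        for d in sorted(children[i]):
            sub = build(d)
            node = (sub, node) if d < i else (node, sub)
        return node

    return build(root)

# "John ate apples": both arguments depend on "ate".
print(deps_to_tree(["John", "ate", "apples"], [1, -1, 1]))
# (('John', 'ate'), 'apples')
```

Real conversion must additionally decide constituent labels and handle chunk-based (rather than word-based) dependencies, which is exactly where the difficulty noted below arises.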
Introduction | combinatory categorial grammar ( CCG ) (Steedman, 2001). |
Introduction | Our work is essentially an extension of the seminal work on CCGbank (Hockenmaier and Steedman, 2007), in which the phrase structure trees of the Penn Treebank (PTB) (Marcus et al., 1993) are converted into CCG derivations and a wide-coverage CCG lexicon is then extracted from these derivations. |
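The lexicon-extraction step can be sketched as a walk over derivation trees that records the category assigned to each word at a leaf (a minimal sketch under an assumed tree representation; the field names `word`, `cat`, and `children` are invented):

```python
from collections import defaultdict

# Assumed representation: a derivation node is a dict; leaves carry
# {"word": ..., "cat": ...}, internal nodes {"cat": ..., "children": [...]}.
def extract_lexicon(derivations):
    """Collect a word -> set-of-CCG-categories lexicon from derivation leaves."""
    lexicon = defaultdict(set)

    def walk(node):
        if "word" in node:
            lexicon[node["word"]].add(node["cat"])
        else:
            for child in node["children"]:
                walk(child)

    for d in derivations:
        walk(d)
    return lexicon

toy = {"cat": "S", "children": [
    {"word": "John", "cat": "NP"},
    {"word": "sleeps", "cat": "S\\NP"},
]}
print(dict(extract_lexicon([toy])))
```

Running the extractor over a whole converted treebank, rather than one toy derivation, is what yields a wide-coverage lexicon.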
Introduction | Moreover, the relation between chunk-based dependency structures and CCG derivations is not obvious. |
Abstract | We are interested in parsing constituency-based grammars such as HPSG and CCG using a small amount of data specific to the target formalism, and a large quantity of coarse CFG annotations from the Penn Treebank. |
Abstract | We evaluate our approach on three constituency-based grammars — CCG, HPSG, and LFG — augmented with the Penn Treebank. |
Introduction | A natural candidate for such coarse annotations is context-free grammar (CFG) from the Penn Treebank, while the target formalism can be any constituency-based grammar, such as Combinatory Categorial Grammar (CCG) (Steedman, 2001), Lexical Functional Grammar (LFG) (Bresnan, 1982), or Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994). |
Introduction | We evaluate our approach on three constituency-based grammars — CCG, HPSG, and LFG. |
Related Work | There have been several attempts to map annotations in coarse grammars like CFG to annotations in richer grammars like HPSG, LFG, or CCG. |
Related Work | For instance, Hockenmaier and Steedman (2002) made thousands of POS and constituent modifications to the Penn Treebank to facilitate transfer to CCG . |
The Learning Problem | Recall that our goal is to learn how to parse the target formalisms while using two annotated sources: a small set of sentences annotated in the target formalism (e.g., CCG), and a large set of sentences with coarse annotations. |
The Learning Problem | For simplicity we focus on the CCG formalism in what follows. |
Extending a Semantic Parser Using a Schema Alignment | Using a fixed CCG grammar and a procedure based on unification in second-order logic, UBL learns from the training data a lexicon A that includes entries like: |
Extending a Semantic Parser Using a Schema Alignment | Example CCG Grammar Rules |
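The entries themselves do not survive in this excerpt. Purely for illustration (the specific entries and the string-matching rule below are invented, not UBL's actual lexicon or implementation), such entries pair a word with a syntactic category and a logical form, and an application rule combines adjacent categories:

```python
# Illustrative CCG lexical entries: (word, syntactic category, logical form).
# These entries are invented for the sketch, not taken from UBL's output.
LEXICON = [
    ("Texas",   "NP",         "texas"),
    ("borders", "(S\\NP)/NP", "lambda y. lambda x. borders(x, y)"),
]

def forward_application(left_cat, right_cat):
    """Forward application (>) on category strings: X/Y  Y  =>  X."""
    if left_cat.endswith("/" + right_cat):
        return left_cat[: -len("/" + right_cat)]
    return None

print(forward_application("(S\\NP)/NP", "NP"))  # prints (S\NP)
```

A parser applies such rules bottom-up over all adjacent spans; the learned lexicon supplies the categories at the leaves.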
Previous Work | This technique does require manual specification of rules that construct CCG lexical entries from dependency parses. |
Previous Work | In comparison, we fully automate the process of constructing CCG lexical entries for the semantic parser by making it a prediction task. |