Index of papers in Proc. ACL 2014 that mention
  • CCG
Krishnamurthy, Jayant and Mitchell, Tom M.
Introduction
Syntactic information is provided by CCGbank, a conversion of the Penn Treebank into the CCG formalism (Hockenmaier and Steedman, 2002a).
Parser Design
This section describes the Combinatory Categorial Grammar (CCG) parsing model used by ASP.
Parser Design
The input to the parser is a part-of-speech tagged sentence, and the output is a syntactic CCG parse tree, along with zero or more logical forms representing the semantics of subspans of the sentence.
Parser Design
ASP uses a lexicalized and semantically-typed Combinatory Categorial Grammar (CCG) (Steedman, 1996).
Prior Work
This paper combines two lines of prior work: broad coverage syntactic parsing with CCG and semantic parsing.
Prior Work
Broad coverage syntactic parsing with CCG has produced both resources and successful parsers.
Prior Work
These parsers are trained and evaluated using CCGbank (Hockenmaier and Steedman, 2002a), an automatic conversion of the Penn Treebank into the CCG formalism.
CCG is mentioned in 15 sentences in this paper.
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke
Detection
Algorithm The detection algorithm considers all phrases that our CCG grammar A (Section 4) can parse, uses a learned classifier to further filter this set, and finally resolves conflicts between any overlapping predictions.
Formal Overview
We use a log-linear CCG (Steedman, 1996; Clark and Curran, 2007) to rank possible meanings z ∈ Z for each mention m in a document D, as described in Section 4.
Introduction
For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described.
Introduction
For the relatively closed-class time expressions, we demonstrate that it is possible to engineer a high quality CCG lexicon.
Parsing Time Expressions
First, we use a CCG to generate an initial logical form for the mention.
Parsing Time Expressions
Figure 1: A CCG parse tree for the mention “one week ago.” The tree includes forward (>) and backward (<) application, as well as two type-shifting operations
Parsing Time Expressions
CCG is a linguistically motivated categorial formalism for modeling a wide range of language phenomena (Steedman, 1996; Steedman, 2000).
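The application combinators named in the snippets above can be illustrated directly. The following minimal Python sketch shows forward (>) and backward (<) application for the mention “one week ago”; the category assignments are illustrative assumptions for this sketch, not the paper’s hand-engineered lexicon (which also uses type shifting):

```python
# Categories as tuples: ("atom", name) for atomic categories,
# ("/", result, argument) for X/Y, ("\\", result, argument) for X\Y.

def forward(fn, arg):
    """Forward application (>): X/Y applied to Y yields X."""
    op, result, expected = fn
    assert op == "/" and expected == arg, "forward application fails"
    return result

def backward(arg, fn):
    """Backward application (<): Y followed by X\\Y yields X."""
    op, result, expected = fn
    assert op == "\\" and expected == arg, "backward application fails"
    return result

NP, N, S = ("atom", "NP"), ("atom", "N"), ("atom", "S")

# Illustrative lexicon for "one week ago" (an assumption for this
# sketch; the paper's grammar handles "ago" via type shifting).
lexicon = {
    "one":  ("/", NP, N),    # NP/N
    "week": N,               # N
    "ago":  ("\\", S, NP),   # S\NP (simplified)
}

np = forward(lexicon["one"], lexicon["week"])  # "one week" -> NP
s = backward(np, lexicon["ago"])               # "one week ago" -> S
```

Each combinator is a pure function from categories to a category, so a chart parser over such a lexicon only needs to try each combinator on adjacent spans.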
CCG is mentioned in 19 sentences in this paper.
Xu, Wenduan and Clark, Stephen and Zhang, Yue
Abstract
This paper presents the first dependency model for a shift-reduce CCG parser.
Abstract
Modelling dependencies is desirable for a number of reasons, including handling the “spurious” ambiguity of CCG; fitting well with the theory of CCG; and optimizing for structures which are evaluated at test time.
Abstract
Standard CCGBank tests show the model achieves up to 1.05 labeled F-score improvements over three existing, competitive CCG parsing models.
Introduction
Combinatory Categorial Grammar (CCG; Steedman (2000)) is able to derive typed dependency structures (Hockenmaier, 2003; Clark and Curran, 2007), providing a useful approximation to the underlying predicate-argument relations of “who did what to whom”.
Introduction
To date, CCG remains the most competitive formalism for recovering “deep” dependencies arising from many linguistic phenomena such as raising, control, extraction and coordination (Rimell et al., 2009; Nivre et al., 2010).
Introduction
To achieve its expressiveness, CCG exhibits so-called “spurious” ambiguity, permitting many nonstandard surface derivations which ease the recovery of certain dependencies, especially those arising from type-raising and composition.
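The “spurious” ambiguity described above can be made concrete: forward composition lets the same category be derived by more than one surface derivation. A minimal sketch, assuming abstract categories X, Y, Z rather than any lexicon from the paper:

```python
# Categories as tuples: ("atom", name) or ("/", result, argument).

def apply_fwd(f, a):
    """Forward application (>): X/Y  Y  =>  X."""
    op, result, argument = f
    assert op == "/" and argument == a, "application fails"
    return result

def compose_fwd(f, g):
    """Forward composition (>B): X/Y  Y/Z  =>  X/Z."""
    assert f[0] == "/" and g[0] == "/" and f[2] == g[1], "composition fails"
    return ("/", f[1], g[2])

X, Y, Z = ("atom", "X"), ("atom", "Y"), ("atom", "Z")
f = ("/", X, Y)  # X/Y
g = ("/", Y, Z)  # Y/Z
c = Z

# Derivation 1: right-branching, application only.
d1 = apply_fwd(f, apply_fwd(g, c))
# Derivation 2: compose the functors first, then apply.
d2 = apply_fwd(compose_fwd(f, g), c)
assert d1 == d2 == X  # same category, two distinct derivations
```

Both derivations yield X, which is exactly why a dependency model (rather than a derivation model) avoids splitting probability mass across equivalent analyses.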
CCG is mentioned in 31 sentences in this paper.
Sun, Weiwei and Du, Yantao and Kou, Xin and Ding, Shuoyang and Wan, Xiaojun
Experiments
This difference is consistent with the result obtained by a shift-reduce CCG parser (Zhang and Clark, 2011).
Experiments
CCG and HPSG parsers also favor the dependency-based metrics for evaluation (Clark and Curran, 2007b; Miyao and Tsujii, 2008).
Experiments
Previous work on Chinese CCG and HPSG parsing unanimously agrees that obtaining the deep analysis of Chinese is more challenging (Yu et al., 2011; Tse and Curran, 2012).
Related Work
Deep grammar formalisms, such as CCG, HPSG, LFG and TAG, provide valuable, richer linguistic information, and have thus drawn increasing attention from researchers.
Related Work
Phrase structure trees in CTB have been semiautomatically converted to deep derivations in the CCG (Tse and Curran, 2010), LFG (Guo et al., 2007), TAG (Xia, 2001) and HPSG (Yu et al., 2010) formalisms.
CCG is mentioned in 5 sentences in this paper.
Duan, Manjuan and White, Michael
Analysis and Discussion
Given that with the more refined SVM ranker, the Berkeley parser worked nearly as well as all three parsers together using the complete feature set, the prospects for future work on a more realistic scenario using the OpenCCG parser in an SVM ranker for self-monitoring now appear much more promising, either using OpenCCG’s reimplementation of Hockenmaier & Steedman’s generative CCG model, or using the Berkeley parser trained on OpenCCG’s enhanced version of the CCGbank, along the lines of Fowler and Penn (2010).
Background
In the figure, nodes correspond to discourse referents labeled with lexical predicates, and dependency relations between nodes encode argument structure (gold standard CCG lexical categories are also shown); note that semantically empty function words such as infinitival-to are missing.
Background
The model takes as its starting point two probabilistic models of syntax that have been developed for CCG parsing, Hockenmaier & Steedman’s (2002) generative model and Clark & Curran’s (2007) normal-form model.
Related Work
Approaches to surface realization have been developed for LFG, HPSG, and TAG, in addition to CCG, and recently statistical dependency-based approaches have been developed as well; see the report from the first surface realization shared task.
CCG is mentioned in 4 sentences in this paper.
Gormley, Matthew R. and Mitchell, Margaret and Van Durme, Benjamin and Dredze, Mark
Experiments
The comparison is imperfect for two reasons: first, the CCGBank contains only 99.44% of the original PTB sentences (Hockenmaier and Steedman, 2007); second, because PropBank was annotated over CFGs, after converting to CCG only 99.977% of the argument spans were exact matches (Boxwell and White, 2008).
Experiments
(2011) (B’11) uses additional supervision in the form of a CCG tag dictionary derived from supervised data with (tdc) and without (tc) a cutoff.
Related Work
(2011) describe a method for training a semantic role labeler by extracting features from a packed CCG parse chart, where the parse weights are given by a simple ruleset.
Related Work
(2011) require an oracle CCG tag dictionary extracted from a treebank.
CCG is mentioned in 4 sentences in this paper.