Index of papers in Proc. ACL 2012 that mention
  • tree kernels
Croce, Danilo and Moschitti, Alessandro and Basili, Roberto and Palmer, Martha
Abstract
Then, we design advanced similarity functions between such structures, i.e., semantic tree kernel functions, for exploiting distributional and grammatical information in Support Vector Machines.
Model Analysis and Discussion
available; and (ii) in 76% of the errors only two or fewer argument heads are included in the extracted tree, so tree kernels cannot exploit enough lexical information to disambiguate verb senses.
Model Analysis and Discussion
the capability of tree kernels to implicitly trigger useful linguistic inductions for complex semantic tasks.
Related work
Recently, DMs have also been proposed in integrated syntactic-semantic structures that feed advanced learning functions, such as the semantic tree kernels discussed in (Bloehdorn and Moschitti, 2007a; Bloehdorn and Moschitti, 2007b; Mehdad et al., 2010; Croce et al., 2011).
Structural Similarity Functions
In particular, we design new models for verb classification by adopting algorithms for structural similarity, known as Smoothed Partial Tree Kernels (SPTKs) (Croce et al., 2011).
Structural Similarity Functions
3.2 Tree Kernels Driven by Semantic Similarity
To our knowledge, two main types of tree kernels exploit lexical similarity: the syntactic semantic tree kernel defined in (Bloehdorn and Moschitti, 2007a), applied to constituency trees, and the smoothed partial tree kernels (SPTKs) defined in (Croce et al., 2011), which generalize the former.
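The smoothing idea described in this excerpt can be sketched as a node-similarity function that returns an exact 0/1 match for syntactic labels but a graded distributional similarity for lexical nodes. This is only an illustrative sketch: the toy vectors and the name sigma are assumptions, not from the paper, which uses corpus-derived distributional models.

```python
import math

# Toy distributional vectors; a real model would be corpus-derived.
VECTORS = {
    "eat":   [0.9, 0.1, 0.0],
    "drink": [0.8, 0.2, 0.1],
    "run":   [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sigma(label_a, label_b):
    """Node similarity: distributional for lexemes, exact match otherwise."""
    if label_a in VECTORS and label_b in VECTORS:
        return cosine(VECTORS[label_a], VECTORS[label_b])
    return 1.0 if label_a == label_b else 0.0

print(sigma("eat", "drink"))  # high: semantically related verbs
print(sigma("VB", "VB"))      # exact syntactic match
print(sigma("VB", "NN"))      # syntactic mismatch
```

Plugging such a sigma into the node-matching step of a partial tree kernel is what lets structurally identical trees with different but related lexemes still receive a high kernel score.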
Structural Similarity Functions
The Δ function determines the richness of the kernel space and thus induces different tree kernels, for example, the syntactic tree kernel (STK) (Collins and Duffy, 2002) or the partial tree kernel (PTK) (Moschitti, 2006).
Verb Classification Models
Here, we apply tree pruning to reduce the computational complexity of tree kernels, which is proportional to the number of nodes in the input trees.
Verb Classification Models
To encode dependency structure information in a tree (so that we can use it in tree kernels), we use (i) lexemes as nodes of our tree, (ii) their dependencies as edges between the nodes, and (iii) the dependency labels, e.g., grammatical functions (GR), and POS tags, again as tree nodes.
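The encoding described in this excerpt can be sketched as follows. This is a hedged illustration under stated assumptions: the class names (DepToken, Node, to_kernel_tree) and the exact node arrangement (each GR label dominating a POS node, which dominates the lexeme) are illustrative choices, not necessarily the paper's exact tree layout.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

    def to_s(self):
        # Bracketed notation of the kind commonly fed to tree-kernel toolkits
        if not self.children:
            return f"({self.label})"
        return f"({self.label} {' '.join(c.to_s() for c in self.children)})"

@dataclass
class DepToken:
    idx: int    # 1-based token index
    lemma: str  # lexeme
    pos: str    # POS tag
    head: int   # index of governor (0 = root)
    rel: str    # grammatical relation / dependency label

def to_kernel_tree(tokens):
    """Build a tree where each dependency label (GR) dominates a POS node,
    which dominates the lexeme; dependents attach under their head's GR node."""
    gr_nodes = {}
    for t in tokens:
        pos_node = Node(t.pos, [Node(t.lemma)])
        gr_nodes[t.idx] = Node(t.rel, [pos_node])
    root = None
    for t in tokens:
        if t.head == 0:
            root = gr_nodes[t.idx]
        else:
            gr_nodes[t.head].children.append(gr_nodes[t.idx])
    return root

# "John eats pizza": eat is the root, john its subject, pizza its object.
sent = [
    DepToken(1, "john", "NN", 2, "SBJ"),
    DepToken(2, "eat", "VB", 0, "ROOT"),
    DepToken(3, "pizza", "NN", 2, "OBJ"),
]
tree = to_kernel_tree(sent)
print(tree.to_s())
```

The resulting tree mixes GR labels, POS tags, and lexemes as nodes, so a tree kernel comparing two such trees can match grammatical structure and lexical content jointly.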
The term "tree kernels" is mentioned in 9 sentences in this paper.