Index of papers in Proc. ACL 2014 that mention
  • tree kernel
Lin, Chen and Miller, Timothy and Kho, Alvin and Bethard, Steven and Dligach, Dmitriy and Pradhan, Sameer and Savova, Guergana
Abstract
Convolution tree kernels are an efficient and effective method for comparing syntactic structures in NLP applications.
Abstract
However, current kernel methods such as the subset tree kernel and the partial tree kernel understate the similarity of very similar tree structures.
Background
2.1 Syntax-based Tree Kernels
Background
Syntax-based tree kernels quantify the similarity between two constituent parses by counting their common substructures.
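To make the "counting common substructures" idea concrete, here is a minimal Python sketch (an illustration, not this paper's kernel) that counts the one-level productions shared by two constituency parses encoded as nested tuples:

# Toy common-substructure count: shared one-level productions.
from collections import Counter

def productions(tree):
    """Yield (parent_label, child_labels) for every non-preterminal node."""
    label, children = tree[0], tree[1:]
    if all(isinstance(c, str) for c in children):  # preterminal node
        return
    yield (label, tuple(c[0] for c in children))
    for c in children:
        if not isinstance(c, str):
            yield from productions(c)

def common_production_count(t1, t2):
    """Dot product of the two trees' production-count vectors."""
    c1, c2 = Counter(productions(t1)), Counter(productions(t2))
    return sum(c1[p] * c2[p] for p in c1.keys() & c2.keys())

t1 = ("S", ("NP", ("DT", "the"), ("NN", "dog")), ("VP", ("VBZ", "barks")))
t2 = ("S", ("NP", ("DT", "a"), ("NN", "dog")), ("VP", ("VBZ", "runs")))
print(common_production_count(t1, t2))  # 3: all three productions match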
Background
The partial tree kernel (PTK) relaxes the definition of subtrees to allow partial production rule …
Introduction
Convolution kernels over syntactic trees (tree kernels) offer a potential solution to this problem by providing relatively efficient algorithms for computing similarities between entire discrete structures.
Introduction
However, conventional tree kernels are sensitive to pattern variations.
Introduction
For example, the two trees in Figure 1(a), which share the same structure except for one terminal symbol, are deemed at most 67% similar by the conventional tree kernel (PTK) (Moschitti, 2006).
Methods
Compared with previous tree kernels, our descending path kernel has the following advantages: 1) the substructures are simplified so that they are more likely to be shared among trees, which alleviates the sparse-feature issues of previous kernels; 2) soft matching lets two similar structures (e.g., NP → DT JJ NN versus NP → DT NN) receive a high similarity without reference to any corpus or grammar rules; …
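A rough sketch of how such a descending-path representation might work, under the assumption that paths are downward label sequences of bounded length collected from every node; the exact path definition and weighting here are illustrative guesses, not the authors' algorithm:

# Illustrative descending-path overlap; not the paper's exact kernel.
from collections import Counter

def paths_from(tree, max_len):
    """Downward label paths that start at this subtree's root."""
    label = tree[0]
    children = [c for c in tree[1:] if not isinstance(c, str)]
    result = [(label,)]
    if max_len > 1:
        for c in children:
            result += [(label,) + p for p in paths_from(c, max_len - 1)]
    return result

def all_descending_paths(tree, max_len=3):
    """Downward label paths starting at every node of the tree."""
    children = [c for c in tree[1:] if not isinstance(c, str)]
    paths = paths_from(tree, max_len)
    for c in children:
        paths += all_descending_paths(c, max_len)
    return paths

def path_overlap(t1, t2, max_len=3):
    """Count shared descending paths (dot product of path-count vectors)."""
    c1 = Counter(all_descending_paths(t1, max_len))
    c2 = Counter(all_descending_paths(t2, max_len))
    return sum(c1[p] * c2[p] for p in c1.keys() & c2.keys())

np1 = ("NP", ("DT", "the"), ("JJ", "big"), ("NN", "dog"))
np2 = ("NP", ("DT", "the"), ("NN", "dog"))
print(path_overlap(np1, np2))  # 5: partial credit although the productions differ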
tree kernel is mentioned in 18 sentences in this paper.
Topics mentioned in this paper:
Sun, Le and Han, Xianpei
Abstract
The tree kernel is an effective technique for relation extraction.
Abstract
In this paper, we propose a new tree kernel, called feature-enriched tree kernel (FTK), which can enhance the traditional tree kernel by: 1) refining the syntactic tree representation by annotating each tree node with a set of discriminant features; and 2) proposing a new tree kernel which can better measure the syntactic tree similarity by taking all features into consideration.
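As a loose illustration of the two ingredients the abstract names, node-level feature annotation and feature-aware matching, the toy functions below attach a feature set to a tree node and score a node pair by feature overlap; the feature names and the scoring are hypothetical and not the paper's actual FTK definition:

# Hypothetical sketch of feature-annotated nodes and soft node matching;
# this is NOT the FTK definition from the paper.
def annotate(node_label, entity_type=None, is_head=False):
    """Attach a small set of discriminant features to a tree-node label."""
    feats = {("label", node_label)}
    if entity_type is not None:
        feats.add(("entity", entity_type))
    if is_head:
        feats.add(("head", True))
    return feats

def node_similarity(feats1, feats2):
    """Soft match: Jaccard overlap of the two nodes' feature sets."""
    union = feats1 | feats2
    return len(feats1 & feats2) / len(union) if union else 0.0

np_a = annotate("NP", entity_type="PERSON", is_head=True)
np_b = annotate("NP", entity_type="PERSON")
print(node_similarity(np_a, np_b))  # 2/3: two of the three features are shared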
Abstract
Experimental results show that our method can achieve a 5.4% F-measure improvement over the traditional convolution tree kernel.
Introduction
An effective technique is the tree kernel (Zelenko et al., 2003; Zhou et al., 2007; Zhang et al., 2006; Qian et al., 2008), which can exploit syntactic parse tree information for relation extraction.
Introduction
Then the similarity between two trees is computed using a tree kernel, e.g., the convolution tree kernel proposed by Collins and Duffy (2001).
Introduction
Unfortunately, one main shortcoming of the traditional tree kernel is that the syntactic tree representation usually cannot accurately capture the …
tree kernel is mentioned in 28 sentences in this paper.
Topics mentioned in this paper:
Severyn, Aliaksei and Moschitti, Alessandro and Uryupina, Olga and Plank, Barbara and Filippova, Katja
Abstract
We rely on the tree kernel technology to automatically extract and learn features with better generalization power than bag-of-words.
Introduction
In particular, we define an efficient tree kernel derived from the Partial Tree Kernel (Moschitti, 2006a), suitable for encoding structural representations of comments into Support Vector Machines (SVMs).
Representations and models
These trees are input to tree kernel functions for generating structural features.
Representations and models
The latter are automatically generated and learned by SVMs with expressive tree kernels.
Representations and models
In other words, the tree fragment [S [negative-VP [negative-V [destroy]] [PRODUCT-NP [PRODUCT-N [xoom]]]]] is a strong feature (induced by tree kernels) that helps the classifier discriminate such hard cases.
tree kernel is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Kim, Seokhwan and Banchs, Rafael E. and Li, Haizhou
Evaluation
…the domain context tree kernel contributed to producing more precise outputs.
Introduction
Our composite kernel consists of a history sequence kernel and a domain context tree kernel, both of which are composed based on textual units in Wikipedia articles that are similar to a given dialog context.
Wikipedia-based Composite Kernel for Dialog Topic Tracking
Our composite kernel consists of two different kernels: a history sequence kernel and a domain context tree kernel .
Wikipedia-based Composite Kernel for Dialog Topic Tracking
3.2 Domain Context Tree Kernel
Wikipedia-based Composite Kernel for Dialog Topic Tracking
Since this constructed tree structure represents semantic, discourse, and structural information extracted from the Wikipedia paragraphs that are similar to each given instance, we can exploit these enriched features to build the topic tracking model using a subset tree kernel (Collins and Duffy, 2002), which computes the similarity between each pair of trees in the feature space as follows:
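The formula itself did not survive extraction; for reference, the subset tree kernel of Collins and Duffy is standardly written as follows (the paper may apply a different normalization or decay):

$$K(T_1, T_2) = \sum_{n_1 \in N_{T_1}} \sum_{n_2 \in N_{T_2}} \Delta(n_1, n_2),$$

where $N_T$ is the node set of tree $T$ and $\Delta(n_1, n_2)$ counts the common subset trees rooted at $n_1$ and $n_2$, computed recursively: $\Delta(n_1, n_2) = 0$ if the productions at $n_1$ and $n_2$ differ; $\Delta(n_1, n_2) = \lambda$ if both nodes are preterminals with the same production; otherwise $\Delta(n_1, n_2) = \lambda \prod_{j=1}^{nc(n_1)} \bigl(1 + \Delta(\mathrm{ch}(n_1, j), \mathrm{ch}(n_2, j))\bigr)$, with $nc(n)$ the number of children of $n$, $\mathrm{ch}(n, j)$ the $j$-th child, and $0 < \lambda \le 1$ a decay factor.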
tree kernel is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Wang, Chang and Fan, James
Background
Many of them focus on using tree kernels to learn parse tree structure related features (Collins and Duffy, 2001; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005).
Background
For example, by combining tree kernels and convolution string kernels, (Zhang et al., 2006) achieved state-of-the-art performance on the ACE data (ACE, 2004).
Experiments
We compare our approaches to three state-of-the-art methods: SVM with convolution tree kernels (Collins and Duffy, 2001), linear regression, and SVM with linear kernels (Scholkopf and Smola, 2002).
Experiments
To adapt the tree kernel to the medical domain, we followed the approach in (Nguyen et al., 2009) to take the syntactic structures into consideration.
Experiments
We also added the argument types as features to the tree kernel.
tree kernel is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Chen, Yanping and Zheng, Qinghua and Zhang, Wei
Feature Construction
In this field, tree-kernel-based methods commonly use the parse tree to capture structural information (Zelenko et al., 2003; Culotta and Sorensen, 2004).
Related Work
(2012) proposed a convolution tree kernel.
Related Work
(2010) employed a model combining both the feature-based and the tree-kernel-based methods.
Related Work
(2008; 2010) also pointed out that, due to the inaccuracy of Chinese word segmentation and parsing, the tree kernel based approach is inappropriate for Chinese relation extraction.
tree kernel is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Srivastava, Shashank and Hovy, Eduard
Introduction
2.2 Tree kernels
Introduction
Tree Kernel methods have gained popularity in the last decade for capturing syntactic information in the structure of parse trees (Collins and Duffy, 2002; Moschitti, 2006).
Introduction
(2013) have attempted to provide formulations to incorporate semantics into tree kernels through the use of distributional word vectors at the individual word-nodes.
tree kernel is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Guzmán, Francisco and Joty, Shafiq and Màrquez, Llu'is and Nakov, Preslav
Our Discourse-Based Measures
A number of metrics have been proposed to measure the similarity between two labeled trees, e.g., Tree Edit Distance (Tai, 1979) and Tree Kernels (Collins and Duffy, 2001; Moschitti and Basili, 2006).
Our Discourse-Based Measures
Tree kernels (TKs) provide an effective way to integrate arbitrary tree structures in kernel-based machine learning algorithms like SVMs.
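As a concrete illustration of that integration (not taken from the paper), one common route is to precompute the Gram matrix with whatever tree kernel is at hand and pass it to an SVM that accepts precomputed kernels; the tree_kernel function below is a toy stand-in:

# Illustrative only: train an SVM on a precomputed tree-kernel Gram matrix.
import numpy as np
from sklearn.svm import SVC

def tree_kernel(t1, t2):
    """Toy stand-in: overlap of node-label sets; swap in a real tree kernel."""
    return float(len(set(t1) & set(t2)))

def gram_matrix(rows, cols):
    """Kernel matrix between two lists of trees."""
    return np.array([[tree_kernel(a, b) for b in cols] for a in rows])

train_trees = [("S", "NP", "VP"), ("S", "NP", "VP", "PP"), ("FRAG", "NP")]
train_labels = [1, 1, 0]

clf = SVC(kernel="precomputed")
clf.fit(gram_matrix(train_trees, train_trees), train_labels)

test_trees = [("S", "NP", "VP", "ADVP")]
print(clf.predict(gram_matrix(test_trees, train_trees)))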
Our Discourse-Based Measures
…the nuclearity and the relations, in order to allow the tree kernel to give partial credit to subtrees that differ in labels but match in their skeletons.
tree kernel is mentioned in 4 sentences in this paper.
Topics mentioned in this paper: