Index of papers in Proc. ACL 2008 that mention
  • CRFs
Ding, Shilin and Cong, Gao and Lin, Chin-Yew and Zhu, Xiaoyan
Abstract
In this paper, we propose a general framework based on Conditional Random Fields (CRFs) to detect the contexts and answers of questions from forum threads.
Abstract
We improve the basic framework with Skip-chain CRFs and 2D CRFs to better accommodate the features of forums and improve performance.
Context and Answer Detection
We first discuss using Linear CRFs for context and answer detection, and then extend the basic framework to Skip-chain CRFs and 2D CRFs to better model our problem.
Context and Answer Detection
3.1 Using Linear CRFs
Context and Answer Detection
For ease of presentation, we focus on detecting contexts using Linear CRFs.
Introduction
First, we employ Linear Conditional Random Fields (CRFs) to identify contexts and answers, which can capture the relationships between contiguous sentences.
Introduction
We also extend the basic model to 2D CRFs to model dependency between contiguous questions in a forum thread for context and answer identification.
Introduction
Experimental results show that 1) Linear CRFs outperform SVM and decision tree in both context and answer detection; 2) Skip-chain CRFs outperform Linear CRFs for answer finding, which demonstrates that context improves answer finding; 3) the 2D CRF model improves the performance of Linear CRFs, and the combination of 2D CRFs and Skip-chain CRFs achieves better performance for context detection.
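The comparisons above all rest on decoding a linear-chain CRF over the sentences of a thread. As a minimal sketch (not the authors' code), Viterbi decoding over hypothetical per-sentence label scores and label-transition scores might look like this; the three labels and all score values are invented for illustration:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence under a linear-chain CRF.

    emissions:   (T, L) array of per-sentence label scores
    transitions: (L, L) array of label-to-label scores
    """
    T, L = emissions.shape
    score = emissions[0].copy()          # best score ending in each label
    back = np.zeros((T, L), dtype=int)   # backpointers for recovery
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t]  # (L, L)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Hypothetical labels: 0 = plain, 1 = context, 2 = answer
emissions = np.array([[2.0, 0.5, 0.1],
                      [0.2, 1.5, 0.3],
                      [0.1, 0.2, 2.5]])
transitions = np.log(np.array([[0.6, 0.3, 0.1],
                               [0.2, 0.5, 0.3],
                               [0.3, 0.2, 0.5]]))
print(viterbi(emissions, transitions))  # -> [0, 1, 2]
```

Skip-chain and 2D CRFs extend this picture with extra edges (sentence-to-sentence skips, question-to-question dependencies), which makes exact decoding harder than this simple chain case.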
CRFs is mentioned in 45 sentences in this paper.
Nomoto, Tadashi
A Sentence Trimmer with CRFs
Our idea on how to make CRFs comply with grammar is quite simple: we focus only on those label sequences that are associated with grammatically correct compressions, by making CRFs look only at those that comply with some grammatical constraints G, and ignore the others, regardless of how probable they are.¹ But how do we find compressions that are grammatical?
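The filtering idea can be sketched as decoding restricted to label sequences that satisfy a constraint G; the brute-force enumeration, scoring function, and toy constraint below are hypothetical stand-ins for illustration, not the paper's actual model:

```python
from itertools import product

def best_grammatical_sequence(score, grammatical, length, labels=(0, 1)):
    """Enumerate keep(1)/drop(0) label sequences, discard any that
    violate the grammatical constraint G, and return the survivor
    with the highest score (brute force; illustration only)."""
    candidates = [seq for seq in product(labels, repeat=length) if grammatical(seq)]
    return max(candidates, key=score) if candidates else None

# Toy constraint G: a grammatical compression must keep the head word (token 0).
keep_scores = [0.1, 2.0, -1.0]   # hypothetical per-token keep scores
score = lambda seq: sum(s for lbl, s in zip(seq, keep_scores) if lbl)
grammatical = lambda seq: seq[0] == 1
print(best_grammatical_sequence(score, grammatical, 3))  # -> (1, 1, 0)
```

In practice the grammatical candidates come from a tree generation model rather than enumeration, but the principle is the same: the CRF scores only sequences that G admits.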
A Sentence Trimmer with CRFs
¹Assume as usual that CRFs take the form $p(\mathbf{y} \mid \mathbf{x}) \propto \exp\big(\sum_{k} \big(\sum_{j} \lambda_j f_j(y_k, y_{k-1}, \mathbf{x}) + \sum_{i} \mu_i g_i(x_k, y_k, \mathbf{x})\big)\big)$ (1)
A Sentence Trimmer with CRFs
$f_j$ and $g_i$ are 'features' associated with edges and vertices, respectively, and $k \in C$, where $C$ denotes the set of cliques in the CRF.
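The unnormalized score in this form can be sketched in a few lines; the feature functions, weights, and toy inputs below are illustrative placeholders, not the paper's features:

```python
import math

def crf_unnormalized(y, x, edge_feats, node_feats, lam, mu):
    """Unnormalized p(y|x):
    exp( sum_k [ sum_j lam_j * f_j(y_k, y_{k-1}, x)
               + sum_i mu_i  * g_i(x_k, y_k, x) ] )."""
    s = 0.0
    for k in range(1, len(y)):       # edge cliques (y_k, y_{k-1})
        s += sum(l * f(y[k], y[k - 1], x) for l, f in zip(lam, edge_feats))
    for k in range(len(y)):          # vertex cliques (x_k, y_k)
        s += sum(m * g(x[k], y[k], x) for m, g in zip(mu, node_feats))
    return math.exp(s)

# Toy features: label continuity on edges; question mark + Q label on vertices.
edge_feats = [lambda yk, yp, x: 1.0 if yk == yp else 0.0]
node_feats = [lambda xk, yk, x: 1.0 if xk.endswith("?") and yk == "Q" else 0.0]
print(crf_unnormalized(["Q", "A"], ["how?", "ok"],
                       edge_feats, node_feats, [0.5], [1.0]))
```

Normalizing this quantity over all label sequences (the partition function) gives the actual probability; restricting that sum to grammatical sequences is exactly where the constraint G enters.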
Abstract
The paper presents a novel sentence trimmer for Japanese, which combines a non-statistical yet generic tree generation model and Conditional Random Fields (CRFs), to improve the grammaticality of compression while retaining its relevance.
Features in CRFs
We use an array of features in CRFs that are either derived or borrowed from the taxonomy used by JUMAN, a Japanese tokenizer, and KNP,⁶ a Japanese dependency parser (the Kurohashi-Nagao Parser), in characterizing their output: both JUMAN and KNP are part of the compression model we build.
Introduction
What sets this work apart from them, however, is a novel use we make of Conditional Random Fields (CRFs) to select among possible compressions (Lafferty et al., 2001; Sutton and McCallum, 2006).
Introduction
An obvious benefit of using CRFs for sentence compression is that the model provides a general (and principled) probabilistic framework which permits information from various sources to be integrated toward sentence compression, a property K&M do not share.
Introduction
Nonetheless, there is some cost that comes with the straightforward use of CRFs as a discriminative classifier in sentence compression: its outputs are often ungrammatical, and it allows no control over the length of the compressions it generates (Nomoto, 2007).
CRFs is mentioned in 15 sentences in this paper.
Kazama, Jun'ichi and Torisawa, Kentaro
Experiments
We think this shows one of the strengths of machine learning methods such as CRFs .
Gazetteer Induction 2.1 Induction by MN Clustering
We expect that tagging models (CRFs in our case) can learn an appropriate weight for each gazetteer match regardless of whether it is an NE or not.
Related Work and Discussion
Using models such as Semi-Markov CRFs (Sarawagi and Cohen, 2004), which handle the features on overlapping regions, is one possible direction.
Using Gazetteers as Features of NER
The NER task is then treated as a tagging task, which assigns IOB tags to each character in a sentence.¹⁰ We use Conditional Random Fields (CRFs) (Lafferty et al., 2001) to perform this tagging.
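One plausible way to turn gazetteer matches into per-character features for such a tagger is sketched below; this is an assumed construction for illustration, not the paper's implementation, and the feature names are invented:

```python
def gazetteer_features(chars, gazetteer):
    """Mark each character with GAZ-B / GAZ-I when it begins / continues
    a gazetteer entry; the CRF then learns a weight per feature, so a
    match need not be trusted as an NE outright."""
    feats = [set() for _ in chars]
    text = "".join(chars)
    for entry in gazetteer:
        start = text.find(entry)
        while start != -1:
            feats[start].add("GAZ-B")
            for i in range(start + 1, start + len(entry)):
                feats[i].add("GAZ-I")
            start = text.find(entry, start + 1)
    return feats

feats = gazetteer_features(list("visit tokyo"), {"tokyo"})
print(feats[6], feats[10])  # first and last characters of the matched entry
```

These binary features would be concatenated with the usual character-level features before training the CRF, letting the model weight each gazetteer match on its own.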
CRFs is mentioned in 4 sentences in this paper.