Index of papers in Proc. ACL 2011 that mention
  • CRF
Benson, Edward and Haghighi, Aria and Barzilay, Regina
Evaluation
Figure legend: LowThresh, CRF, List, Our Work, Our Work+Con.
Evaluation
The CRF and hard-constrained consensus lines terminate because of low record yield.
Evaluation Setup
Figure legend: LowThresh, CRF, List, Our Work.
Evaluation Setup
The CRF lines terminate because of low record yield.
Evaluation Setup
Our List Baseline labels messages by finding string overlaps against a list of musical artists and venues scraped from web data (the same lists used as features in our CRF component).
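As a rough illustration of such a string-overlap baseline (the lists, label names, and function below are hypothetical placeholders, not the authors' code), the matching step can be sketched as:

# Sketch of a gazetteer-overlap baseline: label any token span that exactly
# matches an entry in a scraped artist or venue list. ARTISTS, VENUES, and
# label_message are illustrative placeholders, not from the paper.
ARTISTS = {"the black keys", "feist"}
VENUES = {"the fillmore", "madison square garden"}

def label_message(tokens, max_len=5):
    """Return (start, end, label) triples for spans found in either list."""
    labels = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
            span = " ".join(tokens[start:end]).lower()
            if span in ARTISTS:
                labels.append((start, end, "ARTIST"))
            elif span in VENUES:
                labels.append((start, end, "VENUE"))
    return labels

print(label_message("see feist at the fillmore tonight".split()))  # [(1, 2, 'ARTIST'), (3, 5, 'VENUE')]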
Inference
Since a uniform initialization of all factors is a saddle-point of the objective, we opt to initialize the q(y) factors with the marginals obtained using just the CRF parameters, accomplished by running forward-backward on all messages using only the
Inference
To do so, we run the CRF component of our model (φSEQ) over the corpus and extract, for each label ℓ, all spans that have a token-level probability of being labeled ℓ greater than a threshold of 0.1.
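A minimal sketch of that thresholding step, assuming the CRF's per-token marginals are already available as dictionaries mapping labels to probabilities (the data layout and function name are assumptions, not the paper's code):

# Keep maximal runs of tokens whose marginal probability of carrying label
# `lbl` exceeds the threshold. `marginals` is a list with one {label: prob}
# dict per token; this is an illustration only.
def extract_spans(marginals, lbl, threshold=0.1):
    spans, start = [], None
    for i, dist in enumerate(marginals):
        if dist.get(lbl, 0.0) > threshold:
            if start is None:
                start = i
        elif start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(marginals)))
    return spans

marginals = [{"ARTIST": 0.05}, {"ARTIST": 0.6}, {"ARTIST": 0.3}, {"ARTIST": 0.02}]
print(extract_spans(marginals, "ARTIST"))  # [(1, 3)]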
Introduction
We bias local decisions made by the CRF to be consistent with canonical record values, thereby facilitating consistency within an event cluster.
Model
The sequence labeling factor is similar to a standard sequence CRF (Lafferty et al., 2001), where the potential over a message label sequence decomposes
Model
The weights of the CRF component of our model, θSEQ, are the only weights learned at training time, using a distant supervision process described in Section 6.
CRF is mentioned in 11 sentences in this paper.
Clifton, Ann and Sarkar, Anoop
Models 2.1 Baseline Models
A conditional random field (CRF) (Lafferty et al., 2001) defines the conditional probability as a linear score for each candidate y and a global normalization term:
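The colon introduces an equation that is not reproduced in this excerpt; the standard form from Lafferty et al. (2001), written with a weight vector w and a global feature map Φ (a reconstruction, not necessarily the paper's exact notation), is:

p(y \mid x) = \frac{\exp\big(w \cdot \Phi(x, y)\big)}{Z(x)}, \qquad Z(x) = \sum_{y'} \exp\big(w \cdot \Phi(x, y')\big)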
Models 2.1 Baseline Models
However, the output y* from the CRF decoder is still only a sequence of abstract suffix tags.
Models 2.1 Baseline Models
The abstract suffix tags are extracted from the unsupervised morpheme learning process, and are carefully designed to enable CRF training and decoding.
CRF is mentioned in 12 sentences in this paper.
LIU, Xiaohua and ZHANG, Shaodian and WEI, Furu and ZHOU, Ming
Abstract
We propose to combine a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model under a semi-supervised learning framework to tackle these challenges.
Abstract
The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet.
Introduction
Following the two-stage prediction aggregation methods (Krishnan and Manning, 2006), such pre-labeled results, together with other conventional features used by the state-of-the-art NER systems, are fed into a linear Conditional Random Fields (CRF) (Lafferty et al., 2001) model, which conducts fine-grained tweet level NER.
Introduction
Furthermore, the KNN and CRF models are repeatedly retrained with an incrementally augmented training set, into which tweets labeled with high confidence are added.
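That retraining loop can be sketched schematically as follows; every callable passed in (train_knn, train_crf, knn_prelabel, crf_label) is a placeholder standing in for the paper's components, not the authors' actual code:

# Schematic semi-supervised loop: pre-label with KNN, label with the CRF,
# promote tweets labeled above a confidence threshold into the training set,
# then retrain both models on the augmented data.
def semi_supervised_ner(labeled, unlabeled, train_knn, train_crf,
                        knn_prelabel, crf_label,
                        rounds=5, conf_threshold=0.9):
    knn = crf = None
    for _ in range(rounds):
        knn = train_knn(labeled)                  # word-level KNN classifier
        crf = train_crf(labeled)                  # linear-chain CRF
        confident, remaining = [], []
        for tweet in unlabeled:
            coarse = knn_prelabel(knn, tweet)     # coarse cross-tweet evidence
            labels, confidence = crf_label(crf, tweet, coarse)
            if confidence >= conf_threshold:
                confident.append((tweet, labels))
            else:
                remaining.append(tweet)
        if not confident:                         # nothing new to learn from
            break
        labeled = labeled + confident
        unlabeled = remaining
    return knn, crf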
Introduction
Indeed, it is the combination of KNN and CRF under a semi-supervised learning framework that differentiates our approach from existing work.
Related Work
(2010) use Amazon's Mechanical Turk service and CrowdFlower to annotate named entities in tweets and train a CRF model to evaluate the effectiveness of human labeling.
Related Work
To achieve this, a KNN classifier is combined with a CRF model to leverage cross-tweet information, and semi-supervised learning is adopted to leverage unlabeled tweets.
Related Work
(2005) use CRF to train a sequential NE labeler, in which the BIO (meaning the Beginning, the Inside, and the Outside of
CRF is mentioned in 35 sentences in this paper.
Bendersky, Michael and Croft, W. Bruce and Smith, David A.
Experiments
The CRF model training in line (6) of the algorithm is implemented using the CRF++ toolkit.
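For reference, CRF++ is driven from the command line with crf_learn and crf_test; a minimal way to invoke it from Python (the file names here are placeholders, not the paper's data) might look like:

# Train a CRF++ model and tag a test file via the command-line tools.
# "template", "train.txt", "test.txt", and "model" are placeholder paths.
import subprocess

subprocess.run(["crf_learn", "template", "train.txt", "model"], check=True)
result = subprocess.run(["crf_test", "-m", "model", "test.txt"],
                        check=True, capture_output=True, text=True)
print(result.stdout)  # input columns with the predicted label appended as a final column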
Joint Query Annotation
Accordingly, we can directly use a supervised sequential probabilistic model such as CRF (Lafferty et al., 2001)
Joint Query Annotation
In this CRF
Joint Query Annotation
It then produces a set of independent annotation estimates, which are jointly used, together with the ground truth annotations, to learn a CRF model for each annotation type.
CRF is mentioned in 5 sentences in this paper.
Chen, Harr and Benson, Edward and Naseem, Tahira and Barzilay, Regina
Results
Comparison against Supervised CRF Our final set of experiments compares a semi-supervised version of our model against a conditional random field (CRF) model.
Results
The CRF model was trained using the same features as our model’s argument features.
Results
At the sentence level, our model compares very favorably to the supervised CRF.
CRF is mentioned in 5 sentences in this paper.
Nagata, Ryo and Whittaker, Edward and Sheinman, Vera
UK and XP stand for unknown and X phrase, respectively.
“CRFTagger: CRF English POS Tagger,” Xuan-Hieu Phan, http://crftagger.
UK and XP stand for unknown and X phrase, respectively.
Method    Native Corpus    Learner Corpus
CRF       0.970            0.932
HMM       0.887            0.926
UK and XP stand for unknown and X phrase, respectively.
Table column headers: HMM, CRF, POS, Freq.
CRF is mentioned in 4 sentences in this paper.