Citation Extraction Data | Our score compares how often the constraint is Violated in the ground truth examples versus our predictions. |
Citation Extraction Data | Note that it may make sense to consider a constraint that is sometimes Violated in the ground truth, as the penalty learning algorithm can learn a small penalty for it, which
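Citation Extraction Data | The comparison above can be sketched as a small helper (a minimal sketch with hypothetical names; `violates` is any predicate returning True when a labeling violates the constraint):

```python
def violation_gap(violates, gold_labelings, predicted_labelings):
    """Fraction of predictions violating a constraint minus the fraction
    of ground truth labelings violating it. A large positive gap suggests
    the constraint is worth enforcing with a learned penalty."""
    gold_rate = sum(map(violates, gold_labelings)) / len(gold_labelings)
    pred_rate = sum(map(violates, predicted_labelings)) / len(predicted_labelings)
    return pred_rate - gold_rate

# Toy constraint: a citation should contain at most one TITLE segment.
violates_one_title = lambda labels: labels.count("B-TITLE") > 1

gold = [["B-TITLE", "I-TITLE", "B-AUTHOR"], ["B-AUTHOR", "B-TITLE"]]
pred = [["B-TITLE", "B-TITLE", "B-AUTHOR"], ["B-AUTHOR", "B-TITLE"]]
print(violation_gap(violates_one_title, gold, pred))  # 0.5
```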
Citation Extraction Data | Here, y_d denotes the ground truth labeling and w_d is the vector of scores for the CRF tagger.
Soft Constraints in Dual Decomposition | Intuitively, the perceptron update increases the penalty for a constraint if it is satisfied in the ground truth and not in an inferred prediction, and decreases the penalty if the constraint is satisfied in the prediction and not the ground truth.
Soft Constraints in Dual Decomposition | Since we truncate penalties at 0, this suggests that we will learn a penalty of 0 for constraints in three categories: constraints that do not hold in the ground truth, constraints that hold in the ground truth but are satisfied in practice by performing inference in the base CRF model, and constraints that are satisfied in practice as a side-effect of imposing nonzero penalties on some other constraints.
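Soft Constraints in Dual Decomposition | The update described above can be sketched as follows (a minimal sketch with hypothetical helper names; each constraint is a predicate returning True when satisfied, and penalties are truncated at zero):

```python
def update_penalties(penalties, constraints, y_gold, y_pred, lr=0.1):
    """Perceptron-style update for soft-constraint penalties: raise a
    penalty when the constraint is satisfied in the ground truth but not
    in the prediction, lower it in the opposite case, and truncate at
    zero so penalties stay non-negative."""
    updated = dict(penalties)
    for name, satisfied in constraints.items():
        in_gold, in_pred = satisfied(y_gold), satisfied(y_pred)
        if in_gold and not in_pred:
            updated[name] += lr
        elif in_pred and not in_gold:
            updated[name] = max(0.0, updated[name] - lr)
    return updated

constraints = {"one_title": lambda y: y.count("B-TITLE") <= 1}
p = update_penalties({"one_title": 0.0}, constraints,
                     y_gold=["B-TITLE", "B-AUTHOR"],
                     y_pred=["B-TITLE", "B-TITLE"])
print(p)  # {'one_title': 0.1}
```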
Experiments | Table 2: Ground truth annotation
Experiments | After that, the same dataset was annotated independently by a group of expert annotators to create the ground truth.
Experiments | We first describe the AMT annotation and ground truth annotation, and then discuss the baselines and experimental results. |
Data | 2.3 Ground Truth |
Data | The ground truth for training and evaluating our regression models is formed by voting intention polls from YouGov (UK) and, for the Austrian case study, a collection of Austrian pollsters, as no single pollster performed high-frequency polling.
Experiments | (a) Ground Truth (polls) |
Experiments | Sub-figure 3a is a plot of ground truth as presented in voting intention polls.
Conclusion and Future Work | Facebook would be an ideal ground truth knowledge base.
Dataset Creation | To obtain ground truth for the spouse relation at large scale, we turned to Freebase, a large, open-domain database, and gathered instances of the /PEOPLE/PERSON/SPOUSE relation.
Introduction | Inspired by the concept of distant supervision, we collect training tweets by matching attribute ground truth from an outside “knowledge base” such as Facebook or Google Plus. |
Introduction | We are optimistic that our approach can easily be applied to further user attributes such as HOBBIES and INTERESTS (MOVIES, BOOKS, SPORTS or STARS), RELIGION, HOMETOWN, LIVING LOCATION, FAMILY MEMBERS and so on, where training data can be obtained by matching ground truth retrieved from multiple types of online social media such as Facebook, Google Plus, or LinkedIn. |
Related Work | Distant Supervision Distant supervision, also known as weak supervision, is a method for learning to extract relations from text using ground truth from an existing database as a source of supervision.
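Related Work | A minimal sketch of the distant supervision idea, assuming a simple substring match between knowledge-base entity pairs and raw sentences (hypothetical names; real systems use entity linking rather than raw string matching):

```python
def distant_label(sentences, kb_pairs, relation="SPOUSE"):
    """Mark a sentence as a positive training example for the relation
    when it mentions both entities of a known knowledge-base pair."""
    examples = []
    for sent in sentences:
        for e1, e2 in kb_pairs:
            if e1 in sent and e2 in sent:
                examples.append((sent, e1, e2, relation))
    return examples

kb = [("Barack Obama", "Michelle Obama")]
sentences = ["Barack Obama married Michelle Obama in 1992.",
             "Barack Obama was born in Hawaii."]
print(distant_label(sentences, kb))
```

Only the first sentence mentions both entities, so only it becomes a (noisy) positive example.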
Experiments | 4.2 Ground Truth Generation |
Experiments | For ground truth, we consider a bursty topic to be correct if both human judges have scored it 1.
Experiments | topics from the ground truth bursty topics. |
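Experiments | The unanimous-agreement rule above can be sketched as (hypothetical data layout: one pair of judge scores per candidate topic):

```python
def gold_bursty_topics(judge_scores):
    """Keep only candidate topics that both human judges scored 1."""
    return [topic for topic, (j1, j2) in judge_scores.items()
            if j1 == 1 and j2 == 1]

judge_scores = {"earthquake": (1, 1), "royal wedding": (1, 0), "meme": (0, 0)}
print(gold_bursty_topics(judge_scores))  # ['earthquake']
```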
Conclusions | In some domains, such as medical diagnosis, it makes perfect sense to assume that there is a ground truth.
Related Work | Although in our case study we have tested our aggregators by comparing their outcomes to a gold standard, our approach to collective annotation itself does not assume that there is in fact a ground truth.
Related Work | In application domains where it is reasonable to assume the existence of a ground truth and where we are able to model the manner in which individual judgments are being distorted relative to this ground truth, social choice theory provides tools (using again maximum-likelihood estimators) for the design of aggregators that maximise chances of recovering the ground truth for a given model of distortion (Young, 1995; Conitzer and Sandholm, 2005). |
Related Work | Specifically, they have designed an experiment in which the ground truth is defined unambiguously and known to the experiment designer, so as to be able to extract realistic models of distortion from the data collected in a crowdsourcing exercise. |
Experimental Results | To test if our joint model of Brain+Text is closer to semantic ground truth we compared the latent representation A learned via JNNSE(Brain+Text) or NNSE(Text) to an independent behavioral measure of semantics. |
Future Work and Conclusion | We have provided evidence that the latent representations are closer to the neural representation of semantics, and possibly, closer to semantic ground truth.
Introduction | If brain activation data encodes semantics, we theorized that including brain data in a model of semantics could result in a model more consistent with semantic ground truth . |
Introduction | Our results are evidence that a joint model of brain- and text-based semantics may be closer to semantic ground truth than text-only models. |
Guided DS | Our goal is to jointly model human-labeled ground truth and structured data from a knowledge base in distant supervision. |
Guided DS | The input to the model consists of (1) distantly supervised data, represented as a list of n bags with a vector y_i of binary gold-standard labels, either Positive (P) or Negative (N), for each relation r ∈ R; (2) generalized human-labeled ground truth, represented as a set G of feature conjunctions g = {g_i | i = 1, 2, 3} associated with a unique relation r(g).
Guided DS | We introduce a set of latent variables h_i which model human ground truth for each mention in the i-th bag and take precedence over the current model assignment z_i.
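Guided DS | The precedence described above can be sketched as follows (a simplified sketch with hypothetical names; None stands in for mentions with no matching human-labeled feature conjunction):

```python
def resolve_mention_labels(z, h):
    """For each mention in a bag, the human-ground-truth latent variable
    h_i (when available) takes precedence over the model's current
    assignment z_i."""
    return [hi if hi is not None else zi for zi, hi in zip(z, h)]

z = ["Negative", "Negative", "Positive"]
h = ["Positive", None, None]  # human evidence only for the first mention
print(resolve_mention_labels(z, h))  # ['Positive', 'Negative', 'Positive']
```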
Results | The Effect of Constraints in Learning Our training method updates parameters to satisfy the pairwise constraints between (1) subsequent samples on the sampling path and (2) selected samples and the ground truth.
Results | “Neighbor” corresponds to pairwise constraints between subsequent samples, “Gold” represents constraints between a single sample and the ground truth, “Both” means applying both types of constraints.
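Results | A minimal sketch of such a pairwise-constraint update (hypothetical names; sparse feature vectors as dicts; `quality` scores a sample against the ground truth, e.g. by attachment accuracy):

```python
def pairwise_update(w, feats, quality, y_a, y_b, lr=1.0):
    """Update weights w so the higher-quality sample of the pair
    out-scores the lower-quality one. 'Neighbor' constraints pair
    subsequent samples on the sampling path; 'Gold' constraints pair a
    sample with the ground truth."""
    if quality(y_a) == quality(y_b):
        return w  # no preference between the pair, no update
    better, worse = (y_a, y_b) if quality(y_a) > quality(y_b) else (y_b, y_a)
    score = lambda y: sum(w.get(f, 0.0) * v for f, v in feats(y).items())
    if score(better) <= score(worse):  # constraint violated
        for f, v in feats(better).items():
            w[f] = w.get(f, 0.0) + lr * v
        for f, v in feats(worse).items():
            w[f] = w.get(f, 0.0) - lr * v
    return w

# Toy setup: each candidate parse contributes a single indicator feature.
feats = lambda y: {y: 1.0}
quality = lambda y: 1.0 if y == "gold-parse" else 0.0
w = pairwise_update({}, feats, quality, "gold-parse", "sampled-parse")
print(w)  # {'gold-parse': 1.0, 'sampled-parse': -1.0}
```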
Sampling-Based Dependency Parsing with Global Features | A training set of size N is given as a set of pairs D = {(x^(i), y^(i))}_{i=1}^N, where y^(i) is the ground truth parse for sentence x^(i).