Index of papers in Proc. ACL 2014 that mention
  • ground truth
Anzaroot, Sam and Passos, Alexandre and Belanger, David and McCallum, Andrew
Citation Extraction Data
Our score compares how often the constraint is violated in the ground truth examples versus in our predictions.
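A minimal sketch of one way to read this score (the names `violated`, `gold_seqs`, and `pred_seqs` are illustrative assumptions, not taken from the paper):

```python
def constraint_score(violated, gold_seqs, pred_seqs):
    """Compare how often a constraint is violated in the ground truth
    examples versus in the model's predictions.

    violated: predicate mapping a label sequence to True/False.
    """
    gold_rate = sum(map(violated, gold_seqs)) / len(gold_seqs)
    pred_rate = sum(map(violated, pred_seqs)) / len(pred_seqs)
    # Constraints the model violates far more often than the ground
    # truth does are the most promising candidates to impose.
    return pred_rate - gold_rate
```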
Citation Extraction Data
Note that it may make sense to consider a constraint that is sometimes violated in the ground truth, as the penalty learning algorithm can learn a small penalty for it, which …
Citation Extraction Data
Here, y_d denotes the ground truth labeling and w_d is the vector of scores for the CRF tagger.
Soft Constraints in Dual Decomposition
Intuitively, the perceptron update increases the penalty for a constraint if it is satisfied in the ground truth and not in an inferred prediction, and decreases the penalty if the constraint is satisfied in the prediction and not the ground truth.
Soft Constraints in Dual Decomposition
Since we truncate penalties at 0, this suggests that we will learn a penalty of 0 for constraints in three categories: constraints that do not hold in the ground truth, constraints that hold in the ground truth but are satisfied in practice by performing inference in the base CRF model, and constraints that are satisfied in practice as a side-effect of imposing nonzero penalties on some other constraints.
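A minimal sketch of a perceptron-style penalty update consistent with the two snippets above (not the paper's full dual-decomposition procedure; all names are hypothetical):

```python
def update_penalties(penalties, constraints, y_gold, y_pred, lr=0.1):
    """Perceptron-style update for soft-constraint penalties.

    penalties:   dict mapping constraint id -> non-negative penalty.
    constraints: dict mapping constraint id -> satisfied(y) predicate.

    Raises a penalty when the constraint holds in the ground truth but
    not in the prediction, lowers it in the opposite case, and truncates
    at 0 so that penalties never go negative.
    """
    for cid, satisfied in constraints.items():
        in_gold, in_pred = satisfied(y_gold), satisfied(y_pred)
        if in_gold and not in_pred:
            penalties[cid] += lr
        elif in_pred and not in_gold:
            penalties[cid] = max(0.0, penalties[cid] - lr)
    return penalties
```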
ground truth is mentioned in 10 sentences in this paper.
Martineau, Justin and Chen, Lu and Cheng, Doreen and Sheth, Amit
Experiments
Table 2: Ground truth annotation
Experiments
After that, the same dataset was annotated independently by a group of expert annotators to create the ground truth.
Experiments
We first describe the AMT annotation and ground truth annotation, and then discuss the baselines and experimental results.
ground truth is mentioned in 9 sentences in this paper.
Li, Jiwei and Ritter, Alan and Hovy, Eduard
Conclusion and Future Work
Facebook would be an ideal ground truth knowledge base.
Dataset Creation
To obtain ground truth for the spouse relation at large scale, we turned to Freebase, a large, open-domain database, and gathered instances of the /PEOPLE/PERSON/SPOUSE relation.
Introduction
Inspired by the concept of distant supervision, we collect training tweets by matching attribute ground truth from an outside “knowledge base” such as Facebook or Google Plus.
Introduction
We are optimistic that our approach can easily be applied to further user attributes such as HOBBIES and INTERESTS (MOVIES, BOOKS, SPORTS or STARS), RELIGION, HOMETOWN, LIVING LOCATION, FAMILY MEMBERS and so on, where training data can be obtained by matching ground truth retrieved from multiple types of online social media such as Facebook, Google Plus, or LinkedIn.
Related Work
Distant Supervision. Distant supervision, also known as weak supervision, is a method for learning to extract relations from text using ground truth from an existing database as a source of supervision.
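A minimal sketch of this distant-supervision matching step, here specialized to the spouse relation (function and variable names are illustrative assumptions, not the paper's code):

```python
def collect_positive_tweets(tweets, spouse_pairs):
    """Gather distantly supervised positives: a tweet that mentions both
    members of a known (person, spouse) pair from the knowledge base
    (e.g. Freebase's /PEOPLE/PERSON/SPOUSE instances) is treated as a
    positive training example for the spouse relation.

    tweets:       iterable of raw tweet strings.
    spouse_pairs: set of (person, spouse) ground truth pairs.
    """
    positives = []
    for text in tweets:
        for person, spouse in spouse_pairs:
            if person in text and spouse in text:
                positives.append((text, (person, spouse)))
    return positives
```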
ground truth is mentioned in 5 sentences in this paper.
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Experimental Results
To test if our joint model of Brain+Text is closer to semantic ground truth, we compared the latent representation A learned via JNNSE(Brain+Text) or NNSE(Text) to an independent behavioral measure of semantics.
Future Work and Conclusion
We have provided evidence that the latent representations are closer to the neural representation of semantics and, possibly, closer to semantic ground truth.
Introduction
If brain activation data encodes semantics, we theorized that including brain data in a model of semantics could result in a model more consistent with semantic ground truth .
Introduction
Our results are evidence that a joint model of brain- and text-based semantics may be closer to semantic ground truth than text-only models.
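One plausible way to operationalize "closer to semantic ground truth" (a sketch only; the paper's exact evaluation may differ) is to correlate pairwise distances in the latent space A with an independent behavioral dissimilarity measure over the same words:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def behavioral_fit(A, behavioral_dissim):
    """Spearman correlation between pairwise cosine distances of the
    rows of a latent representation A (words x dimensions) and a
    behavioral dissimilarity vector over the same word pairs, given in
    the condensed order that pdist uses. A higher correlation means the
    representation is closer to the behavioral measure of semantics.
    """
    latent_dissim = pdist(np.asarray(A), metric="cosine")
    rho, _ = spearmanr(latent_dissim, behavioral_dissim)
    return rho
```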
ground truth is mentioned in 4 sentences in this paper.
Pershina, Maria and Min, Bonan and Xu, Wei and Grishman, Ralph
Guided DS
Our goal is to jointly model human-labeled ground truth and structured data from a knowledge base in distant supervision.
Guided DS
The input to the model consists of (1) distantly supervised data, represented as a list of n bags with a vector y_i of binary gold-standard labels, either Positive (P) or Negative (N), for each relation r ∈ R; (2) generalized human-labeled ground truth, represented as a set G of feature conjunctions g = {g_i | i = 1, 2, 3} associated with a unique relation r(g).
Guided DS
We introduce a set of latent variables h_i which model human ground truth for each mention in the i-th bag and take precedence over the current model assignment z_i.
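A minimal sketch of the input representation these snippets describe (field names are assumptions for illustration, not the paper's code):

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Optional

@dataclass
class Bag:
    """One of the n bags of distantly supervised mentions. y holds the
    binary gold-standard labels ('P' or 'N') per relation, z the current
    per-mention model assignments, and h the latent human-ground-truth
    variables that, when set, take precedence over z."""
    mentions: List[str]
    y: List[str]
    z: List[Optional[str]] = field(default_factory=list)
    h: List[Optional[str]] = field(default_factory=list)

@dataclass(frozen=True)
class GuidedExample:
    """A feature conjunction g = {g_1, g_2, g_3} from the generalized
    human-labeled ground truth G, tied to a unique relation r(g)."""
    g: FrozenSet[str]
    relation: str
```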
ground truth is mentioned in 3 sentences in this paper.
Zhang, Yuan and Lei, Tao and Barzilay, Regina and Jaakkola, Tommi and Globerson, Amir
Results
The Effect of Constraints in Learning. Our training method updates parameters to satisfy the pairwise constraints between (1) subsequent samples on the sampling path and (2) selected samples and the ground truth.
Results
“Neighbor” corresponds to pairwise constraints between subsequent samples; “Gold” represents constraints between a single sample and the ground truth; “Both” means applying both types of constraints.
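A minimal sketch of how such pairwise constraints could be enumerated (the function and the (worse, better) pair convention are illustrative assumptions):

```python
def pairwise_constraints(path, y_gold, mode="Both"):
    """Enumerate (worse, better) training pairs as described above.

    path: samples in the order visited on the sampling path, each
          expected to score no worse than its predecessor.

    'Neighbor' pairs subsequent samples on the path; 'Gold' pairs each
    sample with the ground truth; 'Both' applies both constraint types.
    """
    pairs = []
    if mode in ("Neighbor", "Both"):
        pairs.extend(zip(path, path[1:]))
    if mode in ("Gold", "Both"):
        pairs.extend((y, y_gold) for y in path)
    return pairs
```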
Sampling-Based Dependency Parsing with Global Features
A training set of size N is given as a set of pairs D = {(x^(i), y^(i))}_{i=1}^{N}, where y^(i) is the ground truth parse for sentence x^(i).
ground truth is mentioned in 3 sentences in this paper.