Index of papers in Proc. ACL 2010 that mention
  • precision and recall
Ravi, Sujith and Baldridge, Jason and Knight, Kevin
Conclusion
Table 6: Comparison of grammar/lexicon observed in the model tagging vs. gold tagging in terms of precision and recall measures for supertagging on CCG-TUT.
Experiments
Precision and recall of grammar and lexicon.
Experiments
Table 3: Comparison of grammar/lexicon observed in the model tagging vs. gold tagging in terms of precision and recall measures for supertagging on CCGbank data.
Experiments
We can obtain a more fine-grained understanding of how the models differ by considering the precision and recall values for the grammars and lexicons of the different models, given in Table 3.
"precision and recall" is mentioned in 7 sentences in this paper.
Park, Keun Chan and Jeong, Yoonjae and Myaeng, Sung Hyon
Conclusion and Future Work
For experience detection, the performance was very promising, close to 92% in precision and recall when all the features were used.
Experience Detection
We not only compared our results with the baseline in terms of precision and recall but also
Experience Detection
The performance for the best case with all the features included is very promising, close to 92% precision and recall.
Experience Detection
In order to see the effect of including individual features in the feature set, precision and recall were measured after eliminating a particular feature from the full set.
Lexicon Construction
Note that the precision and recall are macro-averaged values across the two classes, activity and state.
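The macro-averaging described in this excerpt computes precision and recall separately for each class and then averages the per-class values. A minimal sketch of that computation; the class names follow the excerpt (activity, state), but the counts are illustrative placeholders, not the paper's data:

```python
# Macro-averaged precision/recall over two classes (activity, state).
# Per-class TP/FP/FN counts below are illustrative placeholders.
counts = {
    "activity": {"tp": 40, "fp": 10, "fn": 5},
    "state":    {"tp": 30, "fp": 5,  "fn": 15},
}

def precision(c):
    return c["tp"] / (c["tp"] + c["fp"])

def recall(c):
    return c["tp"] / (c["tp"] + c["fn"])

# Macro-average: mean of the per-class scores (each class weighted equally).
macro_p = sum(precision(c) for c in counts.values()) / len(counts)
macro_r = sum(recall(c) for c in counts.values()) / len(counts)
```

Macro-averaging weights both classes equally regardless of how many instances each has, which is the usual motivation for reporting it when class sizes are imbalanced.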
"precision and recall" is mentioned in 6 sentences in this paper.
Hoffmann, Raphael and Zhang, Congle and Weld, Daniel S.
Conclusion
Many researchers are trying to use IE to create large-scale knowledge bases from natural language text on the Web, but existing relation-specific techniques do not scale to the thousands of relations encoded in Web text, while relation-independent techniques suffer from lower precision and recall, and do not canonicalize the relations.
Extraction with Lexicons
We expect that lists with higher similarity are more likely to contain phrases which are related to our seeds; hence, by varying the similarity threshold one may produce lexicons representing different compromises between lexicon precision and recall.
Introduction
Open extraction is more scalable, but has lower precision and recall.
Related Work
Open IE, self-supervised learning of unlexicalized, relation-independent extractors (Banko et al., 2007), is a more scalable approach, but suffers from lower precision and recall, and doesn’t canonicalize the relations.
Related Work
The goal of set expansion techniques is to generate high precision sets of related items; hence, these techniques are evaluated based on lexicon precision and recall .
"precision and recall" is mentioned in 5 sentences in this paper.
Huang, Fei and Yates, Alexander
Introduction
do not report (NR) separate values for precision and recall on this dataset.
Introduction
Differences in both precision and recall between the baseline and the other systems are statistically significant at p < 0.01 using the two-tailed Fisher’s exact test.
Introduction
Differences in both precision and recall between the baseline and the Span-HMM systems are statistically significant at p < 0.01 using the two-tailed Fisher’s exact test.
"precision and recall" is mentioned in 4 sentences in this paper.
Wu, Fei and Weld, Daniel S.
Abstract
This paper presents WOE, an open IE system which improves dramatically on TextRunner’s precision and recall.
Abstract
WOE can operate in two modes: when restricted to P08 tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.
Introduction
high precision and recall, they are limited by the availability of training data and are unlikely to scale to the thousands of relations found in text on the Web.
Introduction
WOE can operate in two modes: when restricted to shallow features like part-of-speech (POS) tags, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.
"precision and recall" is mentioned in 4 sentences in this paper.
Elson, David and Dames, Nicholas and McKeown, Kathleen
Extracting Conversational Networks from Literature
The precision and recall of our method for detecting conversations is shown in Table 2.
Extracting Conversational Networks from Literature
To calculate precision and recall for the two baseline social networks, we set a threshold t to derive a binary prediction from the continuous edge weights.
Extracting Conversational Networks from Literature
The precision and recall values shown for the baselines in Table 2 represent the highest performance we achieved by varying t between 0 and 1 (maximizing F-measure over t).
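The procedure in these excerpts — binarizing continuous edge weights at a threshold t and picking the t that maximizes F-measure against gold edges — can be sketched as follows. The edge weights and gold labels here are illustrative placeholders, not the paper's data:

```python
# Sweep a binarization threshold t over continuous edge weights and
# keep the t that maximizes F-measure against a gold edge set.
# Weights and gold edges below are illustrative placeholders.
weights = {("a", "b"): 0.9, ("a", "c"): 0.4, ("b", "c"): 0.1, ("c", "d"): 0.7}
gold = {("a", "b"), ("c", "d")}

def prf(predicted, gold):
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

best_t, best_f = None, -1.0
for i in range(101):                      # t in [0, 1] in steps of 0.01
    t = i / 100
    predicted = {e for e, w in weights.items() if w >= t}
    _, _, f = prf(predicted, gold)
    if f > best_f:
        best_t, best_f = t, f
```

Reporting the score at the best threshold, as the excerpt does, gives an upper bound on the baseline's performance, since t is tuned on the evaluation data itself.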
"precision and recall" is mentioned in 3 sentences in this paper.
Ritter, Alan and Mausam and Etzioni, Oren
Experiments
In figure 5 we compare the precision and recall of LDA-SP against the top two performing systems described by Pantel et al.
Experiments
We find that LDA-SP achieves both higher precision and recall than ISP.IIM-∨.
Experiments
Figure 5: Precision and recall on the inference filtering task.
"precision and recall" is mentioned in 3 sentences in this paper.
Spiegler, Sebastian and Flach, Peter A.
Experiments and Results
It seems that the model quickly becomes saturated in terms of incorporating new information, so precision and recall do not change drastically as the dataset size increases.
Experiments and Results
For this reason we broke down the summary measures of precision and recall into their original components: true/false positive (TP/FP) and negative (TN/FN) counts presented in the 2×2 contingency table of Figure 1.
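The decomposition described in this excerpt — recovering precision and recall from the four cells of the contingency table — is a one-liner each. A minimal sketch; the counts are illustrative placeholders, not the paper's data:

```python
# Precision and recall recovered from the four cells of a 2x2
# contingency table. Counts below are illustrative placeholders.
tp, fp, fn, tn = 80, 20, 40, 860

precision = tp / (tp + fp)   # fraction of predicted positives that are correct
recall = tp / (tp + fn)      # fraction of gold positives that are recovered
```

Note that the true-negative count tn does not enter either measure, which is why two systems with the same precision and recall can still differ in their TN/FN balance — the point of breaking the summary measures back into their components.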
Experiments and Results
The optimal solution applying μ* = 0.38 is more balanced between precision and recall and
"precision and recall" is mentioned in 3 sentences in this paper.