Index of papers in Proc. ACL 2014 that mention
  • confidence score
Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming
Introduction
Word embeddings are used as the input to learn a translation confidence score, which is combined with commonly used features in the conventional log-linear model.
Our Model
y[l,m] is the confidence score of how plausible it is that the parent node should be created.
Our Model
The recurrent input vector x[l,m] is concatenated with the parent node representation s[l,m] to compute the confidence score y[l,m].
Phrase Pair Embedding
The one-hot representation vector is used as the input, and a one-hidden-layer network generates a confidence score.
Phrase Pair Embedding
To train the neural network, we add the confidence scores to the conventional log-linear model as features.
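The snippets above describe adding neural confidence scores as extra features in a conventional log-linear translation model. A minimal sketch of that kind of feature combination (feature names, values, and weights here are illustrative, not taken from the paper):

```python
def log_linear_score(features, weights):
    """Score a hypothesis as a weighted sum of its feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Conventional log-domain features plus a neural confidence score as one more feature.
features = {
    "log_translation_prob": -2.3,
    "log_language_model": -4.1,
    "neural_confidence": 0.8,   # confidence score produced by the network
}
weights = {
    "log_translation_prob": 1.0,
    "log_language_model": 0.5,
    "neural_confidence": 2.0,
}

score = log_linear_score(features, weights)
```

During tuning (e.g., MERT), the weight on the confidence-score feature would be learned alongside the other feature weights.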
Phrase Pair Embedding
We use a recurrent neural network to generate two smoothed translation confidence scores based on source and target word embeddings.
Related Work
RNNLM (Mikolov et al., 2010) is first used to generate the source and target word embeddings, which are fed into a one-hidden-layer neural network to get a translation confidence score.
Related Work
Together with other commonly used features, the translation confidence score is integrated into a conventional log-linear model.
Related Work
Given the representations of the smaller phrase pairs, a recursive auto-encoder can generate the representation of the parent phrase pair with a reordering confidence score.
confidence score is mentioned in 11 sentences in this paper.
Wang, Yue and Liu, Xitong and Fang, Hui
Concept-based Representation for Medical Records Retrieval
In particular, MetaMap (Aronson, 2001) can take a text string as the input, segment it into phrases, and then map each phrase to multiple UMLS CUIs with confidence scores.
Concept-based Representation for Medical Records Retrieval
The confidence score is an indicator of the quality of the phrase-to-concept mapping by MetaMap.
Concept-based Representation for Medical Records Retrieval
confidence score as well as more detailed information about this concept.
Experiments
As shown in Equation (3), the Balanced method regularizes the weights through two components: (1) the normalized confidence score of each aspect,
Weighting Strategies for Concept-based Representation
Although MetaMap is able to rank all the candidate concepts with the confidence score and pick the most likely one, the accuracy is not very high.
Weighting Strategies for Concept-based Representation
i(e) is the normalized confidence score of the mapping for concept e generated by MetaMap.
Weighting Strategies for Concept-based Representation
Since each concept mapping is associated with a confidence score, we can incorporate them into the regularization function as follows:
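The weighting strategy described in these snippets normalizes MetaMap's per-concept mapping confidences and uses them as concept weights. A small sketch under that reading (the function name, fallback behavior, and the example CUIs/scores are mine, not the paper's):

```python
def normalized_confidences(mappings):
    """Normalize per-concept confidence scores so they sum to 1 for a phrase."""
    total = sum(mappings.values())
    if total == 0:
        # Assumed fallback: uniform weights when no confidence mass is returned.
        return {concept: 1.0 / len(mappings) for concept in mappings}
    return {concept: score / total for concept, score in mappings.items()}

# Candidate UMLS concepts for one phrase, with illustrative MetaMap confidences.
weights = normalized_confidences({"C0011849": 861, "C0011860": 694})
```

The normalized weights can then enter the regularization function in place of treating every candidate concept equally.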
confidence score is mentioned in 8 sentences in this paper.
Chen, Liwei and Feng, Yansong and Huang, Songfang and Qin, Yong and Zhao, Dongyan
Experiments
The preliminary relation extractor of our optimization framework is not limited to the MaxEnt extractor, and can take any sentence-level relation extractor with confidence scores.
Experiments
Furthermore, the confidence scores which MultiR outputs are not normalized to the same scale, which makes it difficult to set a confidence threshold for selecting the candidates.
The Framework
We first train a preliminary sentence level extractor which can output confidence scores for its predictions, e.g., a maximum entropy or logistic regression model, and use this local extractor to produce local predictions.
The Framework
Now the confidence score of a relation r ∈ R being assigned to tuple t can be calculated as:
The Framework
The first component is the sum of the original confidence scores of all the selected candidates, and the second one is the sum of the maximal mention-level confidence scores of all the selected candidates.
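The two-component objective sketched here (the sum of the selected candidates' tuple-level confidence scores plus the sum of their maximal mention-level confidence scores) can be written down directly. A sketch under that reading; the candidate data layout is assumed, not taken from the paper:

```python
def objective(selected):
    """Score a set of selected relation candidates.

    Adds each candidate's tuple-level confidence and its best
    mention-level confidence, matching the two-component description.
    """
    tuple_part = sum(c["confidence"] for c in selected)
    mention_part = sum(max(c["mention_confidences"]) for c in selected)
    return tuple_part + mention_part

# Illustrative candidates with tuple- and mention-level confidences.
candidates = [
    {"confidence": 0.9, "mention_confidences": [0.4, 0.7]},
    {"confidence": 0.6, "mention_confidences": [0.5]},
]
value = objective(candidates)
```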
confidence score is mentioned in 7 sentences in this paper.
Alhelbawy, Ayman and Gaizauskas, Robert
Abstract
Each node has an initial confidence score, e.g.
Solution Graph
Each candidate has associated with it an initial confidence score, also detailed below.
Solution Graph
Initial confidence scores of all candidates for a single NE mention are normalized to sum to 1.
Solution Graph
One is a setup where a ranking based solely on different initial confidence scores is used.
confidence score is mentioned in 6 sentences in this paper.
Joshi, Aditya and Mishra, Abhijit and Senthamilselvan, Nivvedan and Bhattacharyya, Pushpak
Conclusion & Future Work
Finally, we observe a negative correlation between the classifier confidence scores and SAC, as expected.
Discussion
In other words, the goal is to show that the confidence scores of a sentiment classifier are negatively correlated with SAC.
Discussion
The confidence score of a classifier for a given text t is computed as follows:
Discussion
Table 3 presents the accuracy of the classifiers along with the correlations between the confidence score and observed SAC values.
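These snippets compute a classifier's confidence score per text and correlate it with the observed SAC values. A minimal sketch of the correlation step, taking the posterior of the predicted class as the confidence (the paper's exact confidence formula may differ, and the data below is illustrative):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Confidence = probability of the predicted class; SAC values are made up.
confidences = [0.95, 0.90, 0.70, 0.60]
sac_values = [1.0, 2.0, 4.0, 5.0]
r = pearson(confidences, sac_values)  # negative when higher SAC means lower confidence
```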
confidence score is mentioned in 4 sentences in this paper.
Wang, Chang and Fan, James
Experiments
The new KB covers all super relations and stores the knowledge in the format (relation_name, argument_1, argument_2, confidence), where the confidence is computed based on the relation detector confidence score and the relation's popularity in the corpus.
Experiments
If we detect multiple relations in the question, and the same answer is generated from more than one relation, we sum up all those confidence scores to make such answers more preferable.
Experiments
In this scenario, we sort the answers based upon the confidence scores and only consider up to p answers for each question.
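The two snippets above describe summing confidence scores when the same answer arises from several detected relations, then sorting answers by the summed score and keeping at most p per question. A sketch under those assumptions (the data layout and function name are mine):

```python
from collections import defaultdict

def rank_answers(detections, p):
    """Sum confidences per answer across relations, then keep the top-p answers."""
    totals = defaultdict(float)
    for answer, confidence in detections:
        totals[answer] += confidence  # same answer from several relations accumulates
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    return ranked[:p]

# (answer, confidence) pairs from multiple relation detections (illustrative).
hits = [("Paris", 0.6), ("Lyon", 0.5), ("Paris", 0.3)]
top = rank_answers(hits, p=1)
```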
confidence score is mentioned in 3 sentences in this paper.
Wintrode, Jonathan and Khudanpur, Sanjeev
Introduction
Confidence scores from an ASR system (which incorporate N-gram probabilities) are optimized to produce the most likely sequence of words rather than to maximize the accuracy of individual word detections.
Term Detection Re-scoring
For each term t and document d, we propose interpolating the ASR confidence score for a particular detection t_d with the top-scoring hit in d.
Term Detection Re-scoring
However, to verify that this approach is worth pursuing, we sweep a range of small α values, on the assumption that we still want to rely mostly on the ASR confidence score for term detection.
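One plausible reading of the re-scoring described above is a linear interpolation between a detection's own ASR confidence and the top-scoring hit in the same document, with a small α keeping most of the weight on the original confidence. A minimal sketch (the exact interpolation form and variable names are assumptions):

```python
def rescore(detection_conf, top_hit_conf, alpha):
    """Interpolate a term detection's ASR confidence with the document's best hit.

    A small alpha keeps most of the weight on the original ASR confidence.
    """
    return (1.0 - alpha) * detection_conf + alpha * top_hit_conf

# A weak detection gains a little mass when the same term has a strong hit nearby.
new_score = rescore(detection_conf=0.4, top_hit_conf=0.9, alpha=0.1)
```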
confidence score is mentioned in 3 sentences in this paper.