Index of papers in Proc. ACL 2013 that mention
  • probability distribution
Ogura, Yukari and Kobayashi, Ichiro
Experiment
Furthermore, the hyper-parameters for the topic probability distribution and the word probability distribution in LDA are α = 0.5 and β = 0.5, respectively.
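A minimal sketch of training LDA with these hyper-parameters, assuming gensim as the toolkit (the paper does not name one); the toy documents and the two-topic setting are hypothetical, and gensim calls β "eta".

```python
# Sketch only: gensim, the toy corpus, and num_topics=2 are assumptions,
# not the authors' setup; alpha/eta mirror the alpha = 0.5, beta = 0.5 above.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["topic", "model", "word"], ["word", "distribution", "topic"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               alpha=0.5, eta=0.5, random_state=0)

# theta: per-document topic probability distributions
theta = [lda.get_document_topics(bow, minimum_probability=0.0) for bow in corpus]
print(theta)
```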
Experiment
Here, in the case of clustering the documents based on the topic probability distribution by LDA, the topic distribution over documents θ is changed in every estimation.
Experiment
To measure the latent similarity among documents, we construct topic vectors from the topic probability distribution and adopt the Jensen-Shannon divergence to measure it; in the case of using document vectors, we adopt cosine similarity.
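A small NumPy illustration of the two similarity measures named above; the three-topic vectors are hypothetical, and only the choice of Jensen-Shannon divergence for topic vectors and cosine similarity for document vectors comes from the paper.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two topic probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cosine_similarity(u, v):
    """Cosine similarity between two document vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

p, q = [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]   # hypothetical topic vectors
print(js_divergence(p, q), cosine_similarity(p, q))
```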
Techniques for text classification
After obtaining a collection of refined documents for classification, we adopt LDA to estimate the latent topic probability distributions over the target documents and use them for clustering.
Techniques for text classification
In this study, we use the topic probability distribution over documents to make a topic vector for each document, and then calculate the similarity among documents.
Techniques for text classification
Here, N is the number of all words in the target documents, w_mn is the n-th word in the m-th document, θ is the topic probability distribution for the documents, and φ is the word probability distribution for every topic.
probability distribution is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Carpuat, Marine and Daume III, Hal and Henry, Katharine and Irvine, Ann and Jagarlamudi, Jagadeesh and Rudinger, Rachel
Experiments
While the SENSESPOTTING task has MT utility in suggesting which new domain words demand a new translation, the MOSTFREQSENSECHANGE task has utility in suggesting which words demand a new translation probability distribution when shifting to a new domain.
New Sense Indicators
Second, given a source word s, we use this classifier to compute the probability distribution of target translations (p(t|s)).
New Sense Indicators
Subsequently, we use this probability distribution to define new features for the SENSESPOTTING task.
New Sense Indicators
Entropy is the entropy of the probability distribution: −Σ_t p(t|s) log p(t|s).
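A short sketch of this entropy feature; the source word and its candidate translation probabilities below are hypothetical, and only the formula −Σ_t p(t|s) log p(t|s) is taken from the paper.

```python
import numpy as np

def translation_entropy(p_t_given_s, eps=1e-12):
    """Entropy of the translation distribution p(t|s): -sum_t p(t|s) log p(t|s)."""
    p = np.asarray(list(p_t_given_s.values()), float)
    return float(-np.sum(p * np.log(p + eps)))

# hypothetical distribution over target translations of one source word
p_t_given_s = {"banque": 0.80, "rive": 0.15, "banc": 0.05}
print(translation_entropy(p_t_given_s))
```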
probability distribution is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Metallinou, Angeliki and Bohus, Dan and Williams, Jason
Generative state tracking
(1996)) models the conditional probability distribution of the label y given features x, p(y|x), via an exponential model of the form:
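A minimal sketch of such an exponential (log-linear) model, p(y|x) = exp(w·f(x,y)) / Σ_{y'} exp(w·f(x,y')); the labels, feature vectors, and weight vector are hypothetical, since the snippet gives only the general form.

```python
import numpy as np

def maxent_distribution(weights, features):
    """p(y|x) = exp(w . f(x, y)) / sum_y' exp(w . f(x, y')) over the candidate labels."""
    labels = list(features)
    scores = np.array([weights @ features[y] for y in labels])
    scores -= scores.max()                        # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(labels, probs))

# hypothetical feature vectors f(x, y) and learned weights w
features = {"label_a": np.array([1.0, 0.0, 1.0]),
            "label_b": np.array([0.0, 1.0, 1.0])}
weights = np.array([0.5, -0.2, 0.1])
print(maxent_distribution(weights, features))
```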
Introduction
The task is to assign a probability distribution over the G dialog state hypotheses, plus a meta-hypothesis which indicates that none of the G hypotheses is correct.
Introduction
Also note that the dialog state tracker is not predicting the contents of the dialog state hypotheses; the dialog state hypotheses contents are given by some external process, and the task is to predict a probability distribution over them, where the probability assigned to a hypothesis indicates the probability that it is correct.
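A toy sketch of the required output: raw tracker scores for G hypotheses mapped to a probability distribution over those hypotheses plus a "none of them is correct" meta-hypothesis. The softmax mapping and the fixed score for the meta-hypothesis are assumptions, not any of the trackers evaluated in the paper.

```python
import numpy as np

def track_distribution(hypothesis_scores, none_score=0.0):
    """Map raw scores for G dialog-state hypotheses to a distribution over the
    hypotheses plus a 'none of the above' meta-hypothesis (fixed score none_score)."""
    scores = np.asarray(list(hypothesis_scores) + [none_score], float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs[:-1], probs[-1]   # per-hypothesis probabilities, P(none is correct)

hyp_probs, p_none = track_distribution([2.1, 0.3, -0.5])   # G = 3 hypotheses
print(hyp_probs, p_none)
```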
Introduction
Dialog state tracking can be seen as analogous to assigning a probability distribution over items on an ASR N-best list given speech input and the recognition output, including the contents of the N-best list.
probability distribution is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Chen, Boxing and Kuhn, Roland and Foster, George
Vector space model adaptation
Thus, we get the probability distribution of a phrase pair or the phrase pairs in the dev data across all subcorpora:
Vector space model adaptation
To further improve the similarity score, we apply absolute discounting smoothing when calculating the probability distributions p_i(f, e).
Vector space model adaptation
We carry out the same smoothing for the probability distributions p_i(dev).
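A minimal sketch of absolute discounting applied to one such distribution across sub-corpora; the counts, the discount d = 0.5, and spreading the freed mass uniformly over zero-count bins are assumptions, since the snippets only say that absolute discounting smoothing is used.

```python
import numpy as np

def absolute_discounting(counts, d=0.5):
    """Smooth counts into a probability distribution by subtracting a fixed
    discount d from every non-zero count and giving the freed mass to zero bins."""
    counts = np.asarray(counts, float)
    total = counts.sum()
    nonzero = counts > 0
    zeros = (~nonzero).sum()
    probs = np.where(nonzero, counts - d, 0.0) / total
    if zeros:
        probs[~nonzero] = d * nonzero.sum() / (total * zeros)
    else:
        probs /= probs.sum()                      # no zero bins: just renormalise
    return probs

# hypothetical counts of one phrase pair across four sub-corpora
print(absolute_discounting([7, 2, 1, 0]))
```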
probability distribution is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Silberer, Carina and Ferrari, Vittorio and Lapata, Mirella
Attribute-based Classification
For each image i_w ∈ I_w of concept w, we output an F-dimensional vector containing prediction scores score_a(i_w) for attributes a = 1, ..., F. We transform these attribute vectors into a single vector p_w ∈ [0,1]^{1×F} by computing the centroid of all vectors for concept w. The vector is normalized to obtain a probability distribution over attributes given w:
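A small sketch of that step: average the per-image attribute score vectors for one concept and normalise the centroid to a probability distribution over the F attributes. The score values are hypothetical and assumed non-negative.

```python
import numpy as np

def attribute_distribution(score_vectors):
    """Centroid of per-image attribute score vectors for a concept, normalised
    so the F entries sum to one (a distribution over attributes given w)."""
    scores = np.asarray(score_vectors, float)     # shape: (num_images, F)
    centroid = scores.mean(axis=0)
    return centroid / centroid.sum()

# hypothetical prediction scores for three images of one concept (F = 4)
images = [[0.9, 0.1, 0.4, 0.0],
          [0.8, 0.2, 0.5, 0.1],
          [0.7, 0.0, 0.6, 0.0]]
print(attribute_distribution(images))
```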
Attribute-based Semantic Models
Let P ∈ [0,1]^{N×F} denote a visual matrix, representing a probability distribution over visual attributes for each word.
Experimental Setup
We can thus compute the probability distribution over associates for each cue.
probability distribution is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tanigaki, Koichi and Shiba, Mitsuteru and Munaka, Tatsuji and Sagisaka, Yoshinori
Simultaneous Optimization of All-words WSD
Shadow thickness and surface height represent the composite probability distribution of all the twelve kernels.
Simultaneous Optimization of All-words WSD
The cluster centers are located at the means of hypotheses, including miscellaneous unintended alternatives; thus the estimated probability distribution is, roughly speaking, offset toward the center of WordNet, which is not what we want.
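A rough sketch of a composite probability distribution formed by averaging Gaussian kernels placed at sense-hypothesis points, in the spirit of the description above; the two-dimensional space, the kernel centers, and the bandwidth are hypothetical, and the paper's actual context-to-sense mapping space and kernels may differ.

```python
import numpy as np

def composite_density(x, centers, bandwidth=1.0):
    """Composite probability density at point x: the mean of Gaussian kernels
    centred at the given hypothesis points."""
    x = np.asarray(x, float)
    centers = np.asarray(centers, float)
    d2 = ((centers - x) ** 2).sum(axis=1)
    norm = (2 * np.pi * bandwidth ** 2) ** (centers.shape[1] / 2)
    return float(np.mean(np.exp(-d2 / (2 * bandwidth ** 2)) / norm))

# hypothetical 2-D context-to-sense space with three hypothesis kernels
centers = [[0.0, 0.0], [1.0, 0.5], [0.2, -0.3]]
print(composite_density([0.1, 0.1], centers))
```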
Smoothing Model
Figure 1: Proposed probability distribution model for context-to-sense mapping space.
probability distribution is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Zhu, Zede and Li, Miao and Chen, Lei and Yang, Zhenxin
Bilingual LDA Model
denotes the vocabulary probability distribution in the topic k; M denotes the number of documents; θ_m
Bilingual LDA Model
denotes the topic probability distribution in the document m; N_m denotes the length of m; me
Introduction
Preiss (2012) transformed the source-language topic model to the target language and classified the probability distributions of topics in the same language; the shortcoming is that the effect of model translation seriously hampers the quality of the comparable corpora.
probability distribution is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: