Index of papers in Proc. ACL 2014 that mention
  • probability distribution
Saluja, Avneesh and Hassan, Hany and Toutanova, Kristina and Quirk, Chris
Generation & Propagation
Both parallel and monolingual corpora are used to obtain these probability distributions over target phrases.
Generation & Propagation
If a source phrase is found in the baseline phrase table, it is called a labeled phrase: its conditional empirical probability distribution over target phrases (estimated from the parallel data) is used as the label.
Generation & Propagation
We then propagate by deriving a probability distribution over these target phrases using graph propagation techniques.
Introduction
We then limit the set of translation options for each unlabeled source phrase (§2.3), and using a structured graph propagation algorithm, where translation information is propagated from labeled to unlabeled phrases proportional to both source and target phrase similarities, we estimate probability distributions over translations for the unlabeled source phrases.
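As a rough sketch of this kind of structured graph propagation (illustrative only, not the authors' exact algorithm; the graph, similarities, and label distributions below are invented), unlabeled phrase nodes can repeatedly average their neighbors' translation distributions in proportion to similarity:

    import numpy as np

    # Hypothetical 3-node phrase graph: node 0 is labeled, nodes 1-2 are not.
    # similarity[i][j] is the (made-up) similarity between phrases i and j.
    similarity = np.array([[0.0, 0.8, 0.1],
                           [0.8, 0.0, 0.5],
                           [0.1, 0.5, 0.0]])
    labeled = {0: np.array([0.7, 0.3])}  # empirical distribution over 2 target phrases

    n_nodes, n_labels = similarity.shape[0], 2
    dist = np.full((n_nodes, n_labels), 1.0 / n_labels)  # uniform initialization
    for node, d in labeled.items():
        dist[node] = d

    for _ in range(50):  # iterate updates to (approximate) convergence
        for i in range(n_nodes):
            if i in labeled:
                continue  # labeled phrases keep their empirical distributions
            weighted = similarity[i] @ dist      # similarity-weighted neighbor average
            dist[i] = weighted / weighted.sum()  # renormalize to a distribution

    print(dist)  # each row is a distribution over the 2 target phrases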
probability distribution is mentioned in 7 sentences in this paper.
Hingmire, Swapnil and Chakraborti, Sutanu
Background 3.1 LDA
Draw a word: w_{d,n} ~ Multinomial(φ_{z_{d,n}}), where T is the number of topics, φ_t is the word probability distribution for topic t, θ_d is the topic probability distribution of document d, z_{d,n} is the topic assignment, and w_{d,n} is the word assignment for the nth word position in document d, respectively.
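The quoted generative step can be made concrete with a few lines of sampling code; the topic and word distributions below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    theta_d = np.array([0.6, 0.4])       # theta_d: topic distribution of document d (T = 2)
    phi = np.array([[0.5, 0.3, 0.2],     # phi_0: word distribution of topic 0
                    [0.1, 0.2, 0.7]])    # phi_1: word distribution of topic 1
    vocab = ["model", "topic", "word"]   # invented 3-word vocabulary

    z = rng.choice(2, p=theta_d)         # topic assignment z_{d,n}
    w = rng.choice(3, p=phi[z])          # word assignment w_{d,n} ~ Multinomial(phi_z)
    print(f"topic {z} -> word {vocab[w]!r}")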
Experimental Evaluation
(a) Infer a probability distribution θ_d over class labels using M_D (Equation 3).
Introduction
We use the labeled topics to find the probability distribution of each training document over the class labels.
Topic Sprinkling in LDA
We use this new model to infer the probability distribution of each unlabeled training document over the class labels.
Topic Sprinkling in LDA
While classifying a test document, its probability distribution over class labels is inferred using the TS-LDA model, and it is assigned its most probable class label.
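The final classification step amounts to an argmax over the inferred class distribution; a minimal sketch with hypothetical labels and probabilities:

    import numpy as np

    labels = ["politics", "sports", "tech"]     # hypothetical class labels
    theta_test = np.array([0.15, 0.70, 0.15])   # inferred P(class | test document)

    predicted = labels[int(np.argmax(theta_test))]
    print(predicted)  # -> sports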
probability distribution is mentioned in 5 sentences in this paper.
Baroni, Marco and Dinu, Georgiana and Kruszewski, Germán
Distributional semantic models
The word2vec toolkit implements two efficient alternatives to the standard computation of the output word probability distributions by a softmax classifier.
Distributional semantic models
Hierarchical softmax is a computationally efficient way to estimate the overall probability distribution using an output layer that is proportional to log(unigram.perplexity(W)) instead of W (for W the vocabulary size).
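A back-of-the-envelope comparison makes the saving concrete; the vocabulary size and unigram perplexity below are invented numbers:

    import math

    W = 100_000                 # hypothetical vocabulary size
    unigram_perplexity = 500.0  # hypothetical unigram perplexity of the corpus

    flat_cost = W                                      # flat softmax touches every output
    hierarchical_cost = math.log2(unigram_perplexity)  # depth of a frequency-based tree
    print(f"flat softmax: {flat_cost} outputs; "
          f"hierarchical: ~{hierarchical_cost:.1f} tree decisions")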
Introduction
Latent Dirichlet Allocation (LDA) models (Blei et al., 2003; Griffiths et al., 2007), where parameters are set to optimize the joint probability distribution of words and documents.
probability distribution is mentioned in 3 sentences in this paper.
Beltagy, Islam and Erk, Katrin and Mooney, Raymond
Background
MLNs define a probability distribution over possible worlds, where a world’s probability increases exponentially with the total weight of the logical clauses that it satisfies.
Background
Given a set of weighted logical formulas, PSL builds a graphical model defining a probability distribution over the continuous space of values of the random variables in the model.
Background
Using distance to satisfaction, PSL defines a probability distribution over all possible interpretations I of all ground atoms.
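A minimal sketch of the MLN-style distribution over possible worlds, with two invented ground atoms and clause weights: each world is scored exponentially in the total weight of the clauses it satisfies, then normalized:

    import itertools
    import math

    # Worlds are truth assignments to two invented ground atoms (p, q).
    clauses = [
        (2.0, lambda p, q: (not p) or q),  # weight 2.0: p => q
        (0.5, lambda p, q: p),             # weight 0.5: p
    ]

    worlds = list(itertools.product([False, True], repeat=2))
    scores = [math.exp(sum(w for w, c in clauses if c(p, q))) for p, q in worlds]
    Z = sum(scores)  # partition function
    for (p, q), s in zip(worlds, scores):
        print(f"P(p={p}, q={q}) = {s / Z:.3f}")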
probability distribution is mentioned in 3 sentences in this paper.
Kang, Jun Seok and Feng, Song and Akoglu, Leman and Choi, Yejin
Pairwise Markov Random Fields and Loopy Belief Propagation
Assigning y to hidden variables Y and x to observed ones X (variables with known labels, if any), our objective function is associated with the following joint probability distribution.
Pairwise Markov Random Fields and Loopy Belief Propagation
A message m_{i→j} is sent from node i to node j and captures the belief of i about j, which is the probability distribution over the labels of j; i.e.
Pairwise Markov Random Fields and Loopy Belief Propagation
what i “thinks” j’s label is, given the current label of i and the type of the edge that connects i and j. Beliefs refer to marginal probability distributions of nodes over labels; for example, b_i(y_i) denotes the belief of node i having label y_i.
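A single message update of this kind can be sketched as follows; the node potential, edge potential, and incoming messages are invented, and in the pairwise MRF described above the edge potential would depend on the edge type:

    import numpy as np

    phi_i = np.array([0.9, 0.1])        # node i's prior over its 2 labels
    psi_ij = np.array([[0.8, 0.2],      # edge potential psi(y_i, y_j)
                       [0.2, 0.8]])
    incoming = np.array([0.6, 0.4])     # product of messages into i, excluding j

    # m_{i->j}(y_j) is proportional to
    # sum over y_i of phi_i(y_i) * psi(y_i, y_j) * incoming(y_i)
    m_ij = (phi_i * incoming) @ psi_ij
    m_ij /= m_ij.sum()                  # normalize: a distribution over j's labels
    print(m_ij)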
probability distribution is mentioned in 3 sentences in this paper.
Zhang, Hui and Chiang, David
Count distributions
In the E step of EM, we compute a probability distribution (according to the current model) over all possible completions of the observed data, and the expected counts of all types, which may be fractional.
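A minimal sketch of such an E-step, with invented completions and model scores: each possible completion of the observed data is weighted by its posterior probability under the current model, yielding fractional expected counts:

    from collections import defaultdict

    # Two possible hidden completions of one observation, with unnormalized
    # probabilities under the current model parameters (invented numbers).
    completions = [
        ({"count_A": 1, "count_B": 0}, 0.3),
        ({"count_A": 0, "count_B": 1}, 0.1),
    ]

    Z = sum(score for _, score in completions)
    expected = defaultdict(float)
    for counts, score in completions:
        posterior = score / Z                 # P(completion | observed data)
        for event, c in counts.items():
            expected[event] += posterior * c  # fractional expected count

    print(dict(expected))  # {'count_A': 0.75, 'count_B': 0.25}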
Word Alignment
The IBM models and related models define probability distributions p(a, f | e, θ), which model how likely a French sentence f is to be generated from an English sentence e with word alignment a.
Word Alignment
Different models parameterize this probability distribution in different ways.
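For instance, IBM Model 1, the simplest such parameterization, assumes uniform alignment probabilities and factors p(f, a | e) into lexical translation probabilities t(f_j | e_{a_j}); the sentences and t values below are invented:

    e = ["NULL", "the", "house"]   # English sentence, position 0 is the NULL word
    f = ["la", "maison"]           # French sentence
    a = [1, 2]                     # alignment a: la -> the, maison -> house

    t = {("la", "the"): 0.4, ("maison", "house"): 0.5}  # invented t(f | e)
    eps = 1.0

    l, m = len(e) - 1, len(f)
    p = eps / (l + 1) ** m         # uniform probability over alignments
    for j, fj in enumerate(f):
        p *= t[(fj, e[a[j]])]      # lexical translation probabilities
    print(p)                       # = 1/9 * 0.4 * 0.5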
probability distribution is mentioned in 3 sentences in this paper.