Index of papers in Proc. ACL 2011 that mention
  • probabilistic model
Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher
Introduction
The semantic component of our model learns word vectors via an unsupervised probabilistic model of documents.
Our Model
To capture semantic similarities among words, we derive a probabilistic model of documents which learns word representations.
Our Model
We build a probabilistic model of a document using a continuous mixture distribution over words indexed by a multidimensional random variable θ.
Our Model
Equation 1 resembles the probabilistic model of LDA (Blei et al., 2003), which models documents as mixtures of latent topics.
probabilistic model is mentioned in 4 sentences in this paper.
Tan, Ming and Zhou, Wenli and Zheng, Lei and Wang, Shaojun
Composite language model
A PLSA model (Hofmann, 2001) is a generative probabilistic model of word-document co-occurrences using the bag-of-words assumption described as follows: (i) choose a document d with probability p(d); (ii) SEMANTIZER: select a semantic class g with probability p(g|d); and (iii) WORD-PREDICTOR: pick a word w with probability p(w|g).
Composite language model
Since only one pair of (d, w) is being observed, as a result, the joint probability model is a mixture of log-linear models with the expression p(d, w) = p(d) Σ_g p(w|g) p(g|d). Typically, the number of documents and vocabulary size are much larger than the size of latent semantic class variables.
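The three-step PLSA generative process and the mixture expression above can be sketched numerically. The toy dimensions and Dirichlet-sampled parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative PLSA joint probability: p(d, w) = p(d) * sum_g p(w|g) * p(g|d)
# Toy sizes (hypothetical): 3 documents, 5 vocabulary words, 2 latent classes.
rng = np.random.default_rng(0)
n_docs, n_words, n_classes = 3, 5, 2

p_d = np.full(n_docs, 1.0 / n_docs)                            # p(d): uniform over documents
p_g_given_d = rng.dirichlet(np.ones(n_classes), size=n_docs)   # p(g|d): one row per document
p_w_given_g = rng.dirichlet(np.ones(n_words), size=n_classes)  # p(w|g): one row per class

# Joint p(d, w): marginalize out the latent semantic class g.
p_dw = p_d[:, None] * (p_g_given_d @ p_w_given_g)

assert np.isclose(p_dw.sum(), 1.0)  # a proper joint distribution sums to 1
```

Note how the parameter count, as the excerpt observes, is dominated by p(w|g) and p(g|d) tables whose document and vocabulary dimensions far exceed the number of latent classes.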
Training algorithm
The TAGGER and CONSTRUCTOR are conditional probabilistic models of the type p(u|z_1, …, z_n) where u, z_1, …, z_n belong to a mixed set of words, POS tags, NT tags, CONSTRUCTOR actions (u only), and z_1, …, z_n form a linear Markov chain.
Training algorithm
The WORD-PREDICTOR is, however, a conditional probabilistic model p(w|w_{-n+1}^{-1} h_{-m}^{-1} g) where there are three kinds of context, w_{-n+1}^{-1}, h_{-m}^{-1} and g, each forming a linear Markov chain.
probabilistic model is mentioned in 4 sentences in this paper.
Neubig, Graham and Watanabe, Taro and Sumita, Eiichiro and Mori, Shinsuke and Kawahara, Tatsuya
Abstract
This allows for a completely probabilistic model that is able to create a phrase table that achieves competitive accuracy on phrase-based machine translation tasks directly from unaligned sentence pairs.
Hierarchical ITG Model
By doing so, we are able to do away with heuristic phrase extraction, creating a fully probabilistic model for phrase probabilities that still yields competitive results.
Related Work
A generative probabilistic model where longer units are built through the binary combination of shorter units was proposed by de Marcken (1996) for monolingual word segmentation using the minimum description length (MDL) framework.
probabilistic model is mentioned in 3 sentences in this paper.