Index of papers in Proc. ACL 2009 that mention
  • Gibbs sampler
Mochihashi, Daichi and Yamada, Takeshi and Ueda, Naonori
Abstract
In this paper, we propose a new Bayesian model for fully unsupervised word segmentation and an efficient blocked Gibbs sampler combined with dynamic programming for inference.
Experiments
Since our algorithm converges rather fast, we ran the Gibbs sampler of trigram NPYLM for 200 iterations to obtain the results in Table 1.
Experiments
In all cases we removed all whitespaces to yield raw character strings for inference, and set L = 4 for Chinese and L = 8 for Japanese to run the Gibbs sampler for 400 iterations.
Experiments
Notice that analyzing test data is not easy for the character-wise Gibbs sampler of previous work.
Inference
To find the hidden word segmentation w of a string s = c1 ... cN, which is equivalent to the vector of binary hidden variables z = z1 ... zN, the simplest approach is to build a Gibbs sampler that randomly selects a character ci and draws a binary decision zi as to whether there is a word boundary, and then updates the language model according to the new segmentation (Goldwater et al., 2006; Xu et al., 2008).
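As a rough, hedged illustration of this character-wise sampler, the sketch below resamples each boundary indicator in turn under a toy unigram word model; ALPHA, the geometric base measure, and all helper names are illustrative assumptions, not the model of Goldwater et al. (2006) or of this paper.

    import random
    from collections import Counter

    ALPHA = 10.0  # assumed concentration parameter of the toy unigram model

    def base_prob(word):
        # Crude spelling model: uniform characters with a geometric length penalty.
        return (1.0 / 27) ** len(word) * 0.5 ** len(word)

    def word_prob(word, counts, total):
        return (counts[word] + ALPHA * base_prob(word)) / (total + ALPHA)

    def segment(chars, z):
        # Cut chars after every position i where z[i] = 1.
        out, start = [], 0
        for i, b in enumerate(z):
            if b:
                out.append(chars[start:i + 1])
                start = i + 1
        out.append(chars[start:])
        return out

    def neighbours(chars, z, i):
        # The word(s) spanning positions i and i+1 under the current boundaries.
        left = i
        while left > 0 and z[left - 1] == 0:
            left -= 1
        right = i + 1
        while right < len(z) and z[right] == 0:
            right += 1
        if z[i]:
            return [chars[left:i + 1], chars[i + 1:right + 1]]
        return [chars[left:right + 1]]

    def gibbs_sweep(chars, z, counts):
        for i in random.sample(range(len(z)), len(z)):
            for w in neighbours(chars, z, i):       # remove the affected word(s)
                counts[w] -= 1
            total = sum(counts.values())
            z[i] = 1                                # boundary hypothesis
            p_split = 1.0
            for w in neighbours(chars, z, i):
                p_split *= word_prob(w, counts, total)
            z[i] = 0                                # no-boundary hypothesis
            p_merge = word_prob(neighbours(chars, z, i)[0], counts, total)
            z[i] = 1 if random.random() < p_split / (p_split + p_merge) else 0
            for w in neighbours(chars, z, i):       # update the model with the new choice
                counts[w] += 1

    # Usage: counts must start in sync with the boundary vector z.
    chars = "thecatsatonthemat"
    z = [random.randint(0, 1) for _ in range(len(chars) - 1)]
    counts = Counter(segment(chars, z))
    for _ in range(200):
        gibbs_sweep(chars, z, counts)
    print(segment(chars, z))

The split hypothesis above ignores the small dependence between the two new words; a full sampler would add the first word before scoring the second.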
Inference
4.1 Blocked Gibbs sampler
Inference
Instead, we propose a sentence-wise Gibbs sampler of word segmentation using efficient dynamic programming, as shown in Figure 3.
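The forward-filtering / backward-sampling idea behind such a blocked sampler can be sketched as below, simplified to a unigram word probability word_prob and a maximum word length L (both placeholder assumptions; the paper's model is an n-gram NPYLM, and its Figure 3 is not reproduced here).

    import random

    def sample_segmentation(chars, word_prob, L):
        N = len(chars)
        # Forward pass: alpha[t] = total probability of chars[:t], summed over all
        # segmentations whose words are at most L characters long.
        alpha = [0.0] * (N + 1)
        alpha[0] = 1.0
        for t in range(1, N + 1):
            for k in range(1, min(L, t) + 1):
                alpha[t] += word_prob(chars[t - k:t]) * alpha[t - k]
        # Backward pass: sample the final word length, then recurse leftward.
        words, t = [], N
        while t > 0:
            lengths = list(range(1, min(L, t) + 1))
            weights = [word_prob(chars[t - k:t]) * alpha[t - k] for k in lengths]
            k = random.choices(lengths, weights=weights)[0]
            words.append(chars[t - k:t])
            t -= k
        return list(reversed(words))

    # Illustrative use with a made-up word distribution:
    toy = {"the": 0.3, "cat": 0.2, "sat": 0.2}
    wp = lambda w: toy.get(w, 1e-4 * 0.5 ** len(w))
    print(sample_segmentation("thecatsat", wp, L=4))

In a full blocked Gibbs sweep, each sentence's current words would be removed from the language model, a fresh segmentation drawn as above, and the new words added back.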
Introduction
However, they are still naïve with respect to word spellings, and the inference is very slow owing to inefficient Gibbs sampling.
Introduction
Section 4 describes an efficient blocked Gibbs sampler that leverages dynamic programming for inference.
Gibbs sampler is mentioned in 11 sentences in this paper.
Reisinger, Joseph and Pasca, Marius
Experimental Setup 4.1 Data Analysis
Per-Node Distribution: In stDA and ssLDA, attribute rankings can be constructed directly for each WN concept c by computing the likelihood of attribute w attaching to c, L(c|w) = p(w|c), averaged over all Gibbs samples (discarding a fixed number of samples for burn-in).
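A hedged sketch of that averaging step, assuming each retained Gibbs sample has already been summarized as a dict mapping (concept, attribute) pairs to p(w|c); the function and argument names are hypothetical.

    from collections import defaultdict

    def rank_attributes(samples, burn_in=100):
        kept = samples[burn_in:]                  # discard burn-in samples
        avg = defaultdict(float)
        for sample in kept:                       # average p(w|c) over the kept samples
            for (c, w), p in sample.items():
                avg[(c, w)] += p / len(kept)
        per_concept = defaultdict(list)
        for (c, w), p in avg.items():
            per_concept[c].append((w, p))
        # Highest average likelihood first, per concept.
        return {c: sorted(ws, key=lambda x: -x[1]) for c, ws in per_concept.items()}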
Hierarchical Topic Models 3.1 Latent Dirichlet Allocation
This distribution can be approximated efficiently using Gibbs sampling.
Hierarchical Topic Models 3.1 Latent Dirichlet Allocation
An efficient Gibbs sampling procedure is given in (Blei et al., 2003a).
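For concreteness, a generic collapsed Gibbs sweep for LDA is sketched below; this is the standard textbook form, not necessarily the exact procedure referenced above. ndk, nkw, and nk are the usual document-topic, topic-word, and topic totals, assumed to be initialized consistently with the topic assignments z.

    import random

    def lda_gibbs_sweep(docs, z, ndk, nkw, nk, alpha, beta, K, V):
        # docs[d][i] is the word id of token i in document d; z[d][i] is its current topic.
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1   # remove the token's assignment
                # Conditional posterior over topics, given all other assignments.
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(K)]
                k = random.choices(range(K), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1   # restore counts with the new topic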
Results
Precision was manually evaluated relative to 23 concepts chosen for broad coverage. Table 1 shows precision at n and the Mean Average Precision (MAP); in all LDA-based models, the Bayes average posterior is taken over all Gibbs samples.
Results
Inset plots show the log-likelihood of each Gibbs sample, indicating convergence except in the case of nCRP.
Gibbs sampler is mentioned in 5 sentences in this paper.
Snyder, Benjamin and Naseem, Tahira and Barzilay, Regina
Experimental setup
This model uses the same inference procedure as our bilingual model (Gibbs sampling).
Experimental setup
We also reimplemented the original EM version of CCM and found virtually no difference in performance when using EM or Gibbs sampling.
Model
We use Gibbs sampling (Hastings, 1970) to draw trees for each sentence conditioned on those drawn for all other sentences.
Model
This use of a tractable proposal distribution and acceptance ratio is known as the Metropolis-Hastings algorithm and it preserves the convergence guarantee of the Gibbs sampler (Hastings, 1970).
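A minimal sketch of that accept/reject step, with the target density p and the proposal q left as placeholder callables; log_q(a, b) is assumed to return log q(a | b).

    import math
    import random

    def mh_step(current, propose, log_p, log_q):
        proposal = propose(current)
        # Standard Metropolis-Hastings ratio: p(x') q(x | x') / (p(x) q(x' | x)).
        log_ratio = (log_p(proposal) + log_q(current, proposal)) \
                  - (log_p(current) + log_q(proposal, current))
        if random.random() < math.exp(min(0.0, log_ratio)):
            return proposal   # accept the proposed tree
        return current        # reject and keep the current tree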
Gibbs sampler is mentioned in 4 sentences in this paper.