Saturday, November 24, 2007

A Comparison of Two MCMC Methods

In the last posting, I illustrated a Gibbs sampling algorithm for simulating from a normal/normal hierarchical model. This method was based on successive substitution sampling from the conditional posterior densities of theta1, ..., thetak, mu, and tau2. A second sampling method is outlined in Chapter 7 of BCUR. In this method, which we'll call "Exact MCMC," one integrates out the first-stage parameters theta1, ..., thetak and uses a random-walk Metropolis algorithm to simulate from the marginal posterior of (mu, log tau).

We'll use a few pictures to compare these two methods, specifically in learning about the variance hyperparameter tau. Here's the exact MCMC method:

1. We write a function defining the log posterior of mu and log tau.

2. We simulate from the posterior of (mu, log tau) using the function rwmetrop.R in the LearnBayes package, choosing a suitable proposal density from the output of the laplace.R function. The acceptance rate of this random-walk algorithm is 20%. We sample for 10,000 iterations, saving the draws of log tau in a vector. (A code sketch of both steps appears after this list.)
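Here is a minimal R sketch of these two steps. The data (group estimates ybar with known sampling variances sigma2) and the uniform prior on (mu, log tau) are assumptions for illustration; the data used in the post are not shown here.

library(LearnBayes)

# Hypothetical data: observed group means and known sampling variances
ybar   <- c(28, 8, -3, 7, -1, 1, 18, 12)
sigma2 <- c(15, 10, 16, 11, 9, 11, 10, 18)^2

# Step 1: log posterior of (mu, log tau) with theta1, ..., thetak
# integrated out; marginally, ybar_j ~ N(mu, sigma2_j + tau^2).
# A uniform prior on (mu, log tau) is assumed.
logpost <- function(theta, dat) {
  mu  <- theta[1]
  tau <- exp(theta[2])
  sum(dnorm(dat[, 1], mu, sqrt(dat[, 2] + tau^2), log = TRUE))
}

# Step 2: find the posterior mode and curvature, then run
# random-walk Metropolis with a proposal based on that output
dat  <- cbind(ybar, sigma2)
fit  <- laplace(logpost, c(mean(ybar), 2), dat)
prop <- list(var = fit$var, scale = 2)
mcmc <- rwmetrop(logpost, prop, fit$mode, 10000, dat)
mcmc$accept                      # acceptance rate (about 20% in the post)
logtau.exact <- mcmc$par[, 2]    # saved draws of log tau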

We then run the Gibbs sampler for 10,000 iterations, also saving the draws of log tau.
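For comparison, here is a minimal sketch of the Gibbs sampler under one common set of assumptions: a flat prior on mu and a vague inverse gamma prior on tau2. The conditional draws (normal for each theta_j and for mu, inverse gamma for tau2) follow the standard conjugate forms, but the exact priors used in the earlier posting may differ.

# Gibbs sampling via successive substitution (sketch; the vague
# IG(a, b) prior on tau2 and flat prior on mu are assumptions)
k <- length(ybar)
a <- b <- 0.001                 # vague inverse gamma hyperparameters
m <- 10000
mu   <- mean(ybar)
tau2 <- var(ybar)
logtau.gibbs <- numeric(m)
for (i in 1:m) {
  # theta_j | mu, tau2, y : precision-weighted normal
  prec  <- 1 / sigma2 + 1 / tau2
  theta <- rnorm(k, (ybar / sigma2 + mu / tau2) / prec, sqrt(1 / prec))
  # mu | theta, tau2 : normal centered at the mean of the thetas
  mu <- rnorm(1, mean(theta), sqrt(tau2 / k))
  # tau2 | theta, mu : inverse gamma
  tau2 <- 1 / rgamma(1, a + k / 2, b + sum((theta - mu)^2) / 2)
  logtau.gibbs[i] <- log(sqrt(tau2))
}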

DENSITY PLOTS. We first compare density estimates of the simulated samples of log tau using the two methods. The two estimates seem pretty similar. The density estimate for the Gibbs sampling draws looks smoother, but that doesn't mean this sample is a better estimate of the posterior of log tau.
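In R, the overlaid density estimates can be drawn like this (using the vectors of draws from the sketches above):

# Compare density estimates of log tau from the two methods
plot(density(logtau.exact), main = "Posterior of log tau",
     xlab = "log tau", lwd = 2)
lines(density(logtau.gibbs), lty = 2, lwd = 2)
legend("topright", c("Exact MCMC", "Gibbs"), lty = 1:2, lwd = 2)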



TRACE PLOTS. We next compare trace plots of the two sets of simulated draws. They look pretty similar, but there are differences on closer inspection. The draws from the Gibbs sampling run seem to wander more, giving them more of a "snake-like" appearance.
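The trace plots can be produced with:

# Trace plots of the simulated draws of log tau
par(mfrow = c(2, 1))
plot(logtau.exact, type = "l", main = "Exact MCMC", ylab = "log tau")
plot(logtau.gibbs, type = "l", main = "Gibbs sampling", ylab = "log tau")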



AUTOCORRELATION PLOTS. An autocorrelation plot is a graph of the autocorrelations

corr(theta(j), theta(j-g))

graphed against the lag g. For most MCMC runs, the stream of simulated draws will show positive autocorrelation, but hopefully the autocorrelation values decrease quickly as the lag increases. It is pretty obvious that the lag autocorrelations drop off more slowly for method 2 (Gibbs sampling); in contrast, the lag autocorrelations for method 1 (Exact MCMC) decrease to zero within lags 1 to 15.
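The acf function in R produces these plots directly:

# Autocorrelation plots for the two sets of draws
par(mfrow = c(2, 1))
acf(logtau.exact, main = "Exact MCMC")
acf(logtau.gibbs, main = "Gibbs sampling")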


The moral of the story is that the exact MCMC method shows better mixing than Gibbs sampling in this example. This means that, for a given simulation sample size (here 10,000), one will get more accurate estimates of the posterior of tau using the exact MCMC method.
