Wednesday, November 7, 2007

Simple Illustration of Bayes Factors

Suppose we collect the number of accidents in a year for 30 Belgian drivers. We assume that y1, ..., y30 are independent Poisson(lambda), where lambda is the average number of accidents per year for all Belgian drivers.

Consider the following four priors for lambda:

PRIOR 1: lambda is gamma with shape 3.8 and rate 8.1. This prior reflects the belief that the quartiles of lambda are 0.29 and 0.60.

PRIOR 2: lambda is gamma with shape 3.8 and rate 4. The mean of this prior is 0.95, so it reflects a belief that lambda is close to 1.

PRIOR 3: lambda is gamma with shape 0.38 and rate 0.81. This prior has the same mean as PRIOR 1, but it is much more diffuse, reflecting weaker information about lambda.

PRIOR 4: log lambda is normal with mean -0.87 and standard deviation 0.60. On the surface, this looks different from the previous priors, but this prior also matches the belief that the quartiles of lambda are 0.29 and 0.60.
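
As a quick check on these statements, one can verify the quartiles of priors 1 and 4 with R's built-in quantile functions. A minimal sketch; both calls should return values near the stated quartiles of 0.29 and 0.60:

qgamma(c(.25,.75),shape=3.8,rate=8.1)      # quartiles of the gamma(3.8, 8.1) prior
qlnorm(c(.25,.75),meanlog=-0.87,sdlog=0.60) # quartiles of lambda when log lambda is normal(-0.87, 0.60)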

Suppose we observe some data -- for the 30 drivers, 22 had no accidents, 7 had exactly one accident, and 1 had two accidents. The total number of accidents is 7(1) + 1(2) = 9, so the likelihood is proportional to

LIKE = exp(-30 lambda) lambda^9
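
For later use, here is one way to enter these counts into the data vector d that appears in the code further below:

d=c(rep(0,22),rep(1,7),2)   # 22 zeros, 7 ones, 1 two
sum(d)                      # total accidents: 9
length(d)                   # number of drivers: 30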

In the graphs below, we display the likelihood in blue and the four priors in red. Here's the R code to produce one of the graphs. We simulate draws from the likelihood and the prior and display density estimates.

like=rgamma(10000,shape=10,rate=30)   # the likelihood, viewed as a density in lambda, is gamma(10, 30)
p1=rgamma(10000,shape=3.8,rate=8.1)   # draws from PRIOR 1
plot(density(p1),xlim=c(0,3),ylim=c(0,4),
main="PRIOR 1",xlab="LAMBDA",lwd=3,col="red",col.main="red")
lines(density(like),lwd=3,col="blue")
text(1.2,3,"LIKELIHOOD",col="blue")
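
The same recipe handles the other priors; for PRIOR 4, for instance, lambda can be simulated by exponentiating normal draws. A sketch:

p4=exp(rnorm(10000,mean=-0.87,sd=0.60))   # lambda = exp(log lambda) under PRIOR 4
plot(density(p4),xlim=c(0,3),ylim=c(0,4),
main="PRIOR 4",xlab="LAMBDA",lwd=3,col="red",col.main="red")
lines(density(like),lwd=3,col="blue")
text(1.2,3,"LIKELIHOOD",col="blue")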

Note that Priors 1 and 4 are pretty consistent with the likelihood. There is some conflict between Prior 2 and the likelihood, and Prior 3 is pretty flat relative to the likelihood.

We can compare the four models by use of Bayes factors. We first need a function that computes the log posterior for each prior. There is already a function logpoissgamma in the LearnBayes package that computes the log posterior of log lambda with Poisson sampling and a gamma prior. (This can be used for priors 1, 2, and 3.) The function logpoissnormal can be used for Poisson sampling and a normal prior on log lambda (prior 4). Then we use the function laplace to approximate the value of the log marginal (predictive) density m(y).

For example, here's the code to compute the log marginal density for prior 1.

library(LearnBayes)
datapar=list(data=d,par=c(3.8,8.1))   # data vector and gamma(3.8, 8.1) prior parameters
laplace(logpoissgamma,.5,datapar)$int
[1] 0.4952788

So log m(y) for prior 1 is about 0.5.

We do this for each prior and get the following values:

model      log m(y)
-------------------
PRIOR 1      0.495
PRIOR 2     -0.729
PRIOR 3     -0.454
PRIOR 4      0.558
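
For reference, here is a sketch of how all four values can be computed in one pass, assuming d is the data vector from above and that logpoissnormal takes the mean and standard deviation of log lambda in its par component (analogous to logpoissgamma):

laplace(logpoissgamma,.5,list(data=d,par=c(3.8,8.1)))$int     # PRIOR 1
laplace(logpoissgamma,.5,list(data=d,par=c(3.8,4)))$int       # PRIOR 2
laplace(logpoissgamma,.5,list(data=d,par=c(0.38,0.81)))$int   # PRIOR 3
laplace(logpoissnormal,.5,list(data=d,par=c(-0.87,0.60)))$int # PRIOR 4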

We can use this output to compute Bayes factors. Since a Bayes factor is a ratio of marginal densities, the Bayes factor in support of PRIOR 1 over PRIOR 2 is

BF_12 = m_1(y) / m_2(y) = exp(0.495 - (-0.729)) = exp(1.224) = 3.4

This means that the data are about 3.4 times as likely under the model with PRIOR 1 as under the model with PRIOR 2. This is not surprising, given the conflict between the likelihood and PRIOR 2 in the graph.
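
In R, this and every other pairwise Bayes factor can be read off the table of log marginal densities. A small sketch using the values above:

logm=c(0.495,-0.729,-0.454,0.558)   # log m(y) for priors 1-4
exp(logm[1]-logm[2])                # BF_12, about 3.4
exp(outer(logm,logm,"-"))           # matrix of all pairwise Bayes factors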
