

Introduction

The Minimum Message Length (MML) principle (Wallace and Boulton, 1968, 1975; Wallace and Freeman, 1987; Wallace and Dowe, 1999) is often considered to be a Bayesian method for model class selection and (invariant) point estimation. This view apparently stems from the widely used MML87 approximation (Wallace and Freeman, 1987). It is a generalisation that does not hold for all MML approximations, since for strict MML (Wallace and Boulton, 1975; Wallace and Freeman, 1987, page 242; Wallace and Dowe, 1999), and for some other approximations (including those we present in this paper), the notion of model selection does not exist.

A more general description of MML methods is that they give an invariant criterion for selecting a countable set of weighted point estimates from a Bayesian posterior distribution/density. The derivation and definition of the "objective" functions found in the many MML approximations are motivated by ideas from information theory and Bayesian statistics. What all of the MML approximations have in common is that they attempt to estimate the codebook that minimises the expected length of a special two-part message encoding the point estimate and the data.
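To make the two-part message idea concrete, the following minimal sketch (our own illustration, not code from any published MML approximation) evaluates the length of a two-part message, -log q(theta) - log p(x | theta), over a discretised set of candidate estimates and selects the estimate minimising it. All names (thetas, prior, log_lik) and the binomial toy example are hypothetical.

```python
import numpy as np

def two_part_message_length(theta, prior_mass, log_likelihood, x):
    """Length (in nits) of a two-part message: the first part names
    the estimate (cost -log of its prior mass), the second part
    encodes the data given that estimate (cost -log likelihood)."""
    return -np.log(prior_mass) - log_likelihood(theta, x)

# Toy binomial example: 7 successes in 10 trials.
x_successes, n_trials = 7, 10

def log_lik(theta, x):
    return x * np.log(theta) + (n_trials - x) * np.log(1.0 - theta)

# A coarse grid of candidate point estimates with uniform prior mass.
thetas = np.linspace(0.05, 0.95, 10)
prior = np.full_like(thetas, 1.0 / len(thetas))

lengths = [two_part_message_length(t, q, log_lik, x_successes)
           for t, q in zip(thetas, prior)]
best = thetas[int(np.argmin(lengths))]
print(f"estimate minimising the two-part length: {best:.2f}")
```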

The MML principle complements standard Bayesian methods. It provides an invariant and "objective" means to construct an epitome, or brief summary, of a posterior distribution. Such an epitome can be used for point estimation, for human comprehension, and for fast approximation of posterior expectations. In this paper we investigate a Markov Chain Monte Carlo based methodology called Message from Monte Carlo (MMC) (Fitzgibbon, Dowe, and Allison, 2002a,b) that is being developed for constructing MML epitomes. The contribution of this paper is the refinement of the MMC method: we use more accurate approximations, extend the algorithms, and investigate the behaviour of the method on new problems.
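As a hypothetical illustration of the last use, a posterior expectation E[g(theta) | x] can be approximated cheaply from an epitome consisting of weighted point estimates (theta_i, w_i) by the weighted sum of g over the set. The estimates and weights below are made up for the demonstration and do not come from the MMC method itself.

```python
import numpy as np

# An epitome: a small weighted set of point estimates, with the
# weights assumed normalised to sum to one.
thetas = np.array([0.35, 0.55, 0.75])
weights = np.array([0.2, 0.5, 0.3])

def posterior_expectation(g, thetas, weights):
    """Fast approximation of E[g(theta) | data] from the epitome."""
    return np.dot(weights, g(thetas))

mean = posterior_expectation(lambda t: t, thetas, weights)
second = posterior_expectation(lambda t: t**2, thetas, weights)
print(f"approx. posterior mean: {mean:.3f}, "
      f"variance: {second - mean**2:.3f}")
```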

In Section 2 we briefly define the problem of constructing an epitome of a posterior distribution. We then discuss the use of the MML instantaneous codebook as an epitome with desirable properties that we describe as Bayesian Posterior Comprehension. In Section 3 we introduce the elements of the Message from Monte Carlo (MMC) methodology. In Section 4 we give an MMC algorithm suitable for unimodal likelihood functions of fixed dimension; the algorithm is demonstrated on parameter estimation in a binomial regression problem and on link selection in a generalised linear model. Section 5 briefly discusses an algorithm suitable for multimodal likelihood functions of fixed dimension. An algorithm for variable-dimension posterior distributions is given in Section 6 and demonstrated on a multiple change-point estimation problem with synthetic data. Further work is discussed in Section 7, and conclusions are drawn in Section 8.

