
7. CONCLUSIONS

Bayesian probability theory offers a consistent framework for dealing with uncertainty in a wide range of situations, from parameter inference to model comparison, from prediction to optimization. The notion of probability as a degree of belief is far more general than the restricted view of probability as frequency, and it applies equally well to repeatable experiments and to one-off situations. We have seen how Bayes' theorem provides a unique prescription for updating our state of knowledge in the light of the available data, and how the basic laws of probability can be used to incorporate all sorts of uncertainty into our inferences, including noise (measurement uncertainty), systematic errors (hyper-parameters), imperfect knowledge of the system (nuisance parameters) and modelling uncertainty (model comparison and model averaging). The same laws apply equally well to the problem of prediction, and there is considerable potential for a systematic exploration of experiment optimization and Bayesian decision theory (e.g., given what we know about the Universe and our theoretical models, what are the best observations to achieve a certain scientific goal?).
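As a concrete illustration of how these laws combine measurement noise and nuisance parameters, the short sketch below applies Bayes' theorem on a grid to a toy Gaussian datum with an unknown calibration offset, which is then marginalized over. The setup and all numerical values are illustrative assumptions, not results from the text.

```python
# Minimal sketch (toy example, assumed numbers): Bayes' theorem on a grid for
# a datum d = mu + b + noise, where mu is the parameter of interest and b is
# a nuisance parameter (an unknown calibration offset).
import numpy as np

d, sigma = 1.2, 0.5               # assumed datum and noise level
mu = np.linspace(-3.0, 5.0, 400)  # parameter of interest
b = np.linspace(-2.0, 2.0, 200)   # nuisance parameter
MU, B = np.meshgrid(mu, b, indexing="ij")

# Likelihood p(d | mu, b) for Gaussian noise
like = np.exp(-0.5 * ((d - MU - B) / sigma) ** 2)

# Priors: broad Gaussian on mu, tighter Gaussian on the offset b
prior = np.exp(-0.5 * (MU / 2.0) ** 2) * np.exp(-0.5 * (B / 0.3) ** 2)

# Bayes' theorem: posterior proportional to likelihood times prior,
# then marginalize over the nuisance parameter b
post = like * prior
dmu = mu[1] - mu[0]
post_mu = post.sum(axis=1)           # marginal posterior for mu (grid sum)
post_mu /= post_mu.sum() * dmu       # normalize

print("posterior mean of mu:", (mu * post_mu).sum() * dmu)
```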

The exploration of the full potential of Bayesian methods is only just beginning. Thanks to the increasing availability of cheap computational power, it is now possible to handle problems that were of intractable complexity until a few years ago. Markov Chain Monte Carlo techniques are nowadays a standard inference tool for deriving parameter constraints, and many algorithms are available to explore the posterior pdf in a variety of settings. We have highlighted how the issue of priors, which has traditionally been held against Bayesian methods, is a false problem stemming from a misunderstanding of what Bayes' theorem says. This is not to deny that it can be difficult in practice to choose a prior that is a fair representation of one's degree of belief. But we should not shy away from this task: there is no inference without assumptions, and a correct application of Bayes' theorem forces us to be absolutely clear about which assumptions we are making. It remains important to quantify as far as possible the extent to which our priors influence our results, since in many cases, when working at the cutting edge of research, we may not have the luxury of being in a data-dominated regime.
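For readers unfamiliar with how such samplers work in practice, here is a minimal random-walk Metropolis sketch exploring a toy two-dimensional Gaussian posterior. The target, proposal width and chain length are assumptions chosen purely for illustration, not the algorithms discussed in the text.

```python
# Minimal sketch (illustrative only): a random-walk Metropolis sampler
# exploring a toy 2-d Gaussian posterior.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # toy posterior: independent Gaussians with standard deviations 1 and 0.5
    return -0.5 * (theta[0] ** 2 + (theta[1] / 0.5) ** 2)

theta = np.zeros(2)
chain = []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal(2)        # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                     # accept the move
    chain.append(theta.copy())                           # otherwise keep theta

chain = np.array(chain[5000:])                           # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior stds: ", chain.std(axis=0))
```

The samples approximate the posterior pdf, so parameter constraints follow directly from the chain (means, standard deviations, credible intervals), which is the basic way MCMC is used to derive constraints in practice.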

The model comparison approach can formalize in a quantitative manner the intuitive assessment of scientific theories, based on Occam's razor: simpler models ought to be preferred if they offer a satisfactory explanation of the observations. The Bayesian evidence and complexity tell us which models are supported by the data and what their effective number of parameters is. Multi-model inference delivers model-averaged parameter constraints, thus merging the two levels of parameter inference and model comparison.
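The sketch below illustrates these ideas on a deliberately simple toy problem, with all numbers assumed for the example: a single Gaussian datum is confronted with two nested models, the evidences and the resulting Bayes factor exhibit the Occam penalty on the extra parameter, and the posterior model probabilities give a model-averaged constraint.

```python
# Minimal sketch (toy example, assumed numbers): model comparison for a single
# Gaussian datum d with noise sigma. M0 fixes mu = 0; M1 lets mu vary with a
# Gaussian prior of width w. The evidence ratio (Bayes factor) penalizes M1
# for its extra parameter; model averaging mixes the two models with weights
# proportional to their evidences.
import numpy as np

d, sigma, w = 1.0, 0.5, 3.0
mu = np.linspace(-10.0, 10.0, 4001)
dmu = mu[1] - mu[0]

like = np.exp(-0.5 * ((d - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
prior = np.exp(-0.5 * (mu / w) ** 2) / (np.sqrt(2 * np.pi) * w)

ev0 = np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)  # evidence of M0
ev1 = np.sum(like * prior) * dmu                                      # evidence of M1
print("Bayes factor B01 =", ev0 / ev1)

# Posterior model probabilities (assuming equal prior model probabilities)
w0, w1 = ev0 / (ev0 + ev1), ev1 / (ev0 + ev1)
print("posterior model probabilities:", w0, w1)

# Model-averaged posterior mean of mu: M0 contributes a spike at mu = 0,
# M1 contributes its normalized posterior.
p1 = like * prior / ev1
print("model-averaged mean of mu:", w1 * np.sum(mu * p1) * dmu)
```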

The application of Bayesian tools to cosmology and astrophysics is blossoming. As both data sets and models become more complex, our inference tools must acquire a corresponding level of sophistication, since the basic statistical analyses that served us well in the past are no longer up to the task. There is little doubt that the field of cosmostatistics will grow in importance in the future, and Bayesian methods will have a central role to play in it.


Acknowledgments

I am grateful to Stefano Andreon, Sarah Bridle, Chris Gordon, Andrew Liddle, Nicolai Meinshausen and Joe Silk for comments on an earlier draft and for stimulating discussions, and to Martin Kunz, Louis Lyons, Mike Hobson and Steffen Lauritzen for many useful conversations. This work is supported by the Royal Astronomical Society through the Sir Norman Lockyer Fellowship, and by St Anne's College, Oxford.
