Prerequisite: Bayesian Inference I

Course description: Following on from Bayesian Inference I, this course addresses advanced topics in the theory and practice of Bayesian analysis. Foundational aspects are introduced that clarify the Bayesian understanding of probability and justify its use in scientific inference. The thorny topic of prior selection is discussed at length, with particular emphasis on common pitfalls and misunderstandings. Computational methods to sample from posterior distributions are presented, focusing on several popular and powerful Markov Chain Monte Carlo algorithms and on nested sampling.

The last section discusses Bayesian model comparison, contrasting it with frequentist hypothesis testing and clarifying the ontological difference between the two approaches. Computational methods are then introduced for evaluating the central quantity for model comparison, the Bayes factor. The course is supported by 4 hands-on labs with exercises designed to put the theoretical material into practice.

Syllabus:

1) Foundations of Bayesianism: definition of probability; frequentist vs Bayesian; Cox theorem; de Finetti exchangeability theorem and interpretation.

2) Priors: ignorance priors; the principle of invariance; Jeffreys' prior; reference priors; conjugate priors; empirical Bayes; recommendations for prior choice.
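As a taste of the conjugate-prior topic above, here is a minimal sketch (not part of the official course materials) of the Beta-Binomial conjugate pair: a Beta(a, b) prior on a success probability, updated with k successes in n Bernoulli trials, yields a Beta(a + k, b + n - k) posterior in closed form.

```python
def beta_binomial_update(a, b, k, n):
    """Posterior Beta parameters after observing k successes in n trials,
    starting from a Beta(a, b) prior (conjugacy makes this exact)."""
    return a + k, b + (n - k)

# Flat Beta(1, 1) prior, then 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1.0, 1.0, k=7, n=10)
post_mean = a_post / (a_post + b_post)  # posterior mean of the success probability
```

The closed-form update is exactly why conjugate priors are computationally convenient, even when they are not the most defensible expression of prior knowledge.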

3) Advanced sampling methods: Markov Chain Monte Carlo (MCMC); convergence criteria for MCMC; Gibbs sampling; partially collapsed Gibbs; data augmentation; ancillarity-sufficiency interweaving strategy; Hamiltonian Monte Carlo (HMC); ensemble MC; importance sampling; slice sampling.
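To illustrate the simplest member of the MCMC family listed above, here is a hedged sketch of a random-walk Metropolis sampler (a special case of Metropolis-Hastings with a symmetric Gaussian proposal); the target, step size, and chain length are illustrative choices, not course-prescribed values.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior.
    Proposes x' ~ N(x, step^2) and accepts with prob min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(x_prop)
        # Comparison in log space avoids under/overflow of the ratio.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Target: standard normal, log p(x) = -x^2/2 (up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
burned = chain[5000:]          # discard burn-in
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
```

Discarding the burn-in and checking that the empirical mean and variance approach the target's (0 and 1 here) is a crude stand-in for the convergence diagnostics covered in the course.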

4) Bayesian model comparison: the stopping rule paradox; frequentist hypothesis testing and p-values; the Jeffreys-Lindley paradox; the Bayesian evidence, the Bayes factor and its interpretation; the Savage-Dickey density ratio; prior selection for model comparison; computation: the Bayesian Information Criterion, bridge sampling, the Laplace approximation, nested sampling (MultiNest algorithm).
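Of the computational tools listed for model comparison, the Bayesian Information Criterion is the simplest; a minimal sketch follows (the function names are illustrative, not from the course). The BIC is k ln(n) - 2 ln(L_max), and the difference of BICs between two models gives a rough large-sample approximation to the log Bayes factor.

```python
import math

def bic(max_log_likelihood, k_params, n_data):
    """Bayesian Information Criterion: BIC = k ln(n) - 2 ln(L_max)."""
    return k_params * math.log(n_data) - 2.0 * max_log_likelihood

def approx_log_bayes_factor(bic_0, bic_1):
    """Crude large-sample approximation: ln B_01 ~ -(BIC_0 - BIC_1) / 2.
    Positive values favour model 0 over model 1."""
    return -0.5 * (bic_0 - bic_1)

# Two illustrative models fit to n = 100 data points:
bic_simple  = bic(max_log_likelihood=-120.0, k_params=2, n_data=100)
bic_complex = bic(max_log_likelihood=-118.0, k_params=5, n_data=100)
ln_b = approx_log_bayes_factor(bic_simple, bic_complex)
```

The BIC ignores the prior entirely, which is why the course pairs it with methods (bridge sampling, Laplace approximation, nested sampling) that evaluate the evidence properly.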