Abstract
The popularity of Bayesian statistics is largely due to advances in computing and developments in computational methods. Currently, there are two main types of Bayesian computational methods. The first type involves iterative Monte Carlo simulation and includes the Gibbs sampler, the Metropolis-Hastings algorithm, and Hamiltonian sampling; methods of this type typically generate a Markov chain whose stationary distribution is the target distribution. The second type involves distributional approximation and includes the Laplace approximation (Laplace 1785, 1810) and variational Bayes (Jordan et al. 1999); methods of this type seek a distribution of analytical form that best approximates the target distribution. In Sect. 3.1, we review Markov chain Monte Carlo (MCMC) methods, including the general Metropolis-Hastings (M-H) algorithm, the Gibbs sampler with conjugacy, and the Hamiltonian Monte Carlo (HMC) algorithm (Neal 1994). Section 3.2 discusses the convergence and efficiency of these sampling methods. We then show how to specify a Bayesian model and draw inferences from it using OpenBUGS and Stan in Sect. 3.3. Section 3.4 provides a brief summary of mode-based approximation methods, including the Laplace approximation and Bayesian variational inference. Finally, in Sect. 3.5, a full Bayesian analysis is performed on a biological data set from Gelfand et al. (1990), demonstrating the key concepts and computational tools discussed in this chapter.
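The core idea behind the first type of method, as described above, can be illustrated with a minimal random-walk Metropolis sketch (a special case of the general Metropolis-Hastings algorithm with a symmetric proposal; the function names, step size, and burn-in length here are illustrative choices, not taken from the chapter):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target density.

    log_target: log of the (possibly unnormalised) target density.
    Proposals are symmetric Gaussian steps, so the Hastings correction
    cancels and the acceptance ratio reduces to a ratio of target densities.
    The resulting Markov chain has the target as its stationary distribution.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        samples.append(x)
    return samples

# Example: sample from a standard normal target, pi(x) proportional to exp(-x^2 / 2).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
burned = draws[5000:]  # discard an initial burn-in portion of the chain
mean = sum(burned) / len(burned)
```

After burn-in, the retained draws behave approximately as (correlated) samples from the target, so their empirical mean and variance should be close to 0 and 1 for this standard-normal example.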
References
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022.
Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7, 434–455.
Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., et al. (2017). Stan: A probabilistic programming language. Journal of Statistical Software, 76, 1–32.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39, 1–38.
Duane, S., Kennedy, A. D., Pendleton, B. J., & Roweth, D. (1987). Hybrid Monte Carlo. Physics Letters B, 195, 216–222.
Gelfand, A. E., Hills, S. E., Racinepoon, A., & Smith, A. F. M. (1990). Illustration of Bayesian-inference in normal data models using Gibbs sampling. Journal of the American Statistical Association, 85, 972–985.
Gelman, A., Lee, D., & Guo, J. (2015). Stan: A probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40, 530–543.
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–472.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2014). Bayesian data analysis (3rd ed.). Boca Raton: Chapman & Hall.
Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.
Gershman, S., Hoffman, M., & Blei, D. (2012). Nonparametric variational inference. In 29th International Conference on Machine Learning.
Gilks, W. R., & Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society C, 41, 337–348.
Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (1996). Markov chain Monte Carlo in practice. New York: Chapman & Hall.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.
Hills, S. E., & Smith, A. F. M. (1992). Parameterization issues in Bayesian inference. London: Oxford University Press.
Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14, 1303–1347.
Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15, 1593–1623.
Jaakkola, T. S., & Jordan, M. I. (2000). Bayesian parameter estimation via variational methods. Statistics and Computing, 10, 25–37.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning, 37, 183–233.
Kucukelbir, A., Ranganath, R., Gelman, A., & Blei, D. M. (2015). Automatic variational inference in Stan. arXiv:1506.03431.
Laplace, P. S. (1785). Memoire sur les approximations des formules qui sont fonctions de tres grands nombres. In Memoires de l’Academie Royale des Sciences.
Laplace, P. S. (1810). Memoire sur les approximations des formules qui sont fonctions de tres grands nombres, et sur leur application aux probabilites. In Memoires de l’Academie des Science de Paris.
Lunn, D. J., Thomas, A., Best, N., & Spiegelhalter, D. (2000). WinBUGS–A Bayesian modelling framework: Concepts, structure, and extensibility. Statistics and Computing, 10, 325–337.
Lunn, D., Jackson, C., Best, N., Thomas, A., & Spiegelhalter, D. (2012). The BUGS book: A practical introduction to Bayesian analysis. Boca Raton: Chapman & Hall.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.
Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Handbook of Markov chain Monte Carlo.
Neal, R. M. (1994). An improved acceptance procedure for the hybrid Monte Carlo algorithm. Journal of Computational Physics, 111, 194–203.
Neal, R. M. (2003). Slice sampling. The Annals of Statistics, 31, 705–741.
Nocedal, J., & Wright, S. (2006). Numerical optimization. New York: Springer Science & Business Media.
Roberts, G. O., & Sahu, S. K. (1997). Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society B, 59, 291–317.
Spiegelhalter, D., Thomas, A., Best, N., & Lunn, D. (2003). WinBUGS user manual. http://www.mrc-bsu.cam.ac.uk/wp-content/uploads/manual14.pdf.
Stan Development Team (2014). Stan modeling language: User’s guide and reference manual. http://mc-stan.org/users/documentation/
Tanner, M. A., & Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American statistical Association, 82, 528–540.
Vehtari, A., Gelman, A., & Gabry, J. (2015). Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. arXiv:1507.04544.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this chapter
Gao, G. (2018). Advanced Bayesian Computation. In: Bayesian Claims Reserving Methods in Non-life Insurance with Stan. Springer, Singapore. https://doi.org/10.1007/978-981-13-3609-6_3
DOI: https://doi.org/10.1007/978-981-13-3609-6_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-3608-9
Online ISBN: 978-981-13-3609-6
eBook Packages: Mathematics and Statistics (R0)