
Abstract

The popularity of Bayesian statistics is largely due to advances in computing and developments in computational methods. Currently, there are two main types of Bayesian computational methods. The first type involves iterative Monte Carlo simulation and includes the Gibbs sampler, the Metropolis-Hastings algorithm, and Hamiltonian Monte Carlo, among others. Methods of this type typically generate a Markov chain whose stationary distribution is the target distribution. The second type involves distributional approximation and includes the Laplace approximation (Laplace 1785, 1810) and variational Bayes (Jordan et al. 1999). Methods of this type seek a distribution of analytical form that best approximates the target distribution. In Sect. 3.1, we review Markov chain Monte Carlo (MCMC) methods, including the general Metropolis-Hastings (M-H) algorithm, the Gibbs sampler with conjugacy, and the Hamiltonian Monte Carlo (HMC) algorithm (Neal 1994). Section 3.2 discusses the convergence and efficiency of these sampling methods. We then show how to specify a Bayesian model and draw model inferences using OpenBUGS and Stan in Sect. 3.3. Section 3.4 provides a brief summary of the mode-based approximation methods, including the Laplace approximation and Bayesian variational inference. Finally, in Sect. 3.5, a full Bayesian analysis is performed on a biological data set from Gelfand et al. (1990), demonstrating the key concepts and computational tools discussed in this chapter.
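To make the first type of method concrete, below is a minimal random-walk Metropolis sketch in Python. It is illustrative only, not the chapter's implementation (the chapter works with OpenBUGS and Stan), and the target density (a standard normal), proposal scale, chain length, and burn-in are arbitrary assumptions chosen for the example.

    # Minimal random-walk Metropolis sketch (illustrative only; target density,
    # proposal scale, chain length, and burn-in are arbitrary assumptions).
    import numpy as np

    def log_target(theta):
        # Log density of the target, here a standard normal chosen purely for
        # illustration; it only needs to be known up to an additive constant.
        return -0.5 * theta ** 2

    def random_walk_metropolis(n_iter=10_000, step=1.0, init=0.0, seed=0):
        rng = np.random.default_rng(seed)
        chain = np.empty(n_iter)
        theta = init
        log_p = log_target(theta)
        for i in range(n_iter):
            # Propose a new state from a symmetric normal proposal.
            proposal = theta + step * rng.normal()
            log_p_prop = log_target(proposal)
            # Accept with probability min(1, target(proposal) / target(current)).
            if np.log(rng.uniform()) < log_p_prop - log_p:
                theta, log_p = proposal, log_p_prop
            chain[i] = theta
        return chain

    draws = random_walk_metropolis()
    # Discard a burn-in period, then summarize the remaining draws.
    print(draws[2000:].mean(), draws[2000:].std())

Because the proposal is symmetric, the acceptance ratio reduces to the ratio of target densities, which is the defining feature of the random-walk Metropolis special case of M-H.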


References

  • Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022.

  • Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7, 434–455.

  • Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., et al. (2017). Stan: A probabilistic programming language. Journal of Statistical Software, 76, 1–32.

  • Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39, 1–38.

  • Duane, S., Kennedy, A. D., Pendleton, B. J., & Roweth, D. (1987). Hybrid Monte Carlo. Physics Letters B, 195, 216–222.

  • Gelfand, A. E., Hills, S. E., Racine-Poon, A., & Smith, A. F. M. (1990). Illustration of Bayesian inference in normal data models using Gibbs sampling. Journal of the American Statistical Association, 85, 972–985.

  • Gelman, A., Lee, D., & Guo, J. (2015). Stan: A probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40, 530–543.

  • Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–472.

  • Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2014). Bayesian data analysis (3rd ed.). Boca Raton: Chapman & Hall.

  • Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.

  • Gershman, S., Hoffman, M., & Blei, D. (2012). Nonparametric variational inference. In Proceedings of the 29th International Conference on Machine Learning.

  • Gilks, W. R., & Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society C, 41, 337–348.

  • Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (1996). Markov chain Monte Carlo in practice. New York: Chapman & Hall.

  • Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.

  • Hills, S. E., & Smith, A. F. M. (1992). Parameterization issues in Bayesian inference. London: Oxford University Press.

  • Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14, 1303–1347.

  • Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15, 1593–1623.

  • Jaakkola, T. S., & Jordan, M. I. (2000). Bayesian parameter estimation via variational methods. Statistics and Computing, 10, 25–37.

  • Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning, 37, 183–233.

  • Kucukelbir, A., Ranganath, R., Gelman, A., & Blei, D. M. (2015). Automatic variational inference in Stan. arXiv:1506.03431.

  • Laplace, P. S. (1785). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres. In Mémoires de l'Académie Royale des Sciences.

  • Laplace, P. S. (1810). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres, et sur leur application aux probabilités. In Mémoires de l'Académie des Sciences de Paris.

  • Lunn, D. J., Thomas, A., Best, N., & Spiegelhalter, D. (2000). WinBUGS – A Bayesian modelling framework: Concepts, structure, and extensibility. Statistics and Computing, 10, 325–337.

  • Lunn, D., Jackson, C., Best, N., Thomas, A., & Spiegelhalter, D. (2012). The BUGS book: A practical introduction to Bayesian analysis. Boca Raton: Chapman & Hall.

  • Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.

  • Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo.

  • Neal, R. M. (1994). An improved acceptance procedure for the hybrid Monte Carlo algorithm. Journal of Computational Physics, 111, 194–203.

  • Neal, R. M. (2003). Slice sampling. The Annals of Statistics, 31, 705–741.

  • Nocedal, J., & Wright, S. (2006). Numerical optimization. New York: Springer Science & Business Media.

  • Roberts, G. O., & Sahu, S. K. (1997). Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society B, 59, 291–317.

  • Spiegelhalter, D., Thomas, A., Best, N., & Lunn, D. (2003). WinBUGS user manual. http://www.mrc-bsu.cam.ac.uk/wp-content/uploads/manual14.pdf.

  • Stan Development Team (2014). Stan modeling language: User's guide and reference manual. http://mc-stan.org/users/documentation/

  • Tanner, M. A., & Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 528–540.

  • Vehtari, A., Gelman, A., & Gabry, J. (2015). Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. arXiv:1507.04544.


Author information

Correspondence to Guangyuan Gao.

Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Gao, G. (2018). Advanced Bayesian Computation. In: Bayesian Claims Reserving Methods in Non-life Insurance with Stan. Springer, Singapore. https://doi.org/10.1007/978-981-13-3609-6_3
