Abstract
This chapter is devoted to some basic ideas of the Bayesian approach to statistical inference, in which the parameter is treated as a random variable and assigned a distribution. This distribution – the prior distribution – represents the knowledge about the parameter before the data are observed. Once the data are observed, inference about the parameter is drawn from the posterior distribution – the conditional distribution of the parameter given the data. The term “Bayesian” comes from the well-known Bayes theorem, which is a formula for computing the posterior probabilities.
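In standard notation (assumed here, not quoted from the chapter), with prior density \(\pi(\theta)\) and likelihood \(f(x \mid \theta)\), Bayes' theorem gives the posterior density as

```latex
\[
\pi(\theta \mid x)
  = \frac{f(x \mid \theta)\,\pi(\theta)}{\int f(x \mid t)\,\pi(t)\,dt}
  \;\propto\; f(x \mid \theta)\,\pi(\theta),
\]
```

where the denominator is the marginal density of the data and does not depend on \(\theta\).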
Copyright information
© 2019 Springer Science+Business Media, LLC, part of Springer Nature
Cite this chapter
Li, B., Babu, G.J. (2019). Basic Ideas of Bayesian Methods. In: A Graduate Course on Statistical Inference. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-9761-9_5
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-9759-6
Online ISBN: 978-1-4939-9761-9
eBook Packages: Mathematics and Statistics (R0)