# Some Bayesian Concepts

• Scott A. Pardo
Chapter

## Abstract

Bayesian statistical methods are based on Bayes’ theorem, which was described in Chap. . Suppose X represents a continuously-valued random variable, and f(x|θ) is its density function given parameter θ. In the Bayesian world, the parameter θ is also treated as a random variable, with a density function g(θ). This density is referred to as the prior density for θ, inasmuch as it is formulated prior to obtaining any observations of X. The idea is that g(θ) represents our prior belief about the likelihood that θ takes on a value in any particular range. Generally, g(θ) is also a function of some other parameters, which we will call hyperparameters, whose values are chosen to reflect the prior belief about the possible range of values for θ. The observation of X, call it x, is assumed to be dependent on the value of θ. The dependency is expressed as a likelihood function, symbolized by L(x|θ). Once the data, x, are observed, the Bayesian would like to update his or her belief concerning the probability that the unknown parameter, θ, falls in any particular range. The updated belief is expressed as a conditional density function, called the posterior density, and is expressed as g(θ|x). Bayes’ theorem provides a method for deriving the posterior density given the prior density and the likelihood function:
$$g(\theta \mid x)=\frac{L(x\mid\theta)\,g(\theta)}{\int_{-\infty}^{+\infty} L(x\mid\tau)\,g(\tau)\,d\tau}$$
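The update rule above can be approximated numerically by evaluating the numerator on a grid of parameter values and dividing by a quadrature estimate of the integral in the denominator. The following sketch (all numbers are hypothetical, chosen only for illustration) assumes a single observation x from a Normal(θ, 1) likelihood and a Normal(0, 2²) prior for θ:

```python
import numpy as np

def normal_pdf(z, mu, sigma):
    """Density of a Normal(mu, sigma^2) distribution at z."""
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = 1.5                                  # a single observed datum (hypothetical)
theta = np.linspace(-10, 10, 2001)       # grid over the parameter space
d = theta[1] - theta[0]                  # grid spacing

prior = normal_pdf(theta, 0.0, 2.0)      # g(theta): prior belief about theta
likelihood = normal_pdf(x, theta, 1.0)   # L(x|theta), evaluated at each grid point

# Bayes' theorem: numerator L(x|theta) g(theta), normalized by the
# integral over tau (approximated here as a Riemann sum on the grid).
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * d)

# The posterior mode lies between the prior mean (0) and the datum (1.5).
print(theta[np.argmax(posterior)])
```

For this normal-normal pair the exact posterior mean is x·τ₀²/(τ₀² + σ²) = 1.5·4/5 = 1.2, so the grid mode lands there, which gives a quick check that the numerical normalization is behaving.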
Conjugacy is a condition that greatly simplifies computations. A likelihood function and a prior distribution are said to be a conjugate pair if the resulting posterior distribution is of the same form as the prior. For conjugate pairs, the hyperparameters of the posterior are generally a closed-form function of the prior hyperparameters and the data; this closed-form update is what makes conjugacy so computationally convenient.
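A standard illustration of such a closed-form update (not taken from the chapter; the numbers are hypothetical) is the Beta prior paired with a binomial likelihood: a Beta(a, b) prior on a success probability θ, combined with s successes and f failures, yields a Beta(a + s, b + f) posterior, so no integration is needed at all:

```python
def beta_binomial_update(a, b, successes, failures):
    """Closed-form conjugate update: Beta(a, b) prior + binomial data
    -> Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

# Prior belief Beta(2, 2); observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(2, 2, 7, 3)
print(a_post, b_post)               # posterior is Beta(9, 5)
print(a_post / (a_post + b_post))   # posterior mean 9/14
```

The posterior hyperparameters depend only on the prior hyperparameters and simple summaries of the data, which is exactly the simplification the paragraph above describes.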

## Keywords

Posterior Distribution · Monte Carlo Markov Chain · Likelihood Function · Prior Distribution · Credible Interval
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

## Copyright information

© Springer International Publishing Switzerland 2016

## Authors and Affiliations

• Scott A. Pardo
  1. Ascensia Diabetes Care, Parsippany, USA