Introduction

  • Christian P. Robert
  • George Casella
Part of the Springer Texts in Statistics book series (STS)

Abstract

Until the advent of powerful and accessible computing methods, the experimenter was often confronted with a difficult choice: either describe an accurate model of a phenomenon, which would usually preclude the computation of explicit answers, or choose a standard model that would allow this computation but may not be a close representation of reality. This dilemma is present in many branches of statistical application, for example, in electrical engineering, aeronautics, biology, networks, and astronomy. To use realistic models, researchers in these disciplines have often developed original approaches to model fitting, customized to their own problems. (This is particularly true of physicists, the originators of Markov chain Monte Carlo methods.) Traditional methods of analysis, such as the usual numerical analysis techniques, are not well adapted to such settings.
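
To make the dilemma concrete (an illustration of ours, not part of the chapter): take a single observation x ~ N(θ, 1) with a Cauchy prior on θ. The posterior mean of θ has no closed-form expression, yet a few lines of simulation approximate it, here via self-normalized importance sampling in Python; the function name, seed, and sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_mean(x, n_sim=100_000):
        """Monte Carlo estimate of E[theta | x] for x ~ N(theta, 1), theta ~ Cauchy(0, 1)."""
        # Draw from the likelihood viewed as a density in theta ...
        theta = rng.normal(loc=x, scale=1.0, size=n_sim)
        # ... and reweight by the (unnormalized) Cauchy prior density.
        w = 1.0 / (1.0 + theta**2)
        # Self-normalized importance-sampling estimate of the posterior mean.
        return np.sum(theta * w) / np.sum(w)

    print(posterior_mean(2.5))  # shrinks the observation toward the prior median 0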

Keywords

Posterior Distribution, Failure Probability, Maximum Likelihood Estimator, Exponential Family, Markov Chain Monte Carlo Method

Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Christian P. Robert¹
  • George Casella²

  1. CEREMADE, Université Paris Dauphine, Paris Cedex 16, France
  2. Department of Statistics, University of Florida, Gainesville, USA
