
Bayesian Group Analysis

  • W. von der Linden
  • V. Dose
  • A. Ramaswami
Part of the Fundamental Theories of Physics book series (FTPH, volume 98)

Abstract

In many fields of research the following problem is encountered: a large collection of data is available, but a detailed theory for it is still missing. To gain insight into the underlying problem, it is important to reveal the interrelationships in the data and to determine the relevant input and response quantities. A central part of this task is to find the natural splitting of the data into groups and to analyze their respective characteristics. Bayesian probability theory is invoked for a consistent treatment of these problems. Owing to Ockham’s Razor, which is an integral part of the theory, the simplest group configuration that still fits the data receives the highest probability. In addition, the Bayesian approach makes it possible to eliminate outliers, which could otherwise lead to erroneous conclusions. Simple textbook and mock data sets are analyzed in order to assess the Bayesian approach.
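
As a rough illustration of the Ockham’s-Razor mechanism described above, the following sketch compares group configurations with different numbers of components on mock data and selects the number by an approximate marginal-likelihood criterion (the BIC of a Gaussian mixture). This is not the authors’ method; the data set, the mixture model, and the use of scikit-learn’s GaussianMixture are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative assumptions only): the simplest grouping that
# still fits the data attains the best Ockham-penalised score.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Mock data: two well-separated 2-D groups plus a few scattered outliers.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(50, 2))
outliers = rng.uniform(low=-2.0, high=8.0, size=(5, 2))
data = np.vstack([group_a, group_b, outliers])

# Score candidate numbers of groups; a lower BIC corresponds to a higher
# approximate evidence for that group configuration.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    print(f"k = {k}: BIC = {gm.bic(data):.1f}")
# Typically k = 2 attains the lowest BIC: additional components improve the
# fit only marginally and are penalised for their extra parameters.
```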

Key words

Auto-classification · auto-clustering · group analysis · Mahalanobis distance

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • W. von der Linden¹
  • V. Dose¹
  • A. Ramaswami¹
  1. Max-Planck-Institut für Plasmaphysik, EURATOM Association, Garching b. München, Germany
