
A Stochastic EM Learning Algorithm for Structured Probabilistic Neural Networks

  • Gerhard Paaß
Part of the Informatik-Fachberichte book series (INFORMATIK, volume 252)

Abstract

The EM algorithm is a general procedure for obtaining maximum likelihood estimates when some of the observations on the variables of a network are missing. In this paper a stochastic version of the algorithm is adapted to probabilistic neural networks, which describe the associative dependency of variables. These networks have a probability distribution that is a special case of the distribution generated by probabilistic inference networks. Hence both types of networks can be combined, allowing probabilistic rules as well as unspecified associations to be integrated in a sound way. The resulting network may have a number of interesting features, including cycles of probabilistic rules and hidden ‘unobservable’ variables.
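As a rough illustration of the stochastic EM idea mentioned in the abstract, the following minimal sketch runs a stochastic E-step (sampling a completion of the hidden variables) followed by a maximum likelihood M-step on a toy two-component Gaussian mixture, where the component labels play the role of the hidden variables. The mixture model, variable names, and update rules are illustrative assumptions only and do not reproduce the structured probabilistic neural network model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a two-component Gaussian mixture; the component label of each
# observation is the hidden ("unobservable") variable.
true_means = np.array([-2.0, 3.0])
labels = rng.integers(0, 2, size=500)
x = rng.normal(true_means[labels], 1.0)

# Initial parameter guesses: mixture weight and the two component means.
pi, mu = 0.5, np.array([-1.0, 1.0])

for _ in range(200):
    # Stochastic E-step: instead of computing expected sufficient statistics,
    # draw a completion of the hidden labels from their conditional
    # distribution given the data and the current parameters.
    p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
    p0 = (1.0 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
    z = rng.random(x.size) < p1 / (p0 + p1)

    # M-step: maximum likelihood estimates on the completed (imputed) data.
    pi = z.mean()
    mu = np.array([x[~z].mean(), x[z].mean()])

print("estimated weight:", pi, "estimated means:", mu)
```

Averaging the parameter draws over the later iterations would give a point estimate; the sketch simply prints the final values.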



Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Gerhard Paaß
    1. GMD, Sankt Augustin, Germany
