
On virtually binary nature of probabilistic neural networks

  • Jiří Grim
  • Pavel Pudil
Learning Methodologies
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1451)

Abstract

A sequential design of multilayer probabilistic neural networks is considered in the framework of statistical decision-making. Parameters and interconnection structure are optimized layer by layer by estimating unknown probability distributions on the input space in the form of finite distribution mixtures. The components of the mixtures correspond to neurons, which perform an information-preserving transform between consecutive layers while the entropy of the transformed distribution is simultaneously minimized. It is argued that in multidimensional spaces, and particularly at higher levels of multilayer feedforward neural networks, the output variables of probabilistic neurons tend to be binary. It is shown that the information loss caused by the binary approximation of neurons can be suppressed by increasing the approximation accuracy.
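
The following is a minimal, purely illustrative sketch (not taken from the paper) of the effect described above: if each neuron of a layer outputs the posterior probability of one mixture component given the input vector, then in high-dimensional input spaces these posteriors tend to saturate near 0 or 1, so the neuron outputs become virtually binary. The diagonal-Gaussian components, the shared variance, and all numerical values below are assumptions made only for this demonstration.

import numpy as np

rng = np.random.default_rng(0)

def neuron_outputs(x, means, log_priors, var=1.0):
    # Posterior component probabilities p(m | x) for a mixture with
    # diagonal-Gaussian components and shared variance; each component
    # plays the role of one probabilistic neuron of the layer.
    log_lik = -0.5 * np.sum((x - means) ** 2, axis=1) / var   # log p(x | m) up to a constant
    log_post = log_priors + log_lik
    log_post -= log_post.max()            # numerical stabilisation
    post = np.exp(log_post)
    return post / post.sum()              # normalised posteriors (softmax over components)

# Illustration: as the input dimension grows, the posteriors saturate towards 0/1.
for dim in (2, 10, 100):
    means = rng.normal(size=(5, dim))             # 5 components / neurons
    log_priors = np.log(np.full(5, 1 / 5))        # uniform component weights
    x = means[2] + 0.3 * rng.normal(size=dim)     # input sampled near component 2
    print(dim, np.round(neuron_outputs(x, means, log_priors), 3))

For dim = 2 the outputs are graded, while for dim = 100 the posterior of the nearest component is essentially 1 and all others essentially 0, which is the binary tendency the abstract refers to.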

Keywords

Input Space · Probabilistic Neural Network · Finite Mixture · Posteriori Probability · Multilayer Neural Network


Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Jiří Grim (1)
  • Pavel Pudil (1)
  1. Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
