Abstract
A sequential design of multilayer probabilistic neural networks is considered in the framework of statistical decision-making. Parameters and interconnection structure are optimized layer by layer by estimating the unknown probability distributions on the input space in the form of finite distribution mixtures. The components of the mixtures correspond to neurons which perform an information-preserving transform between consecutive layers; simultaneously, the entropy of the transformed distribution is minimized. It is argued that in multidimensional spaces, and particularly at the higher levels of multilayer feedforward neural networks, the output variables of probabilistic neurons tend to be binary. It is shown that the information loss caused by the binary approximation of neurons can be suppressed by increasing the approximation accuracy.
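As a rough numerical illustration of this tendency (a sketch, not part of the paper itself), the code below computes the a posteriori component probabilities of a finite mixture, which play the role of the probabilistic neuron outputs. The spherical Gaussian components, the common width `sigma`, and the helper `component_posteriors` are illustrative assumptions; the point is only that, as the input dimension grows, the posterior weight concentrates on a single component, so the neuron outputs become virtually binary.

```python
import numpy as np

rng = np.random.default_rng(0)

def component_posteriors(x, means, weights, sigma=1.0):
    """Posterior probabilities p(m|x) of mixture components for input x.

    Illustrative assumption: spherical Gaussian components of common
    width sigma. Each component plays the role of a probabilistic
    neuron whose output is its a posteriori probability given x.
    """
    # Log-likelihood of x under each component (up to a shared constant).
    sq_dist = np.sum((means - x) ** 2, axis=1)
    log_p = np.log(weights) - sq_dist / (2.0 * sigma ** 2)
    # Normalize in log-space for numerical stability (a softmax).
    log_p -= log_p.max()
    p = np.exp(log_p)
    return p / p.sum()

# As the dimension d grows, the posterior vector concentrates on a
# single component, i.e. the neuron outputs approach 0 or 1.
for d in (2, 10, 100):
    means = rng.normal(size=(5, d))          # 5 hypothetical components
    weights = np.full(5, 1.0 / 5.0)          # uniform mixture weights
    x = means[0] + 0.5 * rng.normal(size=d)  # sample near component 0
    post = component_posteriors(x, means, weights)
    print(f"d={d:3d}  max posterior = {post.max():.4f}")
```

Running the sketch shows the maximum posterior climbing towards 1.0 with the dimension, which is the "virtually binary" behaviour the abstract refers to.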
Supported by Grant No. A2075703 of the Academy of Sciences, by Grant No. 402/97/1242 of the Czech Grant Agency, and by Grant No. VS 96063 of the Ministry of Education of the Czech Republic.
Keywords
- Input Space
- Probabilistic Neural Network
- Finite Mixture
- A Posteriori Probability
- Multilayer Neural Network
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Grim, J., Pudil, P. (1998). On virtually binary nature of probabilistic neural networks. In: Amin, A., Dori, D., Pudil, P., Freeman, H. (eds) Advances in Pattern Recognition. SSPR/SPR 1998. Lecture Notes in Computer Science, vol 1451. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0033301
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64858-1
Online ISBN: 978-3-540-68526-5