Improved Center Point Selection for Probabilistic Neural Networks

  • D. R. Wilson
  • T. R. Martinez


Probabilistic neural networks (PNNs) typically learn more quickly than many other neural network models and have been applied successfully to a variety of problems. However, in their basic form they tend to have a large number of hidden nodes. One common remedy is to build the network from only a randomly selected subset of the original training data. This paper presents an algorithm, the reduced probabilistic neural network (RPNN), that chooses a better-than-random subset of the available instances to use as the center points of nodes in the network. The algorithm tends to retain non-noisy border points while removing nodes whose instances lie in highly homogeneous regions of the input space. In experiments on 22 datasets, the RPNN achieved better average generalization accuracy than two other PNN models while requiring, on average, less than one-third the number of nodes.
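The basic PNN described above places one Gaussian-kernel hidden node on each retained training instance and classifies a query by summing node activations per class. The following is a minimal sketch of that idea, not the paper's exact formulation; the kernel width `sigma` and the function name are illustrative assumptions.

```python
import numpy as np

def pnn_predict(centers, labels, x, sigma=0.5):
    """Classify x with a basic Gaussian-kernel PNN.

    Each retained training instance (center point) acts as a hidden
    node contributing a Gaussian activation; the class whose summed
    activation is largest wins. The width sigma is a hypothetical
    default, not a value from the paper.
    """
    centers = np.asarray(centers, dtype=float)
    labels = np.asarray(labels)
    # Squared Euclidean distance from x to every center (hidden node).
    d2 = np.sum((centers - np.asarray(x, dtype=float)) ** 2, axis=1)
    activations = np.exp(-d2 / (2.0 * sigma ** 2))
    # Decision layer: sum node activations for each class.
    classes = np.unique(labels)
    scores = np.array([activations[labels == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Example: two well-separated clusters, one per class.
centers = [[0, 0], [0, 1], [5, 5], [5, 6]]
labels = [0, 0, 1, 1]
print(pnn_predict(centers, labels, [0.2, 0.2]))  # near class-0 cluster
```

The RPNN's contribution is deciding *which* rows of `centers` to keep: points near class borders are retained (they shape the decision surface), while points deep inside homogeneous regions are pruned, since their neighbors' kernels already cover them.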


Keywords: Radial Basis Function, Hidden Node, Probabilistic Neural Network, Training Instance, Decision Node
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • D. R. Wilson (1)
  • T. R. Martinez (1)

  1. Neural Networks and Machine Learning Laboratory, Computer Science Department, Brigham Young University, Provo, USA
