
Bayesian Probabilistic Neural Network (BPNN)

  • Walker H. Land Jr.
  • J. David Schaffer

Abstract

The purpose of this chapter is to introduce a different way of implementing Bayes' theorem using a distributed, parallel algorithm first introduced by Specht (1990), which he named the probabilistic neural network (PNN). We discussed in Chap. 6 the difficulties of constructing Bayesian networks using both the data-driven and expert-driven approaches. A significant advantage of the Bayesian PNN (BPNN) is that the node-edge architecture is predetermined by the Parzen (1962) and Cacoullos (1966) theoretical formulation.
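For orientation, the building blocks the chapter develops can be sketched in standard notation (generic symbols that may differ from the chapter's own). The Parzen (1962) and Cacoullos (1966) Gaussian-kernel estimate of the class-conditional density for class k is

    \hat{f}_k(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,\sigma^{d}\,n_k} \sum_{i=1}^{n_k} \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{x}_{ki} \rVert^{2}}{2\sigma^{2}} \right),

and the Bayesian decision layer of the PNN assigns \mathbf{x} to class k whenever

    P_k \, c_k \, \hat{f}_k(\mathbf{x}) > P_j \, c_j \, \hat{f}_j(\mathbf{x}) \quad \text{for all } j \neq k,

where the \mathbf{x}_{ki} are the n_k stored training vectors of class k, \sigma is the kernel width (smoothing parameter), P_k the class prior, and c_k an optional misclassification cost. Because each stored training vector contributes exactly one pattern node, the node-edge architecture is fixed by the data rather than searched for.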

Specifically, this chapter covers the following topics.
  • It develops the mathematical formulation for the PNN.

  • It demonstrates that the normal PNN can be configured as an optimal Bayesian classifier (BPNN).

  • It shows how Parzen’s theorem maps into Cacoullos’s theorem.

  • It provides an illustrative toy example, showing a BPNN analysis of two classes and nine samples (four benign and five malignant) under twofold cross-validation (a minimal code sketch along these lines follows this list).

  • It shows how to determine the optimal standard deviation or variance (sigma) value for the Gaussian density function, a significant problem, and discusses PNN training methods.

  • It provides a BPNN application to the Alzheimer’s speech data.
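As a companion to the toy example and the sigma discussion above, the following is a minimal sketch of a Gaussian-kernel BPNN classifier with a crude twofold cross-validation grid search for sigma. It is written in plain NumPy under generic assumptions; the function names (pnn_class_density, bpnn_classify, select_sigma), the synthetic data, and the grid-search range are illustrative and are not taken from the chapter.

    import numpy as np

    def pnn_class_density(x, class_samples, sigma):
        """Parzen-window (Gaussian kernel) estimate of the class-conditional density.

        x             : 1-D feature vector to classify
        class_samples : (n_k, d) array of stored training vectors for one class
        sigma         : kernel width (the smoothing parameter the chapter optimizes)
        """
        d = class_samples.shape[1]
        sq_dist = np.sum((class_samples - x) ** 2, axis=1)      # one pattern node per stored sample
        kernels = np.exp(-sq_dist / (2.0 * sigma ** 2))         # Gaussian kernel activations
        norm = (2.0 * np.pi) ** (d / 2) * sigma ** d * len(class_samples)
        return kernels.sum() / norm                             # summation-layer output

    def bpnn_classify(x, classes, priors, sigma):
        """Bayesian decision layer: choose the class with the largest prior * density."""
        scores = {label: priors[label] * pnn_class_density(x, samples, sigma)
                  for label, samples in classes.items()}
        return max(scores, key=scores.get), scores

    def twofold_cv_accuracy(X, y, sigma):
        """Accuracy under a simple alternating twofold split (no stratification)."""
        idx = np.arange(len(X))
        folds = [idx[0::2], idx[1::2]]
        correct = 0
        for test, train in [(folds[0], folds[1]), (folds[1], folds[0])]:
            cls = {label: X[train][y[train] == label] for label in np.unique(y)}
            pri = {label: len(s) / len(train) for label, s in cls.items()}
            correct += sum(bpnn_classify(X[i], cls, pri, sigma)[0] == y[i] for i in test)
        return correct / len(X)

    def select_sigma(X, y, grid=np.linspace(0.1, 3.0, 30)):
        """Crude grid search for sigma over a fixed range of candidate widths."""
        return max(grid, key=lambda s: twofold_cv_accuracy(X, y, s))

    # Synthetic 2-D data: four "benign" and five "malignant" samples, echoing the
    # nine-sample toy example described above (the values themselves are made up).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(4, 2)),           # benign cluster
                   rng.normal(2.0, 1.0, size=(5, 2))])          # malignant cluster
    y = np.array(["benign"] * 4 + ["malignant"] * 5)

    best_sigma = select_sigma(X, y)
    classes = {label: X[y == label] for label in np.unique(y)}
    priors = {label: len(s) / len(X) for label, s in classes.items()}
    print(best_sigma, bpnn_classify(np.array([1.5, 1.5]), classes, priors, best_sigma))

The grid search here is only a stand-in for illustration; the chapter's treatment of PNN training covers how the sigma value is actually chosen.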

Keywords

Bayesian probabilistic neural network · Kernel probability estimation · Alternative kernel functions · Alzheimer's speech data

Abbreviations

AUC: Area under the ROC curve
BPNN: Bayesian probabilistic neural network
GA: Genetic algorithm
GRNN: Generalized regression neural network
pdf: Probability density function
PNN: Probabilistic neural network
ROC: Receiver operating characteristic
SVM: Support vector machine

References

  1. Cacoullos T (1966) Estimation of a multivariate density. Ann Inst Stat Math 18(2):179–189
  2. Georgiou VL, Malefaki SN, Alevizos P, Vrahatis MN (2006) Evolutionary Bayesian probabilistic neural networks. In: International Conference on Numerical Analysis and Applied Mathematics (ICNAAM 2006), pp 393–396
  3. Land WH Jr, Margolis D, Kallergi M, Heine JJ (2010) A kernel approach for ensemble decision combinations with two-view mammography applications. Int J Funct Inform Personal Med 3(2):157–182
  4. Land WH, Masters T, Lo JY (2000) Performance evaluation using the GRNN Oracle and a new evolutionary programming/adaptive boosting hybrid for breast cancer benign/malignant diagnostic aids. In: ANNIE
  5. Masters T (1995) Advanced algorithms for neural networks: a C++ source book. Wiley, New York
  6. Parzen E (1962) On estimation of a probability density function and mode. Ann Math Stat 33(3):1065–1076
  7. Specht D (1990) Probabilistic neural networks. Neural Netw 3:109–118
  8. Xu R, Wunsch DC II (2009) Clustering. Wiley, New York. ISBN 978-0-470-27680-8

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Walker H. Land Jr. (1)
  • J. David Schaffer (2)
  1. Retired Emeritus Research Professor, Binghamton University, Bowie, USA
  2. Binghamton University, Binghamton, USA
