Minimum Square-Error Modeling of the Probability Density Function

  • M. Kokol
  • I. Grabec
Conference paper


Training of normalized radial basis function neural networks can be viewed as probability density function estimation from experimental data. A new unsupervised method of probability density function estimation is proposed and applied to a multivariate Gaussian mixture model. Batch-mode learning equations are derived and some simple examples are given. The training method, called minimum square-error modeling of the probability density function, is similar to the maximum-likelihood method but is numerically less demanding.
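The paper's batch-mode learning equations are not reproduced in this abstract, but the idea of square-error (rather than likelihood) fitting of a mixture density can be sketched as follows: minimize the squared error between a Gaussian-mixture model and a Parzen-window reference density on a grid by batch gradient descent. The grid, the kernel width `h`, the fixed centers, and the learning rate are all assumptions of this sketch, not the authors' derivation; only the mixture weights are learned here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fit_mse_mixture(data, n_centers=5, sigma=0.5, n_iter=200, lr=0.5):
    """Fit mixture weights by batch gradient descent on the squared error
    between the mixture density and a Parzen-window reference density,
    evaluated on a grid (an illustrative stand-in for the paper's
    batch-mode learning equations)."""
    grid = np.linspace(data.min() - 1.0, data.max() + 1.0, 200)
    h = 0.3  # Parzen kernel width (assumed)
    # Parzen-window reference density on the grid
    ref = gaussian(grid[:, None], data[None, :], h).mean(axis=1)
    # fixed centers spread over the data range; only the weights are trained
    mu = np.linspace(data.min(), data.max(), n_centers)
    phi = gaussian(grid[:, None], mu[None, :], sigma)  # shape (grid, centers)
    w = np.full(n_centers, 1.0 / n_centers)
    for _ in range(n_iter):
        err = phi @ w - ref              # pointwise density error
        grad = phi.T @ err / len(grid)   # batch gradient of 0.5 * MSE
        w -= lr * grad
        w = np.clip(w, 0.0, None)
        w /= w.sum()                     # project back to a valid mixture
    return grid, phi @ w, ref
```

Unlike maximum-likelihood EM, each batch step here needs only one matrix-vector product, which is the kind of numerical simplification the abstract alludes to.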


Keywords: mixture model, elapsed time, Gaussian mixture model, radial basis function neural network, probability density function estimation
(These keywords were added by machine, not by the authors.)





Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • M. Kokol (1)
  • I. Grabec (1)
  1. Faculty of Mechanical Engineering, University of Ljubljana, Ljubljana, Slovenia
