
Self-Organization and Formal Neurons

  • Igor Grabec
  • Wolfgang Sachse
Part of the Springer Series in Synergetics book series (SSSYN, volume 68)

Abstract

The ultimate goal of our study is to obtain suggestions for the development of devices capable of automatically modeling natural phenomena. It is therefore of fundamental importance to develop a theoretical basis for describing their optimal performance. In the previous chapters it was stated that the empirical modeling of natural phenomena includes three main tasks: estimation, storage, and application of a probability distribution. Each of these tasks can be optimized using the methods described in the previous chapter. The aim of this section is to present the problems, and their solutions, related to the optimal storage of empirical information about a continuous probability distribution in a system composed of a finite number of discrete memory units. Such a system can be considered a basic building block of an automatic modeler of natural phenomena. For example, we can imagine the brain of a biological organism or the digital memory of a computer that continually receives signals from its surroundings and optimally stores the corresponding empirical information. The first problem is the estimation of the probability density function of a continuous variable from the empirical data. We have already seen that this can be solved using Parzen's window [2]. The second problem is the storage of the continuous probability density in a discrete system. This task generally entails a loss of information, and thus the question arises of how this loss can be minimized.
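
The following sketch illustrates the two problems stated above under explicitly assumed choices: a Gaussian window of illustrative width sigma for the Parzen estimate, and a plain Lloyd (k-means) iteration as a generic stand-in for placing the reference vectors of a finite set of memory units. It does not reproduce the maximum-entropy self-organization rule developed in this chapter; all function names, parameter values, and the synthetic data are hypothetical.

```python
import numpy as np

def parzen_density(samples, x, sigma=0.3):
    """Parzen-window estimate of the probability density at the points x,
    using a Gaussian window of (assumed) width sigma centred on each sample."""
    diff = x[:, None] - samples[None, :]                  # shape (len(x), len(samples))
    windows = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return windows.mean(axis=1)                           # average window contribution

def fit_reference_vectors(samples, n_units=8, n_iter=50, seed=None):
    """Represent the empirical distribution by a finite set of reference vectors,
    one per memory unit.  A plain Lloyd/k-means iteration is used here as a
    generic stand-in for an optimal placement rule."""
    rng = np.random.default_rng(seed)
    refs = rng.choice(samples, size=n_units, replace=False).astype(float)
    for _ in range(n_iter):
        # each sample excites the nearest memory unit ...
        nearest = np.argmin(np.abs(samples[None, :] - refs[:, None]), axis=0)
        # ... and every unit moves to the mean of the samples it has won
        for k in range(n_units):
            won = samples[nearest == k]
            if won.size:
                refs[k] = won.mean()
    return np.sort(refs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic "signals from the surroundings": a bimodal source
    data = np.concatenate([rng.normal(-1.0, 0.3, 500),
                           rng.normal(+1.5, 0.5, 500)])
    grid = np.linspace(-3.0, 4.0, 7)
    print("Parzen estimate on a coarse grid:",
          np.round(parzen_density(data, grid), 3))
    print("Reference vectors of 8 memory units:",
          np.round(fit_reference_vectors(data, n_units=8, seed=1), 3))
```

Informally, such a quantization places more reference vectors where the estimated density is high, which indicates how the information loss of a discrete representation can be kept small; the chapter develops this question rigorously.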

Keywords

Window Function · Excited Neuron · Reference Vector · Discrete Random Variable · Memory Unit


References

  1. Neurocomputing: Foundations of Research, ed. by J. A. Anderson, E. Rosenfeld (MIT Press, Cambridge, MA 1988), pp. 209 and 509
  2. R. O. Duda, P. E. Hart: Pattern Classification and Scene Analysis (John Wiley and Sons, New York 1973), Ch. 4
  3. J. W. Gibbs: Elementary Principles in Statistical Mechanics (Yale University Press, New Haven, Conn. 1902)
  4. I. Grabec: "Self-Organization Based on the Second Maximum-Entropy Principle", 1st IEE Conference on Artificial Neural Networks, London 1989, Conf. Publication No. 313, pp. 12–16
  5. I. Grabec, W. Sachse: Automatic Modeling of Physical Phenomena: Application to Ultrasonic Data, J. Appl. Phys. 69(9), 6233–6244 (1991)
  6. I. Grabec: Self-Organization of Neurons Described by the Maximum-Entropy Principle, Biol. Cybern. 63, 403–409 (1990)
  7. I. Grabec: Modeling of Chaos by a Self-Organizing Neural Network, in Artificial Neural Networks, Proc. ICANN, Espoo, Finland, ed. by T. Kohonen, K. Mäkisara, O. Simula, J. Kangas (Elsevier Science Publishers B.V., North-Holland 1991), Vol. 1, pp. 151–156
  8. S. Grossberg: Nonlinear Neural Networks: Principles, Mechanisms, and Architectures, Neural Networks 1, 17–61 (1988)
  9. H. Haken: Synergetics, An Introduction, Springer Series in Synergetics, Vol. 1 (Springer, Berlin 1983)
  10. H. Haken: Information and Self-Organization, A Macroscopic Approach to Complex Systems, Springer Series in Synergetics, Vol. 40 (Springer, Berlin 1988)
  11. D. O. Hebb: The Organization of Behavior, A Neuropsychological Theory (John Wiley, New York 1949)
  12. E. T. Jaynes: The Maximum-Entropy Formalism, ed. by R. D. Levine, M. Tribus (MIT Press, Cambridge, MA 1978)
  13. J. N. Kapur: Maximum-Entropy Models in Science and Engineering (John Wiley and Sons, New York 1989)
  14. T. Kohonen: Self-Organized Formation of Topologically Correct Feature Maps, Biol. Cybern. 43, 59–69 (1982)
  15. T. Kohonen: An Introduction to Neural Computing, Neural Networks 1, 3–16 (1988)
  16. T. Kohonen: Self-Organization and Associative Memory (Springer, Berlin 1989)
  17. M. Kokol: Modeling of Natural Phenomena by a Self-Organizing Regression Neural Network, MSc Dissertation (Faculty of Natural Sciences and Technology, University of Ljubljana 1993)
  18. M. Kokol, I. Grabec: "Training of Elliptical Basis Function NN", World Congress on Neural Networks, San Diego, CA 1994
  19. R. Linsker: Self-Organization in a Perceptual Network, Computer 21(3), 105–117 (1988)
  20. Chr. von der Malsburg: Self-Organization of Orientation Sensitive Cells in the Striate Cortex, Kybernetik 14, 85–100 (1973)
  21. Maximum Entropy and Bayesian Methods in Inverse Problems, ed. by C. R. Smith, W. T. Grandy, Jr. (Reidel, Dordrecht 1985)
  22. D. E. Rumelhart, J. McClelland, and the PDP Research Group: Parallel Distributed Processing (MIT Press, Cambridge, MA 1986)

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Igor Grabec (1)
  • Wolfgang Sachse (2)
  1. Faculty of Mechanical Engineering, University of Ljubljana, Ljubljana, Slovenia
  2. Theoretical and Applied Mechanics, Cornell University, Ithaca, USA
