
Self-organization of neurons described by the maximum-entropy principle


Abstract

In this article the maximum-entropy principle and Parzen windows are applied to derive an optimal mapping of a continuous random variable into a discrete one. The mapping can be performed by a network of self-organizing information-processing units similar to biological neurons. Each neuron is selectively sensitized to one prototype from the sample space of the discrete random variable. The continuous random variable is applied as the input signal exciting the neurons, and the response of the network is described by the excitation vector, which represents the encoded input signal. The interaction between neurons causes adaptive changes of the prototypes driven by the excitations. The derived mathematical model explains this interaction in detail; a simplified self-organization rule derived from it corresponds to that of Kohonen. One- and two-dimensional examples of self-organization simulated on a computer are shown in the article.
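Since the abstract states that the simplified self-organization rule corresponds to Kohonen's, a minimal sketch of such a Kohonen-type rule may help illustrate the idea. The Python code below is not the paper's maximum-entropy/Parzen-window derivation; it only shows prototypes being excited by continuous input samples, with the most excited unit and its neighbours pulled toward each input. All names and parameters (self_organize, eta0, sigma0, and so on) are illustrative assumptions.

    import numpy as np

    def self_organize(samples, n_prototypes=10, n_epochs=50,
                      eta0=0.5, sigma0=3.0, rng=None):
        """Adapt a 1-D chain of prototype vectors to continuous inputs with a
        Kohonen-style rule: the most excited (closest) unit and its chain
        neighbours are pulled toward each input sample."""
        rng = np.random.default_rng(rng)
        dim = samples.shape[1]
        # Initialise prototypes randomly inside the data range.
        lo, hi = samples.min(axis=0), samples.max(axis=0)
        prototypes = rng.uniform(lo, hi, size=(n_prototypes, dim))
        idx = np.arange(n_prototypes)

        for epoch in range(n_epochs):
            # Learning rate and neighbourhood width shrink over time.
            frac = epoch / n_epochs
            eta = eta0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            for x in rng.permutation(samples):
                # "Excitation": each unit responds according to its proximity
                # to the input; the winner is the most strongly excited unit.
                dists = np.linalg.norm(prototypes - x, axis=1)
                winner = np.argmin(dists)
                # Neighbourhood interaction along the chain of units.
                h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
                prototypes += eta * h[:, None] * (x - prototypes)
        return prototypes

    # Example: prototypes spread over uniformly distributed 2-D data.
    if __name__ == "__main__":
        data = np.random.default_rng(0).uniform(0.0, 1.0, size=(500, 2))
        print(self_organize(data, n_prototypes=8, n_epochs=30, rng=0))

With uniform two-dimensional input data, the eight prototypes end up roughly evenly spread over the unit square and ordered along the chain, which is the kind of one- and two-dimensional self-organization the abstract refers to.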


References

  1. Anderson JA, Rosenfeld E (eds) (1988) Neurocomputing, foundations of research. MIT Press, Cambridge Mass

  2. Duda RO, Hart PE (1973) Pattern classification and scene analysis, chap 4. Wiley, New York

  3. Gibbs JW (1902) Elementary principles in statistical mechanics. Yale University Press, New Haven Conn

  4. Grabec I (1989) Self-organization based on the second maximum-entropy principle, 1st IEE conference on “Artificial Neural Networks”. London, Conf. Publication No. 313, pp 12–16

  5. Grabec I, Sachse W (1989) Automatic modeling of physical phenomena: application to ultrasonic data. Materials Science Center Report # 6771. Cornell University, Ithaca NY

  6. Grossberg S (1988) Nonlinear neural networks: principles, mechanisms, and architectures. Neural Networks 1:17–61

  7. Haken H (1988) Information and self-organization. Springer, Berlin Heidelberg New York

  8. Jaynes ET (1978) In: Levine RD, Tribus M (eds) The maximum entropy formalism. MIT Press, Cambridge Mass

  9. Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biol Cybern 43:59–69 (Preprint also in: Anderson and Rosenfeld (1988))

  10. Kohonen T (1988) An introduction to neural computing. Neural Networks 1:3–16

  11. Kohonen T (1989) Self-organization and associative memory, chap 5. Springer, Berlin Heidelberg New York

  12. Linsker R (1988) Self-organization in a perceptual network. Computer 21:105–117

  13. Malsburg Ch von der (1973) Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14:85–100 (Preprint also in: Anderson and Rosenfeld (1988))

  14. Rumelhart DE, McClelland JL, the PDP Research Group (eds) (1986) Parallel distributed processing. MIT Press, Cambridge Mass

  15. Smith CR, Grandy WT Jr (eds) (1985) Maximum-entropy and Bayesian methods in inverse problems. Reidel, Dordrecht



About this article

Cite this article

Grabec, I. Self-organization of neurons described by the maximum-entropy principle. Biol. Cybern. 63, 403–409 (1990). https://doi.org/10.1007/BF00202757


Keywords

  • Mathematical Model
  • Information Processing
  • Input Signal
  • Processing Unit
  • Sample Space