Recall Time in Densely Encoded Hopfield Network: Results from KFS Theory and Computer Simulation

  • A. A. Frolov
  • D. Husek
  • P. Combe
  • V. Snášel
Conference paper


Recall time in a Hopfield attractor neural network with parallel dynamics is investigated analytically and by computer simulation. The recall time is estimated from the overlaps between successive patterns of the network dynamics: it is taken to be the time at which the overlap reaches the value 1 − Δm, where Δm is the minimal increment of overlap for a network of a given size. It is shown, first, that this time gives a rather accurate estimate of the actual recall time and, second, that the overlap between successive patterns of the network dynamics can be estimated rather accurately by the theory recently developed in [10]. The recall process is shown to have three very different phases: a search for the recalled prototype by large steps with a low convergence rate, fast convergence to the attractor in the vicinity of the recalled prototype, and again slow convergence to the attractor when it is almost reached. If the recall process ends in the first two phases, point attractors dominate; if it ends in the third phase, cyclic attractors of length 2 dominate. The transition to the third phase can be revealed by computer simulation of networks of extremely large size (up to a number of neurons on the order of 10^5). A special algorithm is used to avoid storing either the connection matrix or the set of stored prototypes in computer memory.
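The recall-time criterion described above can be illustrated with a minimal NumPy sketch. This is not the authors' large-scale algorithm (which avoids storing the connection matrix); it is a small, direct Hebbian Hopfield network with parallel dynamics, where for ±1 neurons a single flipped neuron changes the overlap by 2/N, so Δm = 2/N. The network size, pattern count, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500          # number of neurons (illustrative, far below the paper's 10^5)
P = 25           # number of stored prototypes (loading alpha = P/N = 0.05)
dm = 2.0 / N     # minimal overlap increment: one flipped +/-1 neuron changes m by 2/N

# Hebbian storage of random +/-1 prototypes in the connection matrix
xi = rng.choice([-1, 1], size=(P, N))
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# Start from a noisy version of prototype 0 (~15% of bits flipped, m0 ~ 0.7)
x = xi[0] * np.where(rng.random(N) < 0.85, 1, -1)

overlaps = []
for t in range(100):
    m = (x @ xi[0]) / N          # overlap with the recalled prototype
    overlaps.append(m)
    if m >= 1.0 - dm:            # recall-time criterion: overlap reaches 1 - dm
        break
    x = np.sign(J @ x)           # parallel (synchronous) update of all neurons
    x[x == 0] = 1                # break zero-field ties deterministically

recall_time = len(overlaps)
print(recall_time, overlaps[-1])
```

Tracking the successive overlaps also exposes the phases described in the abstract: large early steps, then fast convergence near the prototype. Cyclic attractors of length 2 can be detected by comparing the state with the state two steps earlier.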


Keywords: Network Dynamics, Associative Memory, Convergence Time, Successive Pattern, Point Attractor
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




  1. [1] Amari, S. & Maginu, K. (1988). Statistical neurodynamics of associative memory. Neural Networks, 1, 63–73.
  2. [2] Amit, D. J., Gutfreund, H. & Sompolinsky, H. (1987). Statistical mechanics of neural networks near saturation. Annals of Physics, 173, 30–67.
  3. [3] Frolov, A. A., Husek, D. & Muraviev, I. P. (1997). Informational capacity and recall quality in sparsely encoded Hopfield-like neural network: Analytical approaches and computer simulation. Neural Networks, 10, 845–855.
  4. [4] Gardner, E., Derrida, B. & Mottishaw, P. (1987). Zero temperature parallel dynamics for infinite range spin glasses and neural networks. J. Physique, 48, 741–755.
  5. [5] Goles-Chacc, E., Fogelman-Soulie, F. & Pellegrin, D. (1985). Decreasing energy functions as a tool for studying threshold networks. Discrete Applied Mathematics, 12, 261–277.
  6. [6] Godbeer, G. H. (1988). The computational complexity of the stable configuration problem for connectionist models. In On the Computational Complexity of Finding Stable State Vectors in Connectionist Models (Hopfield Nets). Technical Report 208/88. University of Toronto, Department of Computer Science.
  7. [7] Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79, 2554–2558.
  8. [8] Kohring, G. A. (1990a). A high-precision study of the Hopfield model in the phase of broken replica symmetry. Journal of Statistical Physics, 59, 1077–1086.
  9. [9] Kohring, G. A. (1990b). Convergence time and finite size effects in neural networks. Journal of Physics A: Mathematical and General, 23, 2237–2241.
  10. [10] Koyama, H., Fujie, N. & Seyama, H. (1999). Results from the Gardner-Derrida-Mottishaw theory of associative memory. Neural Networks, 12, 247–257.
  11. [11] Okada, M. (1996). Notions of associative memory and sparse coding. Neural Networks, 9, 1429–1458.

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • A. A. Frolov (1)
  • D. Husek (2)
  • P. Combe (3)
  • V. Snášel (4)
  1. Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
  2. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
  3. Université de Provence, CNRS PR 7061, CPT CNRS-Luminy, Case 907, Marseille cedex 09, France
  4. Silesian University Ostrava, Ostrava, Czech Republic
