A dynamical learning process for the recognition of correlated patterns in symmetric spin glass models

  • U. Krey
  • G. Pöppel
Conference paper
Part of the Lecture Notes in Physics book series (LNP, volume 314)


In the framework of spin-glass models with symmetric (multi-spin) interactions of even order, a local dynamical learning process is studied by which the energy landscape is modified systematically in such a way that even strongly correlated noisy patterns can be recognized. In addition, the basins of attraction of the patterns can be systematically enlarged by performing the learning process with noisy patterns. After completion of the learning process, the system typically recognizes as many patterns as there are neurons for two-spin interactions (p ≃ N^(m−1) for m-spin interactions), and for small systems even more (p > N for m = 2).

The dependence of the learning time on the parameters of the system (e.g. the average correlation, the noise level, and the number p of patterns) is studied. In the case of random patterns with p < N, the learning time is found to increase with p as p^x, with x ≃ 3.5, whereas for p > N the increase is much more drastic. Finally, we give a proof of the convergence of the process and discuss the possibility of a drastic improvement of the learning capacity for patterns with particular correlations ("patched systems").
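The learning process described above, a local rule that reshapes the energy landscape until each stored pattern is a stable fixed point, can be illustrated for the simplest case m = 2. The following is a minimal NumPy sketch, assuming a perceptron-style rule with a stability margin acting on symmetric two-spin couplings; the sizes N and p and the values of eta and margin are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): N neurons, p random patterns.
N, p = 32, 8
patterns = rng.choice([-1, 1], size=(p, N))

J = np.zeros((N, N))      # symmetric two-spin couplings, zero diagonal
eta, margin = 0.1, 1.0    # assumed learning rate and stability margin

# Perceptron-like local rule: present each pattern; wherever the local
# field fails to stabilize the target spin with the desired margin,
# reinforce the couplings Hebb-wise, keeping J symmetric.
for sweep in range(500):
    stable = True
    for xi in patterns:
        h = J @ xi                    # local fields h_i = sum_j J_ij xi_j
        bad = xi * h < margin         # sites not (sufficiently) stable
        if bad.any():
            stable = False
            dJ = eta * np.outer(bad * xi, xi)
            J += dJ + dJ.T            # symmetric update
            np.fill_diagonal(J, 0.0)
    if stable:
        break

# Every stored pattern should now be a fixed point of the
# zero-temperature dynamics s -> sign(J s).
print(np.array_equal(np.sign(J @ patterns.T).T, patterns))
```

Running the same loop on noisy copies of the patterns, rather than the clean ones, is how the basins of attraction would be enlarged in this framework, as noted in the abstract.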





Copyright information

© Springer-Verlag 1988

Authors and Affiliations

  • U. Krey
  • G. Pöppel
  1. Institut für Physik III der Universität Regensburg, Regensburg, F.R.G.
