
Tailoring the performance of attractor neural networks

  • Conference paper
Statistical Mechanics of Neural Networks

Part of the book series: Lecture Notes in Physics ((LNP,volume 368))

Abstract

First, we study the effects of introducing training noise on the retrieval behaviours of dilute attractor neural networks. We find that, in general, training noise enhances associativity but reduces the attractor overlap. Within a narrow range of storage levels, however, the system exhibits re-entrant retrieval behaviour as the training noise is increased.
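
To make the first point concrete, the following sketch is an illustrative toy version of the setting (it is not the paper's model; the network size, dilution level, storage level and training-overlap value are all assumed for illustration). It trains a diluted Hebb-rule network on noise-corrupted copies of its patterns and measures the overlap reached from a weak cue.

```python
# Illustrative sketch only: a diluted Hebb-rule network trained on
# noise-corrupted copies of its patterns, showing how a training overlap
# m_train < 1 can be probed by measuring the final retrieval overlap.
# All parameter values below are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N, P = 400, 20          # neurons, stored patterns (assumed values)
dilution = 0.1          # fraction of couplings kept (assumed)
m_train = 0.8           # training overlap: each training example agrees with
                        # its pattern on a fraction (1 + m_train)/2 of the sites

patterns = rng.choice([-1, 1], size=(P, N))

# Training examples: flip each bit with probability (1 - m_train)/2
flip = rng.random((P, N)) < (1 - m_train) / 2
examples = patterns * np.where(flip, -1, 1)

# Hebb couplings built from the noisy examples, on a randomly diluted graph
J = examples.T @ examples / N
mask = rng.random((N, N)) < dilution
np.fill_diagonal(mask, False)
J = J * mask

def retrieve(cue, steps=20):
    """Run parallel deterministic dynamics from a cue state."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(J @ s)
        s[s == 0] = 1
    return s

# Start from a cue with overlap ~0.3 with pattern 0 and measure the final overlap
cue_flip = rng.random(N) < (1 - 0.3) / 2
cue = patterns[0] * np.where(cue_flip, -1, 1)
m_final = retrieve(cue) @ patterns[0] / N
print(f"final overlap with stored pattern: {m_final:.2f}")
```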

Secondly, we consider the optimization of network performance, and subsequently of the storage capacity, in the presence of retrieval noise (temperature). This is achieved by adapting the network to an appropriate training overlap, which is determined self-consistently by the optimal attractor overlap. The maximum storage capacity deviates from that of the maximally stable network as the temperature increases, and in the high-temperature regime (T ≥ 0.38 for Gaussian noise) the Hebb-rule network yields the maximum storage capacity. Our analysis demonstrates the principles of specialization and adaptation in neural networks.
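
Here "determined self-consistently" means solving an overlap equation whose fixed point feeds back into the choice of training overlap. The snippet below shows only the fixed-point part, using the textbook low-loading mean-field equation m = tanh(m/T) as a stand-in; the paper's actual self-consistency equations involve the trained coupling distribution and the training overlap, which are not reproduced here.

```python
# Illustrative only: solving an overlap self-consistency equation by fixed-point
# iteration. The equation m = tanh(m / T) is the standard low-loading mean-field
# result and serves as a stand-in for the paper's equations.
import numpy as np

def attractor_overlap(T, m0=0.9, tol=1e-10, max_iter=10_000):
    """Iterate m -> tanh(m / T) to its fixed point (m = 0 for T >= 1)."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for T in (0.2, 0.5, 0.9, 1.1):
    print(f"T = {T:.1f}: attractor overlap m = {attractor_overlap(T):.3f}")
```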




Editor information

Luis Garrido


Copyright information

© 1990 Springer-Verlag

About this paper

Cite this paper

Wong, K.Y.M., Sherrington, D. (1990). Tailoring the performance of attractor neural networks. In: Garrido, L. (eds) Statistical Mechanics of Neural Networks. Lecture Notes in Physics, vol 368. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3540532676_44


  • DOI: https://doi.org/10.1007/3540532676_44


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-53267-5

  • Online ISBN: 978-3-540-46808-0

