Optimised attractor neural networks with external inputs

  • Conference paper
New Trends in Neural Computation (IWANN 1993)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)

Abstract

Attractor neural networks resemble the brain in many key respects, such as their high connectivity, feedback, non-local storage of information, and tolerance to damage. The models are also amenable to calculation using mean field theory, as well as to computer simulation. These methods have made it possible to determine accurately properties such as the capacity of the network, the quality of pattern retrieval, and robustness to damage. In this paper a biologically motivated input method for external stimuli is studied. A straightforward signal-to-noise calculation gives an indication of the properties of the network. Mean field calculations that include the external stimuli are carried out. A threshold is introduced, whose value is chosen to optimize the performance of the network, and sparsely coded patterns are considered. The network is shown to have enhanced capacity, improved quality of retrieval, and increased robustness to the random elimination of neurons.
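To make the setting concrete, here is a minimal Python sketch of an attractor network of this general kind: sparsely coded 0/1 patterns stored with a covariance (Hebbian) rule commonly used for low-activity patterns, a persistent external field proportional to the stimulus pattern, and a fixed firing threshold. The parameter values (coding level a, field strength g, threshold theta) are illustrative assumptions; the paper's specific optimized threshold and input scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500       # number of neurons
P = 20        # number of stored patterns
a = 0.1       # coding level: fraction of active neurons per pattern (assumed)
theta = 0.3   # firing threshold (illustrative; the paper optimizes this value)
g = 0.5       # strength of the external input field (assumed)

# Sparsely coded 0/1 patterns with mean activity a.
xi = (rng.random((P, N)) < a).astype(float)

# Covariance (Hebbian) storage rule for low-activity patterns.
J = (xi - a).T @ (xi - a) / (a * (1 - a) * N)
np.fill_diagonal(J, 0.0)  # no self-coupling

def retrieve(stimulus, steps=20):
    """Parallel threshold dynamics with a persistent external field
    proportional to the stimulus pattern."""
    S = (rng.random(N) < a).astype(float)   # random initial state
    h_ext = g * (stimulus - a)              # external input aligned with the stimulus
    for _ in range(steps):
        S = (J @ S + h_ext > theta).astype(float)
    return S

def overlap(S, pattern):
    """Overlap between the network state and a stored pattern,
    normalized so that perfect retrieval gives 1."""
    return (pattern - a) @ S / (a * (1 - a) * N)

S = retrieve(xi[0])
print(f"overlap with the stimulated pattern: {overlap(S, xi[0]):.2f}")
```

With these settings the overlap settles close to 1. Lowering g, varying theta, or removing a random fraction of neurons (zeroing the corresponding rows and columns of J) gives a simple numerical handle on the capacity, retrieval-quality, and robustness questions the paper treats analytically.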

Editor information

José Mira, Joan Cabestany, Alberto Prieto

Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Burkitt, A.N. (1993). Optimised attractor neural networks with external inputs. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_142

  • DOI: https://doi.org/10.1007/3-540-56798-4_142

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

  • eBook Packages: Springer Book Archive
