Abstract
In the context of learning in attractor neural networks (ANNs), we discuss the constraints imposed by the requirement that the afferents arriving at the neurons of the attractor network from the stimulus compete successfully with the afferents generated by the recurrent activity inside the network. We simulate and analyze a two-component network: one component representing the stimulus, the other an ANN. We show that if stimuli are correlated with the receptive fields of neurons in the ANN and are of sufficient contrast, the stimulus can provide the information the recurrent network needs to learn new stimuli, even in the very disfavored situation of synaptic predominance in the recurrent part.
On leave of absence from Racah Institute of Physics
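The competition described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' model; it is a toy Hopfield-style network (Hebbian recurrent couplings, synchronous ±1 dynamics) in which an external stimulus field of adjustable `contrast` is added to the recurrent field. All names (`run`, `overlap`, `contrast`, the network size and pattern count) are illustrative assumptions. With weak contrast the recurrent synapses dominate and the network stays in a stored attractor; with sufficient contrast the stimulus afferents win and impose the new pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200   # neurons in the attractor network
P = 5     # previously stored patterns

# Stored binary (+/-1) patterns and Hebbian recurrent couplings J
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def run(s, stimulus, contrast, steps=30):
    """Synchronous dynamics: recurrent field plus afferent stimulus field."""
    for _ in range(steps):
        h = J @ s + contrast * stimulus   # recurrent vs. afferent competition
        s = np.where(h >= 0, 1, -1)
    return s

def overlap(s, xi):
    """Normalized overlap between a state and a pattern (1 = identical)."""
    return float(s @ xi) / N

new_stim = rng.choice([-1, 1], size=N)   # a novel stimulus pattern
s0 = patterns[0].copy()                  # network starts in a stored attractor

weak = run(s0, new_stim, contrast=0.1)    # recurrent activity dominates
strong = run(s0, new_stim, contrast=2.0)  # afferents dominate

print("weak contrast, overlap with old attractor:", overlap(weak, patterns[0]))
print("strong contrast, overlap with new stimulus:", overlap(strong, new_stim))
```

At low contrast the state remains close to the stored pattern, so the afferent signal never reaches the network's plastic synapses; above a contrast threshold the network is driven onto the new stimulus, which is the precondition for learning it.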
© 1993 Springer-Verlag London Limited
Cite this paper
Amit, D.J., Brunel, N. (1993). Adequate Input for Learning in Attractor Neural Networks. In: Gielen, S., Kappen, B. (eds) ICANN ’93. ICANN 1993. Springer, London. https://doi.org/10.1007/978-1-4471-2063-6_6
Publisher Name: Springer, London
Print ISBN: 978-3-540-19839-0
Online ISBN: 978-1-4471-2063-6