
Hopfield-Type Associative Memory with Sparse Modular Networks

  • Conference paper
Neural Information Processing (ICONIP 2014)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8834)

Abstract

Modular structures are ubiquitous in the brain and in neural networks. Inspired by these biological networks, we explore Hopfield-type recurrent neural networks with sparse modular connectivity for associative memory. We first show that an iterative learning algorithm, which determines the connection weights depending on the network topology, yields better performance than the one-shot learning rule. We then examine the topological factors that govern the memory capacity of the sparse modular neural network. Numerical results suggest that uniformity in the number of connections per neuron is an essential condition for good performance. We also discuss a method for designing an energy-efficient neural network.
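The setup the abstract describes can be sketched in code. The following is a minimal illustration, not the authors' implementation: the module sizes, connection probabilities, function names, and the perceptron-style iterative rule (used here as a stand-in for "iterative learning in the spirit of local learning rules") are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def modular_mask(n_modules=4, module_size=25, p_intra=0.5, p_inter=0.05):
    """Symmetric 0/1 connectivity: dense within modules, sparse between.
    Parameters are illustrative, not taken from the paper."""
    n = n_modules * module_size
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            same_module = (i // module_size) == (j // module_size)
            if rng.random() < (p_intra if same_module else p_inter):
                mask[i, j] = mask[j, i] = True
    return mask

def hebbian_weights(patterns, mask):
    """One-shot (Hebbian) rule, restricted to the existing connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W * mask

def iterative_weights(patterns, mask, epochs=50, margin=1.0):
    """Perceptron-style iterative local rule, masked by the topology.
    Masking after each epoch is a simplification and can perturb
    the usual stability guarantee, hence the epoch cap."""
    P, n = patterns.shape
    W = np.zeros((n, n))
    for _ in range(epochs):
        stable = True
        for xi in patterns:
            h = W @ xi
            unstable = xi * h < margin        # neurons below the margin
            if unstable.any():
                stable = False
                W[unstable] += np.outer(xi[unstable], xi) / n
        W *= mask                            # keep weights on the sparse graph
        np.fill_diagonal(W, 0.0)
        if stable:
            break
    return W

def recall(W, state, steps=20):
    """Synchronous Hopfield update until a fixed point or the step limit."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state
```

With this sketch, one would store a few random ±1 patterns, corrupt one by flipping a fraction of its bits, and compare the overlap of the recalled state with the stored pattern under the one-shot and iterative rules; degree uniformity can be probed by varying `p_intra`/`p_inter` while holding the mean degree fixed.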



Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Tanaka, G., Yamane, T., Nakano, D., Nakane, R., Katayama, Y. (2014). Hopfield-Type Associative Memory with Sparse Modular Networks. In: Loo, C.K., Yap, K.S., Wong, K.W., Teoh, A., Huang, K. (eds) Neural Information Processing. ICONIP 2014. Lecture Notes in Computer Science, vol 8834. Springer, Cham. https://doi.org/10.1007/978-3-319-12637-1_32

  • DOI: https://doi.org/10.1007/978-3-319-12637-1_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-12636-4

  • Online ISBN: 978-3-319-12637-1
