
Continuous Attractors of Lotka-Volterra Recurrent Neural Networks

  • Conference paper

Artificial Neural Networks – ICANN 2009 (ICANN 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5768)

Abstract

Continuous attractor neural network (CANN) models have been studied in conjunction with many diverse brain functions, including local cortical processing, working memory, and spatial representation. There is good evidence that continuous stimuli, such as orientation, direction of motion, and the spatial location of objects, can be encoded as continuous attractors in neural networks. Despite their wide application to information processing in the brain, the representation and stability analysis of continuous attractors in nonlinear recurrent neural networks (RNNs) have so far received little attention. This paper studies the continuous attractors of Lotka-Volterra (LV) recurrent neural networks. Conditions are given that ensure the network possesses continuous attractors, and an explicit representation of the continuous attractors is obtained under these conditions. Simulations are employed to illustrate the theory.
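To make the setting concrete, the short sketch below simulates one simple instance of a Lotka-Volterra recurrent network whose equilibria form a line, in the spirit of the simulations mentioned above. It assumes the commonly used LV form dx_i/dt = x_i((Wx)_i - x_i + h_i) together with an illustrative rank-one weight matrix W = vv^T and zero input h; these choices are assumptions made for illustration only and are not the conditions derived in the paper.

import numpy as np

# Assumed LV dynamics (illustrative, not the paper's exact construction):
#   dx_i/dt = x_i * ( (W x)_i - x_i + h_i )
# With h = 0 and W = v v^T for a unit vector v > 0, every point on the ray
# { a * v : a >= 0 } is an equilibrium, so the network has a line of
# equilibria that behaves like a continuous attractor.

n = 5
v = 1.0 + 0.2 * np.arange(n)          # strictly positive direction
v /= np.linalg.norm(v)                # normalize so that v . v = 1
W = np.outer(v, v)                    # symmetric rank-one connectivity
h = np.zeros(n)                       # external input switched off

def simulate(x0, dt=1e-2, steps=20_000):
    """Forward-Euler integration of the assumed LV dynamics."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * x * (W @ x - x + h)
    return x

rng = np.random.default_rng(0)
x_a = simulate(0.5 * v + 0.05 * rng.random(n))
x_b = simulate(1.5 * v + 0.05 * rng.random(n))

# Both runs settle on the line spanned by v, but at different points:
# the ratio x / v is (numerically) constant within each run and differs
# between runs, the qualitative signature of a continuous attractor.
print("x_a / v:", np.round(x_a / v, 4))
print("x_b / v:", np.round(x_b / v, 4))

In this toy case the network "remembers" where along the line it started, which is the computational role continuous attractors are thought to play for analogue quantities such as orientation or spatial position.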




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, H., Yu, J., Yi, Z. (2009). Continuous Attractors of Lotka-Volterra Recurrent Neural Networks. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_30

  • DOI: https://doi.org/10.1007/978-3-642-04274-4_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science, Computer Science (R0)
