
Learning temporal sequences in recurrent self-organising neural nets

  • Neural Networks
  • Conference paper

Advanced Topics in Artificial Intelligence (AI 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1342)

Abstract

The learning of temporal sequences is an extremely important component of human and animal behaviour. Beyond the motor control involved in routine behaviour such as walking, running, talking and tool use, humans have a remarkable capacity for learning (and subsequently reproducing) temporal sequences. A new connectionist model of temporal sequence learning is described which is based on recurrent self-organising maps. The model is shown to be both powerful and robust, and to exhibit a strong generalisation effect not found in simple recurrent networks (SRNs). The model combines two important developments in artificial neural networks: recursion and self-organising maps (SOMs). Both are found in the primate cortex: topological maps appear to be ubiquitous in the cerebral cortex of higher animals, especially in the primary sensory areas, and the neuroanatomy of the cortex also reveals numerous and consistent recurrent linkages between regions.




Editor information

Abdul Sattar


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Briscoe, G., Caelli, T. (1997). Learning temporal sequences in recurrent self-organising neural nets. In: Sattar, A. (eds) Advanced Topics in Artificial Intelligence. AI 1997. Lecture Notes in Computer Science, vol 1342. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63797-4_96

  • DOI: https://doi.org/10.1007/3-540-63797-4_96

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63797-4

  • Online ISBN: 978-3-540-69649-0
