
A Testbed for Neural-Network Models Capable of Integrating Information in Time

  • Conference paper
Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4520)


Abstract

This paper presents a set of techniques for generating a class of testbeds with which to assess the capability of recurrent neural networks to integrate information in time. In particular, the testbeds allow one to evaluate whether such models, and possibly other architectures and algorithms, can (a) categorize different time series, (b) anticipate future signal levels on the basis of past ones, and (c) function robustly under noise and other systematic random variations of the temporal and spatial properties of the input time series. The paper also presents a number of analysis tools for understanding the functioning and organization of the dynamical internal representations that recurrent neural networks develop in order to acquire these capabilities, which involve properties of the input signals such as periodicity, repetitions, spikes, and levels and rates of change. The utility of the proposed testbeds is illustrated by testing and studying the capacity of Elman neural networks to predict and categorize different signals in two exemplary tasks.
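The paper itself is not reproduced on this page, so as a rough illustration of the kind of model and task described in the abstract, the sketch below implements a minimal Elman network trained to predict the next value of a noisy sine wave, used here as a stand-in for the paper's parameterized testbed signals. The signal, network size, learning rate, and training scheme (plain backpropagation with the context units treated as fixed extra inputs, as in Elman's original formulation) are all assumptions for illustration, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Testbed signal (assumption): a noisy sine wave standing in for the
# paper's parameterized families of input time series.
T = 2000
t = np.arange(T)
signal = 0.5 + 0.4 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.02, T)

# Elman network: one input unit, H hidden units with a context copy,
# one linear output unit for one-step-ahead prediction.
H = 10
W_in = rng.normal(0, 0.5, (H, 1))    # input -> hidden weights
W_ctx = rng.normal(0, 0.5, (H, H))   # context -> hidden weights
W_out = rng.normal(0, 0.5, (1, H))   # hidden -> output weights
b_h = np.zeros(H)
b_o = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.1
context = np.zeros(H)
for epoch in range(20):
    context[:] = 0.0
    err = 0.0
    for step in range(T - 1):
        x = signal[step]
        target = signal[step + 1]          # predict the next signal level
        h = sigmoid(W_in[:, 0] * x + W_ctx @ context + b_h)
        y = W_out @ h + b_o
        e = y[0] - target
        err += e * e
        # Standard backprop; the context is treated as a fixed extra
        # input, so gradients are not propagated back through time
        # (as in Elman, 1990).
        dh = (W_out[0] * e) * h * (1.0 - h)
        W_out -= lr * (e * h)[None, :]
        b_o -= lr * e
        W_in[:, 0] -= lr * dh * x
        W_ctx -= lr * np.outer(dh, context)
        b_h -= lr * dh
        context = h                        # copy hidden state into context
    print(f"epoch {epoch}: MSE = {err / (T - 1):.5f}")
```

A categorization variant of the same sketch would add one output unit per signal class and train on a cross-entropy target instead of the next signal value; the robustness tests described in the abstract would then correspond to perturbing the amplitude, period, and noise level of the generated signals.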




Author information

Authors: S. Zappacosta, S. Nolfi, G. Baldassarre

Editor information

Editors: Martin V. Butz, Olivier Sigaud, Giovanni Pezzulo, Gianluca Baldassarre


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zappacosta, S., Nolfi, S., Baldassarre, G. (2007). A Testbed for Neural-Network Models Capable of Integrating Information in Time. In: Butz, M.V., Sigaud, O., Pezzulo, G., Baldassarre, G. (eds.) Anticipatory Behavior in Adaptive Learning Systems. ABiALS 2006. Lecture Notes in Computer Science (LNAI), vol. 4520. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74262-3_11


  • DOI: https://doi.org/10.1007/978-3-540-74262-3_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74261-6

  • Online ISBN: 978-3-540-74262-3

  • eBook Packages: Computer Science, Computer Science (R0)
