Abstract
This paper presents a set of techniques for generating a class of testbeds that can be used to test recurrent neural networks' capability to integrate information in time. In particular, the testbeds allow evaluating the capability of such models, and possibly of other architectures and algorithms, to (a) categorize different time series, (b) anticipate future signal levels on the basis of past ones, and (c) function robustly with respect to noise and other systematic random variations of the temporal and spatial properties of the input time series. The paper also presents a number of analysis tools that can be used to understand how recurrent neural networks develop and organize the dynamical internal representations underlying these capabilities, including representations of periodicity, repetitions, spikes, and the levels and rates of change of input signals. The utility of the proposed testbeds is illustrated by testing and studying the capacity of Elman neural networks to predict and categorize different signals in two exemplary tasks.
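As a concrete illustration of the kind of model the abstract refers to, the following is a minimal sketch of an Elman network trained on a next-step signal-prediction task. It is not the paper's actual testbed: the sine-wave signal, network sizes, learning rate, and training schedule are all illustrative assumptions. Following Elman (1990), the context units hold the previous hidden state and are treated as fixed inputs during backpropagation (no backpropagation through time).

```python
import numpy as np

# Minimal Elman network sketch (illustrative assumptions throughout:
# signal, sizes, learning rate are NOT taken from the paper).
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 10, 1
W_xh = rng.normal(0.0, 0.5, (n_hid, n_in))   # input -> hidden
W_hh = rng.normal(0.0, 0.5, (n_hid, n_hid))  # context -> hidden
b_h = np.zeros(n_hid)
W_hy = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output
b_y = np.zeros(n_out)

# Toy periodic input signal; the task is one-step-ahead prediction.
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
lr = 0.05

for epoch in range(200):
    h = np.zeros(n_hid)  # context units reset at the start of each pass
    for t in range(len(signal) - 1):
        x = signal[t:t + 1]
        target = signal[t + 1:t + 2]
        h_new = np.tanh(W_xh @ x + W_hh @ h + b_h)
        y = W_hy @ h_new + b_y
        err = y - target
        # Gradients of squared error; the context h is treated as a
        # constant input, as in Elman's original training scheme.
        dh = (W_hy.T @ err) * (1.0 - h_new ** 2)
        W_hy -= lr * np.outer(err, h_new)
        b_y -= lr * err
        W_xh -= lr * np.outer(dh, x)
        W_hh -= lr * np.outer(dh, h)
        b_h -= lr * dh
        h = h_new

# Evaluate: one-step predictions should track the signal.
h = np.zeros(n_hid)
preds = []
for t in range(len(signal) - 1):
    h = np.tanh(W_xh @ signal[t:t + 1] + W_hh @ h + b_h)
    preds.append((W_hy @ h + b_y)[0])
mse = float(np.mean((np.array(preds) - signal[1:]) ** 2))
print(f"prediction MSE: {mse:.4f}")
```

The same recurrent core can serve the categorization tasks the abstract mentions by replacing the linear readout with a classification layer over the final hidden state.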
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Zappacosta, S., Nolfi, S., Baldassarre, G. (2007). A Testbed for Neural-Network Models Capable of Integrating Information in Time. In: Butz, M.V., Sigaud, O., Pezzulo, G., Baldassarre, G. (eds) Anticipatory Behavior in Adaptive Learning Systems. ABiALS 2006. Lecture Notes in Computer Science(), vol 4520. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74262-3_11
Print ISBN: 978-3-540-74261-6
Online ISBN: 978-3-540-74262-3