Memory Capacity of Input-Driven Echo State Networks at the Edge of Chaos
Reservoir computing provides a promising approach to efficient training of recurrent neural networks by exploiting the computational properties of the reservoir structure. Various approaches, ranging from suitable initialization to reservoir optimization by training, have been proposed. In this paper we take a closer look at short-term memory capacity, introduced by Jaeger in the case of echo state networks. Memory capacity has recently been investigated with respect to criticality, the so-called edge of chaos, at which the network switches from a stable to an unstable dynamic regime. We calculate the memory capacity of the networks for various input data sets, both random and structured, and show how the data distribution affects network performance. We also investigate the effect of reservoir sparsity in this context.
Keywords: echo state network, memory capacity, edge of chaos
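Jaeger's short-term memory capacity sums, over all delays k, the squared correlation between the delayed input u(t-k) and its best linear reconstruction from the reservoir state; for an N-unit network it is bounded by N. The following sketch estimates it for a standard tanh echo state network driven by i.i.d. uniform input. Reservoir size, input scaling, and sample counts are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50             # reservoir size (illustrative)
T_washout = 200    # initial transient discarded
T_train = 2000     # samples used to fit the delay readouts
max_delay = 2 * N  # delays k = 1 .. 2N, enough for MC_k to decay

# Random reservoir rescaled to spectral radius 0.9 (stable regime)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.1, 0.1, size=N)

# i.i.d. uniform input, as in Jaeger's definition of memory capacity
T = T_washout + T_train
u = rng.uniform(-1, 1, size=T)

# Collect reservoir states x(t) = tanh(W x(t-1) + W_in u(t))
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x
X = X[T_washout:]

# MC_k: squared correlation between u(t-k) and the linear readout
# y_k(t) fitted by least squares; MC is the sum over delays k.
mc = 0.0
for k in range(1, max_delay + 1):
    target = u[T_washout - k : T - k]
    w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
    y = X @ w_out
    r = np.corrcoef(y, target)[0, 1]
    mc += r ** 2

print(f"estimated memory capacity: {mc:.2f} (theoretical bound {N})")
```

In practice the estimate depends on the spectral radius: pushing it toward 1 (the edge of chaos) typically lengthens the usable memory, which is the regime the paper examines.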
- 6. Jaeger, H.: Short term memory in echo state networks. Tech. Rep. GMD Report 152, German National Research Center for Information Technology (2002)
- 7. Jaeger, H.: Echo state network. Scholarpedia 2(9) (2007)
- 8. Legenstein, R., Maass, W.: What makes a dynamical system computationally powerful? In: New Directions in Statistical Signal Processing: From Systems to Brain, pp. 127–154. MIT Press (2007)
- 11. Rodan, A., Tiňo, P.: Minimum complexity echo state network. IEEE Transactions on Neural Networks 21(1), 131–144 (2011)
- 12. Schrauwen, B., Buesing, L., Legenstein, R.: On computational power and the order-chaos phase transition in reservoir computing. In: Advances in Neural Information Processing Systems, pp. 1425–1432 (2009)
- 13. Sprott, J.: Chaos and Time-Series Analysis. Oxford University Press (2003)
- 14. Verstraeten, D., Dambre, J., Dutoit, X., Schrauwen, B.: Memory versus non-linearity in reservoirs. In: International Joint Conference on Neural Networks, pp. 1–8 (2010)