
Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11731)


Abstract

Deep Echo State Networks (DeepESNs) have recently extended the applicability of Reservoir Computing (RC) methods towards the field of deep learning. In this paper we study the impact of constrained reservoir topologies on the architectural design of deep reservoirs, through numerical experiments on several RC benchmarks. The major outcome of our investigation is the remarkable gain in predictive performance achieved by the synergy between a deep reservoir construction and a structured organization of the recurrent units in each layer. Our results also indicate that a particularly advantageous architectural setting is obtained for DeepESNs whose reservoir units are structured according to a permutation recurrent matrix.
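As a point of reference, the following is a minimal sketch (our own illustrative code, not the authors' released implementation; function and variable names are assumptions) of how a reservoir recurrent matrix structured as a permutation can be built and rescaled to a target spectral radius:

```python
import numpy as np

def permutation_reservoir(n_units, rho, seed=None):
    """Recurrent weight matrix structured as a permutation matrix,
    rescaled to a target spectral radius rho (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_units)
    W = np.zeros((n_units, n_units))
    W[np.arange(n_units), perm] = 1.0  # exactly one non-zero entry per row
    # A permutation matrix has spectral radius 1, so multiplying by rho
    # sets the spectral radius of the reservoir matrix exactly to rho.
    return rho * W
```

In a deep reservoir, one such matrix would be drawn independently for each layer.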


Notes

  1. https://github.com/lucapedrelli/DeepESN.

  2. https://it.mathworks.com/matlabcentral/fileexchange/69402-deepesn.

  3. https://github.com/gallicch/DeepESN_octave.

  4. With the only exception of the case \(L=3\), where the first two layers contained 167 units and the last one contained 166 units.

  5. Performance differences between the DeepESN with permutation topology and all the other architectures are confirmed by a Wilcoxon rank-sum test performed at the 1% significance level on all the tasks (the only exceptions being the comparisons with the ESN using chain topology on Laser, and the ESN using permutation topology on MG30).
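For reference, a Wilcoxon rank-sum comparison of the kind mentioned in note 5 can be run as in the following sketch; the error samples below are placeholders, not the paper's results:

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder per-run test errors (illustrative values only).
rng = np.random.default_rng(0)
errors_deepesn_perm = rng.normal(loc=0.010, scale=0.001, size=10)
errors_competitor = rng.normal(loc=0.012, scale=0.001, size=10)

stat, p_value = ranksums(errors_deepesn_perm, errors_competitor)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.4f}")
print("significant at the 1% level" if p_value < 0.01
      else "not significant at the 1% level")
```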


Author information


Correspondence to Claudio Gallicchio.


A Selected Hyper-parameters

Table 2 reports the DeepESN hyper-parameters selected by model selection for the experiments reported in Sect. 4. The reported values are the following: spectral radius \(\rho \), input scaling \(\omega _{in}\), inter-layer scaling \(\omega _{il}\), and number of layers L. We recall from Sect. 4.1 that the values of \(\rho \) and \(\omega _{il}\) are shared by all the layers. The selected hyper-parametrizations for (shallow) ESNs are given in Table 3, where we report the chosen values of \(\rho \) and \(\omega _{in}\). We also recall from Sect. 4.1 that the total number of reservoir units is set to 500 for both DeepESN and ESN. While in the latter case all 500 units form a single recurrent layer, in the former they are evenly distributed across the layers of the deep reservoir.

Table 2. Selected DeepESN hyper-parameters: spectral radius \(\rho \), input scaling \(\omega _{in}\), inter-layer scaling \(\omega _{il}\), and number of layers L.
Table 3. Selected ESN hyper-parameters: spectral radius \(\rho \) and input scaling \(\omega _{in}\).
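To make the role of these hyper-parameters concrete, the following is a minimal sketch (our own illustrative code, not the authors' implementation) of how 500 reservoir units can be distributed across layers and how a recurrent matrix can be rescaled to a selected spectral radius; the numeric values are placeholders, the selected ones being those in Tables 2 and 3:

```python
import numpy as np

def split_units(total_units, n_layers):
    """Distribute reservoir units as evenly as possible across layers,
    e.g. 500 units with L = 3 gives [167, 167, 166] (cf. footnote 4)."""
    base, rest = divmod(total_units, n_layers)
    return [base + 1 if i < rest else base for i in range(n_layers)]

def scale_spectral_radius(W, rho):
    """Rescale a recurrent matrix so that its spectral radius equals rho."""
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))

# Placeholder hyper-parameter values (not the selected ones from the tables).
rho, omega_in, omega_il, n_layers = 0.9, 0.1, 0.5, 5
layer_sizes = split_units(500, n_layers)  # rho and omega_il shared across layers

# Input weights to the first layer are scaled by omega_in; the weights
# connecting layer l-1 to layer l are scaled by omega_il.
```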

Interestingly, from Table 2 we can observe that constrained reservoir topologies in DeepESNs generally lead to smaller selected values of the spectral radius and to deeper architectures than the basic (i.e., sparse) reservoir setting. Comparing Tables 2 and 3, we also note that the values of spectral radius and input scaling selected for DeepESN and ESN agree closely in all the analyzed reservoir settings.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Gallicchio, C., Micheli, A. (2019). Reservoir Topology in Deep Echo State Networks. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_6


  • DOI: https://doi.org/10.1007/978-3-030-30493-5_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30492-8

  • Online ISBN: 978-3-030-30493-5

  • eBook Packages: Computer Science, Computer Science (R0)
