
An Approach to Design Growing Echo State Networks

  • Conference paper
  • In: Intelligent Data Engineering and Automated Learning – IDEAL 2016

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 9937)

Abstract

Echo State Networks (ESNs) have attracted wide attention for their superior performance in time series prediction. However, it is difficult to design an ESN that matches a given application. In this paper, an approach is proposed for designing growing echo state networks. The basic idea is to build a growing reservoir with multiple sub-reservoirs by adding hidden units to the network group by group. First, several sub-reservoirs are constructed synchronously using the singular value decomposition. Then, every sub-reservoir is evaluated and the best one is added to the network. Finally, two time series are used to validate the proposed approach.
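The abstract describes a generate-score-select growth loop: build several candidate sub-reservoirs via SVD, evaluate each, and append the winner. Below is a minimal NumPy sketch of such a loop, assuming scalar inputs, block-diagonal growth, and a ridge-regression readout as the scoring rule; the function names, the particular SVD rescaling, and the input-weight initialization are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def make_subreservoir(size, sv_max=0.9, rng=None):
    # Candidate sub-reservoir built via SVD: draw a random matrix, then
    # rescale its singular values so the largest stays below 1, which is
    # a sufficient condition for the echo state property. Illustrative
    # construction; the paper's exact SVD scheme may differ.
    rng = rng if rng is not None else np.random.default_rng()
    U, s, Vt = np.linalg.svd(rng.standard_normal((size, size)))
    return U @ np.diag(s / s.max() * sv_max) @ Vt

def collect_states(W, W_in, inputs, washout=100):
    # Drive the reservoir (tanh units, scalar input assumed) and return
    # the states after the washout period.
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)[washout:]

def grow_once(W, W_in, inputs, targets, block=10, n_candidates=5,
              ridge=1e-6, rng=None):
    # One growth step: construct several candidate sub-reservoirs,
    # score each by the training error of a ridge-regression readout,
    # and keep the best. Block-diagonal growth and this scoring rule
    # are assumptions made for the sketch.
    rng = rng if rng is not None else np.random.default_rng()
    best = None
    n = W.shape[0]
    for _ in range(n_candidates):
        W_cand = np.zeros((n + block, n + block))
        W_cand[:n, :n] = W
        W_cand[n:, n:] = make_subreservoir(block, rng=rng)
        W_in_cand = np.concatenate([W_in, rng.uniform(-0.5, 0.5, block)])
        X = collect_states(W_cand, W_in_cand, inputs)
        y = targets[-len(X):]
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n + block), X.T @ y)
        mse = np.mean((X @ W_out - y) ** 2)
        if best is None or mse < best[0]:
            best = (mse, W_cand, W_in_cand)
    return best  # (error, grown reservoir, grown input weights)
```

Starting from an empty reservoir (`W = np.zeros((0, 0))`, `W_in = np.zeros(0)`), repeated calls to `grow_once` append one selected sub-reservoir per step, so the network grows group by group until the error stops improving.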


Notes

  1. According to reference [20], the column vectors of H can be made full-rank with probability one.

  2. The size, sparsity, and spectral radius of the reservoirs are optimized by conventional grid search (a minimal sketch follows below).
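The tuning step in footnote 2 might look like the following minimal sketch; `validation_error` is a hypothetical stand-in for training an ESN with each setting and measuring its prediction error, and the grid values are illustrative.

```python
from itertools import product

def validation_error(size, sparsity, spectral_radius):
    # Hypothetical stand-in: in practice, train an ESN with these
    # settings and return its validation MSE. A dummy score keeps
    # the sketch runnable.
    return abs(spectral_radius - 0.8) + abs(size - 100) / 1000

# Exhaustive grid over the three hyper-parameters named in the note.
best_cfg = min(product((50, 100, 200),      # reservoir size
                       (0.05, 0.1, 0.2),    # connection sparsity
                       (0.7, 0.8, 0.9)),    # spectral radius
               key=lambda cfg: validation_error(*cfg))
```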

References

  1. Schäfer, A.M., Zimmermann, H.G.: Recurrent neural networks are universal approximators. Int. J. Neural Syst. 17(4), 253–263 (2007)

  2. Jaeger, H.: Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and echo state network approach. Technical report 159, German National Research Center for Information Technology, St. Augustin, Germany (2002)

  3. Doya, K.: Recurrent networks: learning algorithms. In: Arbib, M.A. (ed.) The Handbook of Brain Theory and Neural Networks, pp. 955–960. MIT Press, Cambridge (2003)

  4. Sutskever, I.: Training recurrent neural networks. Dissertation, University of Toronto (2013)

  5. Li, D.C., Han, M., Wang, J.: Chaotic time series prediction based on a novel robust echo state network. IEEE Trans. Neural Netw. Learn. Syst. 23(5), 787–799 (2012)

  6. Zhang, B., Miller, D.J., Wang, Y.: Nonlinear system modeling with random matrices: echo state networks revisited. IEEE Trans. Neural Netw. Learn. Syst. 23(1), 175–182 (2012)

  7. Jaeger, H., Haas, H.: Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304(5667), 78–80 (2004)

  8. Jaeger, H.: The echo state approach to analyzing and training recurrent neural networks. Technical report 148, German National Research Center for Information Technology, St. Augustin, Germany (2001)

  9. Rao, J.S.: Optimization. In: Rao, J.S. (ed.) History of Rotating Machinery Dynamics. HMMS, vol. 20, pp. 341–351. Springer, Heidelberg (2011)

  10. Pan, Y.P., Wang, J.: Model predictive control of unknown nonlinear dynamical systems based on recurrent neural networks. IEEE Trans. Ind. Electron. 59(8), 3089–3101 (2012)

  11. Skowronski, M.D., Harris, J.G.: Automatic speech recognition using a predictive echo state network classifier. Neural Netw. 20(3), 414–423 (2007)

  12. Xia, Y., Jelfs, B., Van Hulle, M.M., et al.: An augmented echo state network for nonlinear adaptive filtering of complex noncircular signals. IEEE Trans. Neural Netw. 22(1), 74–83 (2011)

  13. Shi, Z.W., Han, M.: Support vector echo-state machine for chaotic time-series prediction. IEEE Trans. Neural Netw. 18(2), 359–372 (2007)

  14. Lukoševičius, M., Jaeger, H., Schrauwen, B.: Reservoir computing trends. KI – Künstliche Intelligenz 26(4), 365–371 (2012)

  15. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3(3), 127–149 (2009)

  16. Strauss, T., Wustlich, W., Labahn, R.: Design strategies for weight matrices of echo state networks. Neural Comput. 24(12), 3246–3276 (2012)

  17. Rodan, A., Tino, P.: Minimum complexity echo state network. IEEE Trans. Neural Netw. 22(1), 131–144 (2011)

  18. Deng, Z., Zhang, Y.: Collective behavior of a small-world recurrent neural system with scale-free distribution. IEEE Trans. Neural Netw. 18(5), 1364–1375 (2007)

  19. Xue, Y., Yang, L., Haykin, S.: Decoupled echo state networks with lateral inhibition. Neural Netw. 20(3), 365–376 (2007)

  20. Qiao, J., Li, F., Han, H., et al.: Growing echo-state network with multiple subreservoirs. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–14 (2016)


Author information

Corresponding author

Correspondence to Li Fan-jun.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Fan-jun, L., Ying, L. (2016). An Approach to Design Growing Echo State Networks. In: Yin, H., et al. (eds.) Intelligent Data Engineering and Automated Learning – IDEAL 2016. Lecture Notes in Computer Science, vol. 9937. Springer, Cham. https://doi.org/10.1007/978-3-319-46257-8_24

  • DOI: https://doi.org/10.1007/978-3-319-46257-8_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46256-1

  • Online ISBN: 978-3-319-46257-8

  • eBook Packages: Computer Science (R0)
