
On the Learning of ESN Linear Readouts

  • Conference paper

Advances in Artificial Intelligence (CAEPIA 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7023)


Abstract

In Echo State Networks (ESN) and, more generally, in the Reservoir Computing paradigm (a recent approach to recurrent neural networks), the linear readout weights, i.e., the linear output weights, are the only ones actually adjusted during training. The standard approach for this is SVD-based pseudo-inverse linear regression. Here it is compared with two well-known on-line filters, Least Mean Squares (LMS) and Recursive Least Squares (RLS). As we shall illustrate, while LMS performance is not satisfactory, RLS can be a good on-line alternative that may deserve further attention.
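
As a concrete illustration of the three readout-learning schemes the abstract mentions, the sketch below fits the linear readout of a toy ESN in three ways: by SVD-based pseudo-inverse regression, by LMS, and by RLS. This is a minimal sketch, not the authors' experimental setup: the reservoir construction, the sine-prediction task, the step size mu, and the RLS parameters lam and delta are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ESN: the reservoir (W_in, W) is fixed and random; only the
# linear readout w is trained, as in the Reservoir Computing paradigm.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Return the sequence of reservoir states for inputs u (T x n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Illustrative task (hypothetical): one-step-ahead sine prediction.
u = np.sin(0.2 * np.arange(1001))[:, None]
X, y = run_reservoir(u[:-1]), u[1:, 0]

# 1) Batch readout: SVD-based pseudo-inverse (ordinary least squares).
w_pinv = np.linalg.pinv(X) @ y

# 2) On-line readout, LMS: cheap stochastic-gradient steps; convergence
#    on correlated reservoir states is typically slow (mu is a
#    hypothetical choice).
mu = 0.01
w_lms = np.zeros(n_res)
for x_t, y_t in zip(X, y):
    w_lms += mu * (y_t - w_lms @ x_t) * x_t

# 3) On-line readout, RLS: recursively tracks the exact least-squares
#    solution (forgetting factor lam and initial scale delta are
#    hypothetical choices).
lam, delta = 0.999, 1.0
w_rls = np.zeros(n_res)
P = np.eye(n_res) / delta
for x_t, y_t in zip(X, y):
    k = P @ x_t / (lam + x_t @ P @ x_t)    # gain vector
    w_rls += k * (y_t - w_rls @ x_t)       # a priori error correction
    P = (P - np.outer(k, x_t @ P)) / lam   # inverse-covariance update

for name, w in [("pinv", w_pinv), ("LMS", w_lms), ("RLS", w_rls)]:
    print(name, "train MSE:", np.mean((X @ w - y) ** 2))
```

The usual trade-off applies: each RLS step costs O(n_res^2) against LMS's O(n_res), which is the price paid for its much faster convergence toward the batch least-squares solution.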

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Alaíz, C.M., Dorronsoro, J.R. (2011). On the Learning of ESN Linear Readouts. In: Lozano, J.A., Gámez, J.A., Moreno, J.A. (eds) Advances in Artificial Intelligence. CAEPIA 2011. Lecture Notes in Computer Science (LNAI), vol. 7023. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25274-7_13

  • DOI: https://doi.org/10.1007/978-3-642-25274-7_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25273-0

  • Online ISBN: 978-3-642-25274-7

  • eBook Packages: Computer Science, Computer Science (R0)
