L1/2 Norm Regularized Echo State Network for Chaotic Time Series Prediction

  • Meiling Xu
  • Min Han
  • Shunshoku Kanae
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9949)


An echo state network (ESN) consists of a randomly connected hidden layer (the reservoir) and an adaptable output layer, which lets it avoid the heavy computation and local optima associated with training conventional recurrent networks. However, when the output weights are computed by least-squares estimation from a large reservoir state matrix, the problem can be ill-posed. In this study, we apply L1/2 regularization to the output-weight computation to obtain a sparse solution, alleviating the ill-posedness and improving generalization performance. In addition, iterated prediction is performed to test how well the proposed L1/2ESN captures the dynamics of chaotic time series. Experimental results show that the predictor is properly designed and outperforms other modified ESN models in both sparsity and accuracy.


Keywords: Echo state networks · L1/2 norm regularization · Chaotic time series · Prediction



This work was supported by National Natural Science Foundation of China under Grant 61374154.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
  2. Department of Electrical, Electronic and Computer Engineering, Fukui University of Technology, Fukui, Japan
