Training Dynamic Neural Networks Using the Extended Kalman Filter for Multi-Step-Ahead Predictions

  • Conference paper
Artificial Neural Networks

Part of the book series: Springer Series in Bio-/Neuroinformatics ((SSBN,volume 4))

Abstract

This paper addresses single-step-ahead and multi-step-ahead time series prediction problems. We consider feedforward and recurrent neural network architectures together with different derivative calculation and optimization methods, and analyze their advantages and disadvantages. We propose a novel method for training feedforward neural networks with tapped delay lines that improves multi-step-ahead predictions. A special mini-batch calculation of derivatives, called Forecasted Propagation Through Time, is introduced for the Extended Kalman Filter training method. Experiments on well-known benchmark time series are presented.
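
To make the abstract's setting concrete, here is a minimal sketch (in Python with NumPy, not taken from the paper) of Extended Kalman Filter training of a small feedforward network with a tapped delay line, followed by multi-step-ahead prediction that feeds each forecast back into the delay line. The architecture sizes, noise covariances, numerical Jacobian, and toy sine series are all illustrative assumptions; the paper's Forecasted Propagation Through Time derivative scheme is not reproduced here.

# Minimal EKF training sketch for a tapped-delay-line feedforward predictor.
# Everything here (sizes, Q, R, the sine series) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

D, H = 5, 8                       # delay-line length, hidden units
n_w = H * D + H + H + 1           # total number of weights

def unpack(w):
    """Split the flat weight vector into layer parameters."""
    i = 0
    W1 = w[i:i + H * D].reshape(H, D); i += H * D
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    return W1, b1, W2, w[i]

def forward(w, x):
    """Scalar prediction from the last D samples in x."""
    W1, b1, W2, b2 = unpack(w)
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(w, x, eps=1e-6):
    """Numerical dy/dw (analytic backprop derivatives would be faster)."""
    y0 = forward(w, x)
    J = np.zeros(n_w)
    for k in range(n_w):
        wp = w.copy(); wp[k] += eps
        J[k] = (forward(wp, x) - y0) / eps
    return J

# EKF state: the weights themselves plus their error covariance.
w = rng.normal(scale=0.1, size=n_w)
P = 100.0 * np.eye(n_w)
R = 0.01                          # measurement-noise variance (tuning knob)
Q = 1e-6 * np.eye(n_w)            # process-noise covariance (tuning knob)

series = np.sin(0.3 * np.arange(300))   # toy stand-in for a benchmark series

for t in range(D, len(series)):         # single-step-ahead EKF training
    x, target = series[t - D:t], series[t]
    Hj = jacobian(w, x)                 # linearization of the output map
    S = Hj @ P @ Hj + R                 # innovation variance (scalar)
    K = (P @ Hj) / S                    # Kalman gain
    w = w + K * (target - forward(w, x))
    P = P - np.outer(K, Hj @ P) + Q

# Multi-step-ahead prediction: feed each forecast back into the delay line.
window = list(series[-D:])
forecast = []
for _ in range(20):
    y = float(forward(w, np.array(window)))
    forecast.append(y)
    window = window[1:] + [y]
print(forecast[:5])

In this EKF view, each weight update treats the network output as a nonlinear measurement of the weight vector, so R and Q act as tuning knobs that trade off tracking speed against noise sensitivity.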

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Chernodub, A. (2015). Training Dynamic Neural Networks Using the Extended Kalman Filter for Multi-Step-Ahead Predictions. In: Koprinkova-Hristova, P., Mladenov, V., Kasabov, N. (eds) Artificial Neural Networks. Springer Series in Bio-/Neuroinformatics, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-319-09903-3_11

  • DOI: https://doi.org/10.1007/978-3-319-09903-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-09902-6

  • Online ISBN: 978-3-319-09903-3

  • eBook Packages: Engineering (R0)
