Abstract
This paper addresses single-step-ahead and multi-step-ahead time series prediction problems. We consider feedforward and recurrent neural network architectures together with different methods for computing derivatives and optimizing weights, and analyze their advantages and disadvantages. We propose a novel method for training feedforward neural networks with tapped delay lines that improves multi-step-ahead predictions, and we introduce a special mini-batch derivative calculation, called Forecasted Propagation Through Time, for the Extended Kalman Filter training method. Experiments on well-known benchmark time series are presented.
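As a rough illustration of the Extended Kalman Filter training approach the abstract refers to (the generic EKF weight-estimation scheme, not the paper's Forecasted Propagation Through Time variant, whose details are in the full text), the following minimal NumPy sketch trains a small tapped-delay-line feedforward predictor by treating the flattened weight vector as the EKF state. All symbols here (D, H, R, Q) and the noisy-sine series are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal EKF-training sketch. Assumptions: the network sizes D and H, the
# noise constants R and Q, and the noisy-sine data are illustrative only.
rng = np.random.default_rng(0)
D, H = 5, 8                        # tapped-delay-line length, hidden units
n_w = H * (D + 1) + (H + 1)        # total weights, including bias terms

def forward(w, x):
    """Return the scalar prediction y and the Jacobian dy/dw for input x."""
    W1 = w[:H * (D + 1)].reshape(H, D + 1)   # hidden layer (bias in last col)
    W2 = w[H * (D + 1):]                     # linear output layer (with bias)
    xb = np.append(x, 1.0)
    h = np.tanh(W1 @ xb)
    hb = np.append(h, 1.0)
    y = W2 @ hb
    # Backpropagate the linear output to assemble dy/dw.
    dW2 = hb
    dh = W2[:H] * (1.0 - h ** 2)             # through the tanh nonlinearity
    dW1 = np.outer(dh, xb)
    return y, np.concatenate([dW1.ravel(), dW2])

# EKF state: weight estimate w and covariance P; R and Q are tuning constants.
w = 0.1 * rng.standard_normal(n_w)
P = 100.0 * np.eye(n_w)
R, Q = 1.0, 1e-6

series = np.sin(0.2 * np.arange(300)) + 0.05 * rng.standard_normal(300)

for t in range(D, len(series)):
    x, target = series[t - D:t], series[t]
    y, Hj = forward(w, x)                    # Hj plays the role of the EKF H
    s = Hj @ P @ Hj + R                      # innovation variance (scalar)
    K = (P @ Hj) / s                         # Kalman gain
    w = w + K * (target - y)                 # measurement update of weights
    P = P - np.outer(K, Hj @ P) + Q * np.eye(n_w)

print("final one-step error:", float(target - forward(w, x)[0]))
```

Compared with gradient descent, each EKF step uses second-order information carried in the covariance P, which is why EKF training is often reported to converge in far fewer passes over the data, at the cost of the O(n_w^2) covariance update visible in the last line of the loop.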
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Chernodub, A. (2015). Training Dynamic Neural Networks Using the Extended Kalman Filter for Multi-Step-Ahead Predictions. In: Koprinkova-Hristova, P., Mladenov, V., Kasabov, N. (eds) Artificial Neural Networks. Springer Series in Bio-/Neuroinformatics, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-319-09903-3_11
DOI: https://doi.org/10.1007/978-3-319-09903-3_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-09902-6
Online ISBN: 978-3-319-09903-3
eBook Packages: Engineering (R0)