Multi-step Time Series Forecasting Using Ridge Polynomial Neural Network with Error-Output Feedbacks
Time series forecasting has received much attention due to its impact on many practical applications. A higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for forecasting: it maintains fast learning and the ability to learn the dynamics of the series over time. Motivated by this, we propose a novel model, called the Ridge Polynomial Neural Network with Error-Output Feedbacks (RPNN-EOF), which combines three powerful properties: higher-order terms, output feedback, and error feedback. The well-known Mackey–Glass time series is used to evaluate the forecasting capability of the RPNN-EOF. Results show that the proposed RPNN-EOF captures the Mackey–Glass time series better than other models in the literature, achieving a root mean square error of 0.00416. We therefore conclude that the RPNN-EOF can be applied successfully to time series forecasting. Furthermore, the error-output feedbacks can be investigated and applied with other neural network models.
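The benchmark and metric above can be illustrated with a minimal sketch. The Mackey–Glass series is the delay differential equation dx/dt = a·x(t−τ)/(1 + x(t−τ)^10) − b·x(t); the code below integrates it with a simple Euler scheme and computes a root mean square error. The parameter values (a = 0.2, b = 0.1, τ = 17) are the commonly used chaotic setting, not values stated in this paper, and the function names are illustrative only.

```python
import numpy as np

def mackey_glass(n_samples=1000, tau=17, a=0.2, b=0.1, dt=1.0,
                 x0=1.2, washout=500):
    """Generate a Mackey-Glass series by Euler integration of
    dx/dt = a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t).
    Parameters are the usual chaotic setting (an assumption,
    not taken from the paper)."""
    total = n_samples + washout
    history = int(tau / dt)          # number of delayed steps
    x = np.full(total + history, x0)  # constant initial history
    for t in range(history, total + history - 1):
        x_tau = x[t - history]        # delayed value x(t - tau)
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau**10) - b * x[t])
    return x[history + washout:]      # discard transient

def rmse(y_true, y_pred):
    """Root mean square error between two equal-length sequences."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

series = mackey_glass()
# naive persistence baseline: predict x(t+1) = x(t)
baseline_rmse = rmse(series[1:], series[:-1])
print(baseline_rmse)
```

The persistence baseline gives a rough reference error; a forecasting model such as the RPNN-EOF would be trained to drive the RMSE well below it.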
Keywords: Time series forecasting · Ridge polynomial neural network with error-output feedbacks · Higher order neural networks · Recurrent neural networks · Mackey–Glass equation
The authors would like to thank Universiti Tun Hussein Onn Malaysia (UTHM) and Ministry of Higher Education (MOHE) Malaysia for financially supporting this research under the Fundamental Research Grant Scheme (FRGS), Vote No. 1235.