Learning Hierarchical Weather Data Representation for Short-Term Weather Forecasting Using Autoencoder and Long Short-Term Memory Models

  • Yaya Heryadi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11431)

Abstract

Weather prediction remains a challenging machine learning problem, even though forecasting models are already used in many applications such as air and sea transportation. The growing demand for safer transportation, which requires robust weather prediction models, has motivated the development of a large number of such models. In the past decade, the advent of deep learning methods has opened a new approach to weather prediction in two main areas: automatically learning hierarchical representations of weather data and building robust prediction models. This paper presents a method for automatic feature extraction from weather time series data using an autoencoder model. The learned weather representation was then used to train a Long Short-Term Memory (LSTM) model as a predictor, or regressor. Although the approach can be applied to many other weather variables, in this study the proposed model was tested on predicting temperature, dew point, and humidity. Measured by training and testing RMSE, the results are as follows: for temperature, the AE90-LSTM model achieved (0.00003, 0.00010), and for dew point, the AE199-LSTM model achieved (0.00005, 0.00010). Interestingly, for humidity, the 100LSTM model (0.00004, 0.00001) and the AE100-LSTM model (0.00001, 0.00008) achieved almost identical performance.
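The two-stage pipeline described above can be sketched as follows. This is a minimal Keras illustration, not the authors' implementation: an autoencoder learns a compressed representation of each weather observation, and an LSTM regressor is trained on the encoded sequence. The layer sizes, the 24-step window, the choice of temperature as the target, and the random placeholder data are all assumptions for illustration only.

# Minimal sketch of an AE -> LSTM pipeline (not the authors' code).
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model, Sequential

n_features, code_dim, window = 4, 2, 24  # e.g. temperature, dew point, humidity, pressure

# Autoencoder: reconstruct the input through a low-dimensional bottleneck.
inp = Input(shape=(n_features,))
code = Dense(code_dim, activation="relu")(inp)
out = Dense(n_features, activation="linear")(code)
autoencoder = Model(inp, out)
encoder = Model(inp, code)  # keeps only the learned representation
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, n_features)  # stand-in for a scaled weather series
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# Encode the series, then slice it into fixed-length windows for
# sequence-to-one regression (here: predict the next temperature value).
Z = encoder.predict(X, verbose=0)
X_seq = np.stack([Z[i:i + window] for i in range(len(Z) - window)])
y = X[window:, 0]

# LSTM regressor trained on the learned representation; RMSE is the
# square root of the MSE loss minimized during training.
lstm = Sequential([
    Input(shape=(window, code_dim)),
    LSTM(100),
    Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_seq, y, epochs=10, batch_size=32, verbose=0)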

Keywords

Autoencoder · Long Short-Term Memory · Weather prediction


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Computer Science Department, BINUS Graduate Program – Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia
