
Comparison of Deep Neural Networks and Deep Hierarchical Models for Spatio-Temporal Data

Christopher K. Wikle

Abstract

Spatio-temporal data are ubiquitous in the agricultural, ecological, and environmental sciences, and their study is important for understanding and predicting a wide variety of processes. Among the difficulties in modeling spatial processes that change in time are the complexity of the dependence structures needed to describe how such a process varies, together with high-dimensional complex datasets and large prediction domains. It is particularly challenging to specify parameterizations for nonlinear dynamic spatio-temporal models (DSTMs) that are simultaneously useful scientifically and efficient computationally. Statisticians have developed multi-level (deep) hierarchical models that can accommodate process complexity as well as the uncertainties in the predictions and inference. However, these models can be computationally expensive and are typically application-specific. On the other hand, the machine learning community has developed alternative “deep learning” approaches for nonlinear spatio-temporal modeling. These models are flexible yet are typically not implemented in a probabilistic framework. The two paradigms have much in common, suggesting hybrid approaches that can benefit from elements of each framework. This overview paper presents a brief introduction to the multi-level (deep) hierarchical DSTM (H-DSTM) framework and to deep models in machine learning, culminating with the deep neural DSTM (DN-DSTM). Recent approaches that combine elements from H-DSTMs and echo state network DN-DSTMs are presented as illustrations. Supplementary materials accompanying this paper appear online.
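To make the two frameworks in the abstract concrete: the hierarchical (H-DSTM) approach rests on the familiar three-stage Bayesian decomposition into a data model, a process model, and a parameter model,

  [Z \mid Y, \theta_D] \, [Y \mid \theta_P] \, [\theta_D, \theta_P],

where Z denotes the observed data, Y the latent spatio-temporal process, and \theta_D, \theta_P the respective parameters. On the machine learning side, the echo state network (ESN) underlying the DN-DSTM illustrations holds a large, randomly drawn recurrent “reservoir” fixed and estimates only a linear readout. The following Python sketch of a basic ESN is purely illustrative; the dimensions, scalings, and data are placeholders, and it is not the paper’s specific ESN DN-DSTM:

  import numpy as np

  # Minimal echo state network (ESN) sketch: reservoir update
  # h_t = tanh(W h_{t-1} + W_in x_t), with W and W_in fixed after random
  # initialization; only the linear readout V is fit (ridge regression).
  rng = np.random.default_rng(0)
  n_h, n_x = 200, 10                                # illustrative dimensions
  W = rng.normal(size=(n_h, n_h))
  W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius < 1
  W_in = rng.uniform(-0.1, 0.1, size=(n_h, n_x))

  def reservoir_states(X):
      """Run an input sequence X (T x n_x) through the fixed reservoir."""
      h, H = np.zeros(n_h), np.empty((X.shape[0], n_h))
      for t, x in enumerate(X):
          h = np.tanh(W @ h + W_in @ x)
          H[t] = h
      return H

  X = rng.normal(size=(500, n_x))                   # placeholder input series
  Y = np.roll(X, -1, axis=0)                        # placeholder 1-step targets
  H = reservoir_states(X)
  lam = 1e-2                                        # ridge penalty
  V = np.linalg.solve(H.T @ H + lam * np.eye(n_h), H.T @ Y)
  Y_hat = H @ V                                     # in-sample forecasts

Because only V is fit, training reduces to a single regularized least-squares problem; the uncertainty quantification discussed in the paper comes from embedding such reservoirs in hierarchical or ensemble frameworks, not from this sketch alone.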

Keywords

Bayesian; Convolutional neural network; CNN; Dynamic model; Echo state network; ESN; Recurrent neural network; RNN

Acknowledgements

This work was partially supported by the US National Science Foundation (NSF) and the US Census Bureau under NSF Grant SES-1132031, funded through the NSF-Census Research Network (NCRN) program, and NSF Award DMS-1811745. The author would like to thank Brian Reich for encouraging the writing of this paper, Patrick McDermott for helpful discussions, Nathan Wikle for providing helpful comments on an early draft, and Jennifer Hoeting for encouraging and helpful review comments.


Copyright information

© International Biometric Society 2019

Authors and Affiliations

  1. Department of Statistics, University of Missouri, Columbia, USA
