Bayesian Neural Learning via Langevin Dynamics for Chaotic Time Series Prediction

  • Conference paper
  • Neural Information Processing (ICONIP 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10638)

Abstract

Although neural networks have been very promising tools for chaotic time series prediction, they lack a methodology for uncertainty quantification. Bayesian inference using Markov Chain Monte Carlo (MCMC) algorithms has been popular for uncertainty quantification in both linear and non-linear models. Langevin dynamics refers to a class of MCMC algorithms that incorporate gradients with Gaussian noise in parameter updates; in the case of neural networks, the parameters being updated are the weights of the network. We apply Langevin dynamics to neural networks for chaotic time series prediction. The results show that the proposed method improves on the MCMC random-walk algorithm for the majority of the problems considered. In particular, it gave much better performance on the real-world problems that featured noise.
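
For illustration only, the sketch below (Python) shows the kind of parameter update the abstract describes: a gradient step on the log-posterior perturbed by Gaussian noise, used as a proposal inside a Metropolis-Hastings accept/reject step. The names log_post, grad_log_post and eta are hypothetical placeholders for a model's log-posterior, its gradient, and the step size; this is a minimal sketch of the general technique, not the authors' released code (see the repositories linked in the Notes below).

    import numpy as np

    def langevin_proposal(w, grad_log_post, eta, rng):
        """Propose new weights: a gradient step on the log-posterior
        plus Gaussian noise of matching scale (Langevin dynamics)."""
        noise = rng.normal(0.0, np.sqrt(eta), size=w.shape)
        return w + 0.5 * eta * grad_log_post(w) + noise

    def langevin_mh_step(w, log_post, grad_log_post, eta, rng):
        """One Metropolis-Hastings step with a Langevin proposal.
        For brevity, the asymmetric-proposal correction of full MALA
        is omitted; it would be added to log_alpha below."""
        w_prop = langevin_proposal(w, grad_log_post, eta, rng)
        log_alpha = log_post(w_prop) - log_post(w)
        if np.log(rng.uniform()) < log_alpha:
            return w_prop, True   # proposal accepted
        return w, False           # proposal rejected

Iterating such a step yields a chain of weight samples; predictions made with those samples provide the uncertainty estimates that the abstract refers to.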

Notes

  1. https://github.com/rohitash-chandra/LDMCMC_timeseries.

  2. https://github.com/rohitash-chandra/MCMC_fnn_timeseries.

Author information

Corresponding author

Correspondence to Rohitash Chandra.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Chandra, R., Azizi, L., Cripps, S. (2017). Bayesian Neural Learning via Langevin Dynamics for Chaotic Time Series Prediction. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10638. Springer, Cham. https://doi.org/10.1007/978-3-319-70139-4_57

  • DOI: https://doi.org/10.1007/978-3-319-70139-4_57

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70138-7

  • Online ISBN: 978-3-319-70139-4

  • eBook Packages: Computer Science, Computer Science (R0)
