Uncertainty Estimation via Stochastic Batch Normalization

  • Andrei Atanov
  • Arsenii Ashukha
  • Dmitry Molchanov
  • Kirill Neklyudov
  • Dmitry Vetrov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)


In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it. We introduce a probabilistic model and show that Batch Normalization maximizes a lower bound on its marginal log-likelihood. Guided by this model, we then design an algorithm that behaves consistently at training and test time; however, exact inference under it is computationally inefficient. To reduce memory and computational cost, we propose Stochastic Batch Normalization – an efficient approximation of the proper inference procedure. This method provides a scalable uncertainty estimation technique. We demonstrate the performance of Stochastic Batch Normalization on popular architectures (including deep convolutional architectures: VGG-like networks and ResNets) on the MNIST and CIFAR-10 datasets.
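The core idea in the abstract – replacing Batch Normalization's fixed test-time running averages with *sampled* batch statistics, so that repeated forward passes yield a predictive distribution – can be sketched in a few lines. The sketch below is illustrative only: the class and parameter names are assumptions, not the authors' implementation, and for simplicity it samples only the batch mean (keeping the variance at its running average), whereas the paper treats both statistics stochastically.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticBatchNorm:
    """Toy 1-D batch normalization with stochastic test-time statistics.

    During training it tracks not only the running mean of per-batch
    statistics but also how much the batch means fluctuate; at test time
    it samples a plausible batch mean from that fitted distribution.
    Hypothetical sketch, not the paper's reference implementation.
    """

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.gamma = np.ones(num_features)   # learnable scale (fixed here)
        self.beta = np.zeros(num_features)   # learnable shift (fixed here)
        self.eps = eps
        self.momentum = momentum
        self.mean_mu = np.zeros(num_features)   # running E[mu_B]
        self.mean_var = np.zeros(num_features)  # running Var[mu_B]
        self.var_mu = np.ones(num_features)     # running E[sigma_B^2]

    def forward_train(self, x):
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        m = self.momentum
        # Exponential moving averages of the batch statistics themselves.
        self.mean_var = (1 - m) * self.mean_var + m * (mu - self.mean_mu) ** 2
        self.mean_mu = (1 - m) * self.mean_mu + m * mu
        self.var_mu = (1 - m) * self.var_mu + m * var
        return self.gamma * (x - mu) / np.sqrt(var + self.eps) + self.beta

    def forward_test(self, x, stochastic=True):
        if stochastic:
            # Sample a batch mean instead of using the running average.
            mu = rng.normal(self.mean_mu, np.sqrt(self.mean_var + 1e-12))
        else:
            mu = self.mean_mu
        return self.gamma * (x - mu) / np.sqrt(self.var_mu + self.eps) + self.beta

# Usage: several stochastic passes give a spread of outputs whose
# variance can serve as a (crude) uncertainty signal.
bn = StochasticBatchNorm(3)
for _ in range(100):
    bn.forward_train(rng.normal(size=(32, 3)))
x = rng.normal(size=(4, 3))
samples = np.stack([bn.forward_test(x) for _ in range(50)])
print(samples.mean(axis=0).shape)
```

In a full network, this per-layer sampling is repeated over multiple forward passes and the resulting predictions are averaged, much like Monte Carlo dropout, which keeps training identical to standard Batch Normalization while making test-time behavior consistent with the probabilistic model.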


Keywords: Uncertainty estimation · Deep Learning · Batch Normalization



This research was supported in part by Samsung Research, Samsung Electronics.


References

  1. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. arXiv preprint arXiv:1506.02142 (2015)
  2. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
  3. Hoffman, M.D., Blei, D.M., Wang, C., Paisley, J.: Stochastic variational inference. J. Mach. Learn. Res. 14, 1303–1347 (2013)
  4. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
  5. Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems 28, pp. 2575–2583. Curran Associates, Inc. (2015)
  6. Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems 30, pp. 6405–6416. Curran Associates, Inc. (2017)
  7. Louizos, C., Welling, M.: Multiplicative normalizing flows for variational Bayesian neural networks. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, pp. 2218–2227 (2017)
  8. MacKay, D.J.C.: A practical Bayesian framework for backpropagation networks. Neural Comput. 4(3), 448–472 (1992)
  9. Molchanov, D., Ashukha, A., Vetrov, D.: Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369 (2017)
  10. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
  11. Welling, M., Teh, Y.W.: Bayesian learning via stochastic gradient Langevin dynamics. In: Getoor, L., Scheffer, T. (eds.) ICML, pp. 681–688. Omnipress (2011)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Andrei Atanov (1)
  • Arsenii Ashukha (2)
  • Dmitry Molchanov (1, 2)
  • Kirill Neklyudov (1, 2)
  • Dmitry Vetrov (1, 2)
  1. National Research University Higher School of Economics, Samsung-HSE Laboratory, Moscow, Russia
  2. Samsung AI Center in Moscow, Moscow, Russia
