
Uncertainty Measurements for the Reliable Classification of Mammograms

  • Mickael Tardy
  • Bruno Scheffer
  • Diana Mateus
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

We propose an efficient approach to estimate the uncertainty of deep neural network classifiers based on the trade-off between two measurements. The first is based on subjective logic and the evidence of the softmax predictions; the second is based on the Mahalanobis distance between new and training samples in the embedding space. These measurements require neither modifying, nor retraining, nor multiple testing of the models. We evaluate our methods on different classification tasks, including breast cancer risk, breast density, and patch-wise tissue type, considering both an in-house database of 1600 mammograms and the public INbreast dataset. Throughout the experiments, we show the ability of our method to reject the most evident outliers and to offer AUC gains of up to 10% when keeping the 60% most certain samples.
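For illustration only, the following is a minimal NumPy sketch of the two kinds of measures described above; it is not the authors' implementation, and it assumes a particular formulation (evidence taken as ReLU of the classifier logits, as in evidential deep learning, and a single covariance matrix shared across classes for the Mahalanobis score).

import numpy as np

def evidential_uncertainty(logits):
    # Subjective-logic style uncertainty from non-negative evidence.
    # Assumption: evidence = ReLU(logits); Dirichlet strength = evidence + 1;
    # the uncertainty mass (vacuity) is K / total strength.
    evidence = np.maximum(logits, 0.0)      # non-negative evidence per class
    alpha = evidence + 1.0                  # Dirichlet parameters
    k = logits.shape[-1]                    # number of classes
    return k / alpha.sum(axis=-1)           # in (0, 1]; 1 = fully uncertain

def fit_mahalanobis(train_embeddings, train_labels):
    # Per-class means and a shared covariance estimated on training embeddings.
    classes = np.unique(train_labels)
    means = {c: train_embeddings[train_labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [train_embeddings[train_labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(centered.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_uncertainty(embedding, means, cov_inv):
    # Distance of a new sample to the closest class-conditional Gaussian:
    # large distances flag samples far from the training distribution.
    dists = [np.sqrt((embedding - mu) @ cov_inv @ (embedding - mu))
             for mu in means.values()]
    return min(dists)

In use, test samples would be ranked by either score and only the most certain fraction (e.g., 60%) kept for automatic classification, with the remainder deferred or rejected as outliers.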

Keywords

Uncertainty · Classification · Deep learning · Mammography · Breast cancer

Supplementary material

Supplementary material 1: 490281_1_En_55_MOESM1_ESM.pdf (4.3 MB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Ecole Centrale de Nantes, LS2N, UMR CNRS 6004, Nantes, France
  2. Hera-MI, SAS, Nantes, France
  3. Institut de cancérologie de l’Ouest, Nantes, France
