Interpretable Deep Neural Network to Predict Estrogen Receptor Status from Haematoxylin-Eosin Images

  • Philipp Seegerer
  • Alexander Binder (corresponding author)
  • René Saitenmacher
  • Michael Bockmayr
  • Maximilian Alber
  • Philipp Jurmeister
  • Frederick Klauschen
  • Klaus-Robert Müller
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12090)

Abstract

The eligibility for hormone therapy to treat breast cancer largely depends on the tumor’s estrogen receptor (ER) status. Recent studies show that the ER status correlates with morphological features found in Haematoxylin-Eosin (HE) slides. Thus, for patients whose ER status a classifier predicts with high confidence, HE analysis might be sufficient and obviate the need for additional examinations such as immunohistochemical (IHC) staining. Several prior works are limited by the use of engineered features, by multi-stage models that rely on features not specific to HE images, or by a lack of explainability. To address these limitations, this work proposes an end-to-end neural network ensemble that shows state-of-the-art performance. We demonstrate that the approach also translates to the prediction of the cancer grade. Moreover, subsets can be selected from the test data for which the model detects a positive ER status with a precision of 94% while still covering 13% of the patients. To compensate for the reduced interpretability that comes with end-to-end training, this work applies Layer-wise Relevance Propagation (LRP) to determine the relevant parts of the images a posteriori, commonly visualized as a heatmap overlaid on the input image. We found that nuclear and stromal morphology and lymphocyte infiltration play an important role in the classification of the ER status. This demonstrates that interpretable machine learning can be a vital tool for validating and generating hypotheses about morphological biomarkers.
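
To make the selective-prediction claim concrete, the following is a minimal sketch (not the authors' code) of how a precision-at-coverage trade-off can be read off from a classifier's predicted probabilities: patients are ranked by predicted ER+ probability and only the most confident fraction is classified. The helper name, the synthetic data, and the 13% default coverage are illustrative assumptions taken from the abstract's numbers.

```python
import numpy as np

def precision_at_coverage(probs, labels, coverage=0.13):
    """Precision for the positive class among the `coverage` fraction of
    cases with the highest predicted positive-class probability.
    (Hypothetical helper; the 13% default merely mirrors the abstract.)"""
    order = np.argsort(-probs)                     # most confident ER+ first
    k = max(1, int(round(coverage * len(probs))))  # number of cases to classify
    selected = order[:k]
    return float((labels[selected] == 1).mean())

# Usage with synthetic scores standing in for real model outputs:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)              # 0 = ER-negative, 1 = ER-positive
probs = np.clip(0.5 * labels + rng.normal(0.25, 0.2, size=500), 0.0, 1.0)
print(precision_at_coverage(probs, labels))        # precision on the top 13%
```

Sweeping `coverage` over a grid yields the full precision-coverage curve, from which an operating point such as the reported 94% precision at 13% coverage can be selected on held-out data.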

Keywords

Digital pathology · Deep learning · Explainable AI
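
Since LRP underpins the interpretability claim, here is a minimal sketch of computing an LRP heatmap for a trained Keras classifier with the iNNvestigate library (reference 2). The toy network and the random input tile are placeholders, not the ensemble from the paper; only the library calls (`model_wo_softmax`, `create_analyzer`, `analyze`) follow the published iNNvestigate API.

```python
import numpy as np
import keras
import innvestigate
import innvestigate.utils as iutils

# Toy stand-in for the trained ER-status classifier (binary softmax output);
# the paper's actual model is an end-to-end ensemble, not this network.
model = keras.models.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2, activation="softmax"),
])

# LRP is defined on the pre-softmax scores, so the softmax is stripped first.
model_wo_sm = iutils.model_wo_softmax(model)
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)

x = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder HE tile
relevance = analyzer.analyze(x)                        # same shape as x
heatmap = relevance.sum(axis=-1)[0]                    # pool relevance over channels
heatmap /= np.abs(heatmap).max() + 1e-12               # normalize before overlaying
```

The normalized `heatmap` can then be rendered semi-transparently on top of the input tile, which is the heatmap-overlay visualization described in the abstract.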

References

  1. Alber, M.: Software and application patterns for explanation methods. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 399–433. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_22
  2. Alber, M., et al.: iNNvestigate neural networks!. J. Mach. Learn. Res. 20(93), 1–8 (2019)
  3. Arpino, G., Bardou, V.J., Clark, G.M., Elledge, R.M.: Infiltrating lobular carcinoma of the breast: tumor characteristics and clinical outcome. Breast Cancer Res. 6(3), R149 (2004). https://doi.org/10.1186/bcr767
  4. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015)
  5. Beck, A.H., et al.: Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci. Transl. Med. 3(108), 108ra113 (2011)
  6. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)
  7. Binder, A., et al.: Towards computational fluorescence microscopy: machine learning-based integrated prediction of morphological and molecular tumor profiles. arXiv preprint arXiv:1805.11178 (2018)
  8. Budczies, J., et al.: Classical pathology and mutational load of breast cancer - integration of two worlds. J. Pathol. Clin. Res. 1(4), 225–238 (2015)
  9. Cortes, C., Mohri, M.: AUC optimization vs. error rate minimization. In: Advances in Neural Information Processing Systems, pp. 313–320 (2004)
  10. Couture, H.D., et al.: Image analysis with deep learning to predict breast cancer grade, ER status, histologic subtype, and intrinsic subtype. NPJ Breast Cancer 4, 30 (2018)
  11. Dombrowski, A.K., Alber, M., Anders, C., Ackermann, M., Müller, K.R., Kessel, P.: Explanations can be manipulated and geometry is to blame. In: Advances in Neural Information Processing Systems, pp. 13567–13578 (2019)
  12. Elston, C.W., Ellis, I.O.: Pathological prognostic factors in breast cancer. I. The value of histological grade in breast cancer: experience from a large study with long-term follow-up. Histopathology 19(5), 403–410 (1991)
  13. Hägele, M., et al.: Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. Sci. Rep. 10(1), 1–12 (2020)
  14. Hammond, M.E.H., et al.: American Society of Clinical Oncology/College of American Pathologists guideline recommendations for immunohistochemical testing of estrogen and progesterone receptors in breast cancer (unabridged version). Arch. Pathol. Lab. Med. 134(7), e48–e72 (2010)
  15. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, Heidelberg (2009). https://doi.org/10.1007/978-0-387-84858-7
  16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  17. Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9(4), e1312 (2019)
  18. Hooker, S., Erhan, D., Kindermans, P.J., Kim, B.: Evaluating feature importance estimates. arXiv preprint arXiv:1806.10758 (2018)
  19. Hui, L.Y.W., Binder, A.: BatchNorm decomposition for deep neural network interpretation. In: Rojas, I., Joya, G., Catala, A. (eds.) IWANN 2019. LNCS, vol. 11507, pp. 280–291. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20518-8_24
  20. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
  21. Jurmeister, P., et al.: Machine learning analysis of DNA methylation profiles distinguishes primary lung squamous cell carcinomas from head and neck metastases. Sci. Transl. Med. 11(509), eaaw8513 (2019). https://doi.org/10.1126/scitranslmed.aaw8513
  22. Kindermans, P.J., et al.: Learning how to explain neural networks: PatternNet and PatternAttribution. arXiv preprint arXiv:1705.05598 (2017)
  23. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  24. Klauschen, F., et al.: Scoring of tumor-infiltrating lymphocytes: from visual estimation to machine learning. Semin. Cancer Biol. 52, 151–157 (2018)
  25. Korbar, B., et al.: Looking under the hood: deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 821–827 (2017)
  26. Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., Müller, K.R.: Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10(1), 1096 (2019)
  27. Millis, R.R.: Correlation of hormone receptors with pathological features in human breast cancer. Cancer 46(S12), 2869–2871 (1980). https://doi.org/10.1002/1097-0142(19801215)46:12+<2869::AID-CNCR2820461426>3.0.CO;2-Q
  28. Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 193–209. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_10
  29. Montavon, G., Samek, W., Müller, K.R.: Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 73, 1–15 (2018)
  30. Osborne, C.K., Yochmowitz, M.G., Knight III, W.A., McGuire, W.L.: The value of estrogen and progesterone receptors in the treatment of breast cancer. Cancer 46(S12), 2884–2888 (1980)
  31. Platt, J.: Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv. Large Margin Classif. 10(3), 61–74 (1999)
  32. Rawat, R.R., Ruderman, D., Macklin, P., Rimm, D.L., Agus, D.B.: Correlating nuclear morphometric patterns with estrogen receptor status in breast cancer pathologic specimens. NPJ Breast Cancer 4, 32 (2018)
  33. Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6
  34. Samek, W., Binder, A., Montavon, G., Lapuschkin, S., Müller, K.R.: Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 28(11), 2660–2673 (2016)
  35. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626 (2017)
  36. Shamai, G., Binenbaum, Y., Slossberg, R., Duek, I., Gil, Z., Kimmel, R.: Artificial intelligence algorithms to assess hormonal status from tissue microarrays in patients with breast cancer. JAMA Netw. Open 2(7), e197700 (2019)
  37. Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017)
  38. Vahadane, A., et al.: Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging 35(8), 1962–1971 (2016)
  39. Varma, S., Simon, R.: Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics 7(1), 91 (2006)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Philipp Seegerer (1)
  • Alexander Binder (2) (corresponding author)
  • René Saitenmacher (1)
  • Michael Bockmayr (3, 6)
  • Maximilian Alber (3)
  • Philipp Jurmeister (3)
  • Frederick Klauschen (3)
  • Klaus-Robert Müller (1, 4, 5)

  1. Machine Learning Group, Technical University Berlin, Berlin, Germany
  2. Singapore University of Technology and Design (SUTD), Singapore, Singapore
  3. Institute of Pathology, Charité University Hospital, Berlin, Germany
  4. Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
  5. Max-Planck-Institute for Informatics, Saarbrücken, Germany
  6. Department of Pediatric Hematology and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
