HistoMapr: An Explainable AI (xAI) Platform for Computational Pathology Solutions

  • Akif Burak Tosun
  • Filippo Pullara
  • Michael J. Becich
  • D. Lansing Taylor
  • S. Chakra Chennubhotla
  • Jeffrey L. Fine (corresponding author)
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12090)

Abstract

Pathologists are adopting whole slide images (WSIs) for diagnostic purposes. As they do so, pathologists should have all the information needed to make the best diagnoses rapidly, while supervising computational pathology tools in real time. Computational pathology has great potential to augment pathologists’ accuracy and efficiency, but concern exists regarding trust in ‘black-box’ AI solutions. Explainable AI (xAI) can reveal the underlying reasons for its results, promoting safety, reliability, and accountability for critical tasks such as pathology diagnosis. We present the development of HistoMapr, our proprietary xAI software platform built on a hierarchy of computational and traditional image analysis algorithms, which helps pathologists improve their efficiency and accuracy when viewing WSIs. HistoMapr and xAI represent a powerful and transparent alternative to ‘black-box’ AI. HistoMapr previews WSIs and then presents the key diagnostic areas first, in an interactive, explainable fashion; pathologists can access the xAI features via a “Why?” button in the interface. Two critical early applications are also presented: 1) intelligent triage, in which xAI estimates the difficulty of new cases so they can be routed to subspecialist or generalist pathologists; and 2) retrospective quality assurance, in which potential discrepancies between finalized results and xAI reviews are detected. Finally, a prototype is presented for atypical ductal hyperplasia, a diagnostic challenge in breast pathology, in which the xAI descriptions were based on the results of the computational image-analysis pipeline.
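
The “Why?” button and the triage workflow described above lend themselves to a brief illustration. The Python sketch below is purely hypothetical: the chapter does not publish implementation details, and RegionFinding, why(), triage_case, and the difficulty threshold are illustrative names and values invented here, not part of HistoMapr’s actual API. The point it demonstrates is the xAI pattern itself: each flagged region carries human-readable evidence alongside its score, and case routing is derived from those auditable findings.

```python
# Hypothetical sketch only -- RegionFinding, why(), and triage_case are
# illustrative names, not HistoMapr's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegionFinding:
    """One diagnostically relevant region of interest on a WSI."""
    label: str                                          # e.g. "ADH"
    confidence: float                                   # in [0, 1]
    evidence: List[str] = field(default_factory=list)   # human-readable cues

    def why(self) -> str:
        """What a 'Why?' button might surface: the label together with the
        image-derived evidence behind it, not just an opaque score."""
        reasons = "; ".join(self.evidence) or "no evidence recorded"
        return f"{self.label} ({self.confidence:.0%}): {reasons}"

def triage_case(findings: List[RegionFinding],
                difficulty_threshold: float = 0.75) -> str:
    """Route a case by estimated difficulty: if any key region is
    low-confidence, send the case to a subspecialist."""
    if any(f.confidence < difficulty_threshold for f in findings):
        return "subspecialist"
    return "generalist"

if __name__ == "__main__":
    roi = RegionFinding(
        label="atypical ductal hyperplasia",
        confidence=0.62,
        evidence=["rigid cribriform architecture", "monotonous nuclei"],
    )
    print(roi.why())                 # explanation a pathologist can audit
    print("route to:", triage_case([roi]))
```

Retrospective quality assurance follows the same pattern: rerunning such findings against finalized sign-out results and flagging disagreements for human review.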

Keywords

Computational pathology · Artificial intelligence (AI) · Explainable AI (xAI) · Breast pathology · Digital pathology · Computer-assisted diagnosis

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Akif Burak Tosun (1)
  • Filippo Pullara (1)
  • Michael J. Becich (1, 2)
  • D. Lansing Taylor (1, 3, 4)
  • S. Chakra Chennubhotla (1, 4)
  • Jeffrey L. Fine (1, 5, 6), corresponding author

  1. SpIntellx, Inc., Pittsburgh, USA
  2. Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, USA
  3. Drug Discovery Institute, University of Pittsburgh, Pittsburgh, USA
  4. Department of Computational and Systems Biology, University of Pittsburgh School of Medicine, Pittsburgh, USA
  5. Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, USA
  6. UPMC Magee-Womens Hospital, Pittsburgh, USA