
Explainable AI for Healthcare: From Black Box to Interpretable Models

  • Amina Adadi
  • Mohammed Berrada
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1076)

Abstract

As artificial intelligence penetrates deeper into work and personal life, it raises questions about trust and transparency. These questions carry even greater weight in healthcare, where decisions are literally a matter of life and death. In this paper, we reflect on recent investigations into the interpretability and explainability of artificial intelligence methods and discuss their impact on medicine and healthcare.

Keywords

Explainable AI · Machine learning · Healthcare

References

  1. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
  2. Liu, X., Chen, K., Wu, T., Weidman, D., Lure, F., Li, J.: Use of multimodality imaging and artificial intelligence for diagnosis and prognosis of early stages of Alzheimer’s disease. Transl. Res. 194, 56–67 (2018)
  3. Mak, K.K., Pichika, M.R.: Artificial intelligence in drug development: present status and future prospects. Drug Discov. Today (2018)
  4. Kumar, R.: Epidemic outbreak prediction using artificial intelligence. Int. J. Inf. Technol. Comput. Sci. 10, 49–64 (2018)
  5. Baldwin, J.L., Singh, H., Sittig, D.F., Giardina, T.D.: Patient portals and health apps: pitfalls, promises, and what one might learn from the other. Healthcare 5, 81–85 (2017)
  6. Hsieh, F.S., Lin, J.B.: Scheduling patients in hospitals based on multi-agent systems. In: Modern Advances in Applied Intelligence, pp. 32–42 (2014)
  7. Swartout, W.R., Moore, J.D.: Explanation in Expert Systems: A Survey. University of Southern California (1988)
  8. Krening, S., Harrison, B., Feigh, K., Isbell, C., Riedl, M., Thomaz, A.: Learning from explanations using sentiment and advice in RL. IEEE Trans. Cogn. Dev. Syst. (2016)
  9. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
  10. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems (NIPS), pp. 3111–3119 (2013)
  11. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
  12. Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R.J., Wasserman, L.: Distribution-free predictive inference for regression. J. Am. Stat. Assoc., pp. 1–18 (2018)
  13. Ahmad, M.A., Eckert, C., Teredesai, A., Kumar, V.: Explainable AI in Healthcare. Available online at https://learning.acm.org/webinars/healthcareai (2018)
  14. Monteath, I., Sheh, R.: Assisted and incremental medical diagnosis using explainable artificial intelligence. In: Proceedings of the 2nd Workshop on Explainable Artificial Intelligence, pp. 104–108 (2018)
  15. Kocbek, S., Kocbek, P., Stozer, A., Zupanic, T., Groza, T., Stiglic, G.: Building interpretable models for polypharmacy prediction in older chronic patients based on drug prescription records. PeerJ Life Environ. Sci. (2018)
  16. Zheng, Q., Delingette, H., Ayache, N.: Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow. Available online at https://arxiv.org/pdf/1811.03433.pdf (2018)
  17. Hicks, S.A., Eskeland, S., Lux, M., de Lange, T., Randel, K.R., Jeppsson, M., Pogorelov, K., Halvorsen, P., Riegler, M.: Mimir: an automatic reporting and reasoning system for deep learning based analysis in the medical domain. In: Proceedings of the 9th ACM Multimedia Systems Conference (MMSys), pp. 369–374 (2018)
  18. Wu, J., Peck, D., Hsieh, S., Dialani, V., Lehman, C.D., Zhou, B., Syrgkanis, V., Mackey, L., Patterson, G.: Expert identification of visual primitives used by CNNs during mammogram classification. In: SPIE Medical Imaging 2018: Computer-Aided Diagnosis (2018)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Computer and Interdisciplinary Physics Laboratory (LIPI), ENS Fez, Sidi Mohammed Ben Abdellah University, Fez, Morocco
