Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

  • Conference paper
  • In: Artificial Intelligence in HCI (HCII 2020)

Abstract

Computer Vision, and hence the Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the complexity of the underlying algorithms is one reason for their high performance, it also creates the ‘black box’ problem and consequently decreases trust in AI. In this regard, “Explainable Artificial Intelligence” (XAI) makes it possible to open that black box and to increase the transparency of AI. In this paper, we first discuss the theoretical impact of explainability on trust in AI and then showcase how XAI can be used in a health-related setting. More specifically, we show how XAI can be applied to understand why deep learning-based Computer Vision did or did not detect a disease (malaria) in image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two deep learning models frequently used for Computer Vision: the Convolutional Neural Network and the Multi-Layer Perceptron. Our empirical results show that i) the AI sometimes relied on questionable or irrelevant image features to detect malaria (even when the prediction was correct), and ii) there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and in AI systems in general, especially by increasing their understandability and predictability.
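
To make the approach tangible, the sketch below shows how a model-agnostic explainer such as LIME (Local Interpretable Model-agnostic Explanations) can be applied to two image classifiers of the kind the paper compares. This page does not disclose the authors' actual pipeline, so the `lime` Python package, the Keras model files, the 64x64 input size, and the preprocessing below are illustrative assumptions, not the paper's implementation.

    # Minimal sketch (not the authors' code): explaining malaria predictions
    # of two Keras models with LIME. File names, the 64x64 RGB input size,
    # and the 1/255 preprocessing are illustrative assumptions.
    import numpy as np
    from PIL import Image
    from lime import lime_image
    from skimage.segmentation import mark_boundaries
    from tensorflow import keras

    # Assumed pre-trained models; the MLP is assumed to begin with a
    # Flatten layer so that both accept (n, 64, 64, 3) image batches.
    cnn = keras.models.load_model("malaria_cnn.h5")  # hypothetical file
    mlp = keras.models.load_model("malaria_mlp.h5")  # hypothetical file

    def make_predict_fn(model):
        # Wrap a model as the batch prediction function LIME expects:
        # an array of images in, an (n, n_classes) probability array out.
        def predict(images):
            return model.predict(images.astype("float32") / 255.0)
        return predict

    image = np.array(Image.open("cell.png").resize((64, 64)))
    explainer = lime_image.LimeImageExplainer()

    # Because LIME is model-agnostic, the identical call explains both
    # models, so the superpixels each relied on can be compared directly.
    for name, model in [("CNN", cnn), ("MLP", mlp)]:
        explanation = explainer.explain_instance(
            image, make_predict_fn(model), top_labels=2, num_samples=1000)
        img, mask = explanation.get_image_and_mask(
            explanation.top_labels[0], positive_only=True,
            num_features=5, hide_rest=False)
        overlay = mark_boundaries(img / 255.0, mask)  # evidence overlay
        print(name, "explains label", explanation.top_labels[0])

Because the explainer queries each model only through its prediction function, the same call works for the CNN and the MLP, which is what makes a side-by-side comparison of their explanation overlays possible; evidence regions lying outside the stained parasite would indicate the kind of questionable feature use the abstract reports.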


Author information


Corresponding author

Correspondence to Christian Meske.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Meske, C., Bunde, E. (2020). Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support. In: Degen, H., Reinerman-Jones, L. (eds.) Artificial Intelligence in HCI. HCII 2020. Lecture Notes in Computer Science, vol. 12217. Springer, Cham. https://doi.org/10.1007/978-3-030-50334-5_4

  • DOI: https://doi.org/10.1007/978-3-030-50334-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50333-8

  • Online ISBN: 978-3-030-50334-5

  • eBook Packages: Computer Science; Computer Science (R0)
