Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings

  • Conference paper
  • Modeling Decisions for Artificial Intelligence (MDAI 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11676)

Abstract

In this paper we focus on a still neglected consequence of the adoption of AI in diagnostic settings: the increase in cases in which a human decision maker is called to settle a divergence between a human doctor and the AI, i.e., second opinion requests. We designed a user study, involving more than 70 medical doctors, to understand whether the second opinions are affected by the first ones and whether the decision makers tend to trust the human interpretation more than the machine’s. We observed significant effects on decision accuracy and a sort of “prejudice against the machine”, which varies with the respondent’s profile. In light of these results, we outline some implications for sounder second opinion settings.

Notes

  1. In the cardiological domain, this kind of setting is also called ECG overreading.

  2. Other contributions call this bias “truth bias” [18], defined as the tendency to believe that “others are telling the truth more often than they actually are” and hence to confirm what they say, mainly to spare oneself feelings of discomfort.

  3. ECG Wave-Maven, Copyright (c) 2001–2016 Beth Israel Deaconess Medical Center. All rights reserved. https://ecg.bidmc.harvard.edu/maven/mavenmain.asp (last accessed 17 May 2018).

  4. Accuracy was judged by two cardiologists according to whether the diagnosis was either the same as the gold standard (i.e., the official diagnosis associated with the ECG) or close enough to it to have informed appropriate treatment or management of the case at hand.

  5. We present the data in a 2 × 2 contingency table, together with the p-value of Fisher’s exact test; the first figure in each pair is the number of clinicians who discarded the given advice (see the sketch after these notes).

  6. To be precise, we collected 249 interpretations in the first part of the study and 62 in the second part.
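
The contingency-table analysis mentioned in note 5 can be reproduced in a few lines of Python. The sketch below runs Fisher’s exact test with scipy.stats.fisher_exact on placeholder counts; the figures are illustrative assumptions, not the counts reported in the paper.

    # Minimal sketch of the analysis in note 5: Fisher's exact test on a
    # 2 x 2 contingency table. The counts below are placeholders, NOT the
    # study's data.
    from scipy.stats import fisher_exact

    # Rows: source of the first advice (human doctor vs. AI).
    # Columns: [advice discarded, advice kept]; the first figure in each
    # row is the number of clinicians who discarded the given advice.
    table = [
        [12, 28],  # hypothetical counts for advice attributed to a human
        [22, 18],  # hypothetical counts for advice attributed to the AI
    ]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")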

References

  1. Attia, Z.I., et al.: Application of artificial intelligence to the standard 12 lead ECG to identify people with left ventricular dysfunction. J. Am. Coll. Cardiol. 71(11), A306 (2018)

  2. Bond, R.R., et al.: Automation bias in medicine: the influence of automated diagnoses on interpreter accuracy and uncertainty when reading electrocardiograms. J. Electrocardiol. 51(6), S6–S11 (2018)

  3. Brailer, D.J., Kroch, E., Pauly, M.V.: The impact of computer-assisted test interpretation on physician decision making: the case of electrocardiograms. Med. Decis. Making 17(1), 80–86 (1997)

  4. Brauner, P., et al.: A game-based approach to raise quality awareness in ramp-up processes. Qual. Manag. J. 23(1), 55–69 (2016)

  5. Dohare, A.K., Kumar, V., Kumar, R.: Detection of myocardial infarction in 12 lead ECG using support vector machine. Appl. Soft Comput. 64, 138–147 (2018)

  6. Duijm, L.E., Groenewoud, J.H., Hendriks, J.H., de Koning, H.J.: Independent double reading of screening mammograms in the Netherlands: effect of arbitration following reader disagreements. Radiology 231(2), 564–570 (2004)

  7. Goddard, K., Roudsari, A., Wyatt, J.C.: Automation bias: empirical results assessing influencing factors. Int. J. Med. Inform. 83(5), 368–375 (2014)

  8. Kaba, A., Wishart, I., Fraser, K., Coderre, S., McLaughlin, K.: Are we at risk of groupthink in our approach to teamwork interventions in health care? Med. Educ. 50(4), 400–408 (2016)

  9. Klein, G.: Naturalistic decision making. Hum. Factors 50(3), 456–460 (2008)

  10. Kligfield, P., Gettes, L.S., Bailey, J.J., Childers, R., Deal, B.J., Hancock, E.W., Van Herpen, G., Kors, J.A., Macfarlane, P., Mirvis, D.M., et al.: Recommendations for the standardization and interpretation of the electrocardiogram: part I. J. Am. Coll. Cardiol. 49(10), 1109–1127 (2007)

  11. Mannion, R., Thompson, C.: Systematic biases in group decision-making: implications for patient safety. Int. J. Qual. Health Care 26(6), 606–612 (2014)

  12. Parasuraman, R., Manzey, D.H.: Complacency and bias in human use of automation: an attentional integration. Hum. Factors 52(3), 381–410 (2010)

  13. Rajpurkar, P., Hannun, A.Y., Haghpanahi, M., Bourn, C., Ng, A.Y.: Cardiologist-level arrhythmia detection with convolutional neural networks. Nat. Med. 25(1), 65–69 (2019). https://doi.org/10.1038/s41591-018-0268-3

  14. Salerno, S.M., Alguire, P.C., Waxman, H.S.: Competency in interpretation of 12-lead electrocardiograms: a summary and appraisal of published evidence. Ann. Intern. Med. 138(9), 751–760 (2003)

  15. Schläpfer, J., Wellens, H.J.: Computer-interpreted electrocardiograms: benefits and limitations. J. Am. Coll. Cardiol. 70(9), 1183–1192 (2017)

  16. Sibbald, M., Davies, E.G., Dorian, P., Yu, E.H.: Electrocardiographic interpretation skills of cardiology residents: are they competent? Can. J. Cardiol. 30(12), 1721–1724 (2014)

  17. Smith, S.W., et al.: A deep neural network learning algorithm outperforms a conventional algorithm for emergency department electrocardiogram interpretation. J. Electrocardiol. 52, 88–95 (2019)

  18. Street, C.N., Masip, J.: The source of the truth bias: Heuristic processing? Scand. J. Psychol. 56(3), 254–263 (2015)

  19. Strodthoff, N., Strodthoff, C.: Detecting and interpreting myocardial infarction using fully convolutional neural networks. Physiol. Measur. 40(1), 015001 (2019)

  20. Tsai, T.L., Fridsma, D.B., Gatti, G.: Computer decision support as a source of interpretation error: the case of electrocardiograms. J. Am. Med. Inform. Assoc. 10(5), 478–483 (2003)

Acknowledgments

The author wishes to thank Dr. Raffaele Rasoini and Dr. Camilla Alderighi, cardiologists, for their help in the design and dissemination of the survey, and for the valuable suggestions after reviewing a preliminary draft of the manuscript.

Author information

Correspondence to Federico Cabitza.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Cabitza, F. (2019). Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings. In: Torra, V., Narukawa, Y., Pasi, G., Viviani, M. (eds) Modeling Decisions for Artificial Intelligence. MDAI 2019. Lecture Notes in Computer Science (LNAI), vol 11676. Springer, Cham. https://doi.org/10.1007/978-3-030-26773-5_25

  • DOI: https://doi.org/10.1007/978-3-030-26773-5_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26772-8

  • Online ISBN: 978-3-030-26773-5

  • eBook Packages: Computer Science, Computer Science (R0)
