Uncovering User Affect Towards AI in Cancer Diagnostics

  • Stephanie Tom Tong
  • Pradeep Sopory
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11582)

Abstract

Despite the rapid application of artificial intelligence (AI) to healthcare, we know comparatively little about how users perceive and evaluate these tools. Following "dual route" theories of information processing from decision science, we propose that because users lack the expertise to evaluate AI through deliberate cognitive processing, they instead rely on their feelings, that is, heuristic-route processing, to make judgments about AI systems and recommenders. Affect therefore becomes an important component influencing people's willingness to adopt AI, and this may be especially true in a context like personal health, where affect is both explicit and heightened. Using the context of remote dermatological skin cancer screening, we examined people's affective perceptions of an autonomous AI algorithm capable of classifying skin lesions as either cancerous or benign. In a three-stage study (n = 250), we found that people do hold complex affective responses toward AI diagnostics, even without directly interacting with the AI. These findings are relevant to designers of AI systems, who might consider how users' a priori affect may make them more or less resistant to adopting the technology. Additionally, the methodological approach validated in this study may be used by other scholars who wish to measure user affect in future research.

Keywords

Affect · Healthcare · Audience · Artificial intelligence

Acknowledgements

This work was supported by the National Science Foundation (Award No. NSF 1520723). The authors thank Rachelle Prince for her help with data collection.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Wayne State University, Detroit, USA
