Artificial intelligence assistants and risk: framing a connectivity risk narrative

Abstract

Our social relations are changing: we are no longer just talking to each other, but also to artificial intelligence (AI) assistants. We claim that AI assistants present a new form of digital connectivity risk, and that a key aspect of this risk phenomenon is users' awareness (or lack of awareness) of AI assistant functionality. AI assistants present a significant societal risk, amplified by the global scale of the products and their increasing use in healthcare, education, business, and the service industry. However, there appears to be little research on the need not only to understand the changing risks of AI assistant technologies but also to frame and communicate those risks to users. How can users assess the risks without fully understanding the complexity of the technology? This is a challenging and unwelcome scenario. AI assistant technologies form a complex ecosystem and demand explicit and precise communication to contextualise the new digital risk phenomenon. The paper therefore argues for the need to examine how best to explain and support risk awareness among both domestic and commercial users of AI assistants. To this end, we propose the method of creating a risk narrative focused on temporal points of changing societal connectivity and contextualised in terms of risk. We claim that the connectivity risk narrative provides an effective medium for capturing, communicating, and contextualising the risks of AI assistants, one that can support explainability as a risk mitigation mechanism.

[Figs. 1–4 not reproduced here]

Notes

  1.

    By ad hoc here we mean something created or designed as a solution for a specific context or problem.

  2.

    See www.vi-das.eu/, funded under the H2020 MG3.6 programme.

Author information

Corresponding author

Correspondence to Martin Cunneen.

About this article

Cite this article

Cunneen, M., Mullins, M. & Murphy, F. Artificial intelligence assistants and risk: framing a connectivity risk narrative. AI & Soc 35, 625–634 (2020). https://doi.org/10.1007/s00146-019-00916-9

Keywords

  • Artificial intelligence assistants
  • Risk
  • Connectivity
  • Narratology
  • Risk communication
  • Risk perception
  • Explainability
  • Informed consent
  • Data commodification
  • Data monetisation