AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma

  • Bertram F. Malle
  • Stuti Thapa Magar
  • Matthias Scheutz
Part of the Intelligent Systems, Control and Automation: Science and Engineering book series (ISCA, volume 95)


Abstract

Even though morally competent artificial agents have yet to emerge in society, we need insights from empirical science into how people will respond to such agents and how these responses should inform agent design. Three survey studies presented participants with an artificial intelligence (AI) agent, an autonomous drone, or a human drone pilot facing a moral dilemma in a military context: to either launch a missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child but risk a terrorist attack. Seventy-two percent of respondents were comfortable making moral judgments about the AI in this scenario, and fifty-one percent were comfortable making moral judgments about the autonomous drone. These participants applied the same norms to the two artificial agents and the human drone pilot (more than 80% said that the agent should launch the missile). However, people ascribed different patterns of blame to humans and machines as a function of the agent’s decision about how to resolve the dilemma. These differences in blame seem to stem from different assumptions about the agents’ embeddedness in social structures and the moral justifications those structures afford. Specifically, people less readily see artificial agents as embedded in social structures and, as a result, explain and justify those agents’ actions differently. As artificial agents will (and already do) perform many actions with moral significance, we must heed such differences in justifications and blame and probe how they affect our interactions with those agents.


Keywords

Human-robot interaction · Moral dilemma · Social robots · Moral agency · Military command chain



This project was supported in part by grants from the Office of Naval Research, N00014-13-1-0269 and N00014-16-1-2278. The opinions expressed here are our own and do not necessarily reflect the views of ONR. We are grateful to Hanne Watkins for her insightful comments on an earlier draft of the manuscript.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Bertram F. Malle: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, USA
  • Stuti Thapa Magar: Department of Psychological Sciences, Purdue University, West Lafayette, USA
  • Matthias Scheutz: Department of Computer Science, Tufts University, Halligan Hall, Medford, USA
