Design Requirements for a Moral Machine for Autonomous Weapons

  • Ilse Verdiesen
  • Virginia Dignum
  • Iyad Rahwan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)


Abstract

Autonomous Weapon Systems (AWS) are said to be the third revolution in warfare. These systems raise many questions and concerns that demand in-depth research on ethical and moral responsibility. Ethical decision-making has been studied in related fields such as Autonomous Vehicles and human-operated drones, but it has not yet been fully extended to the deployment of AWS, and research on moral judgement is lacking. In this paper, we propose design requirements for a Moral Machine (similar to the Moral Machine experiment) for Autonomous Weapons, to conduct a large-scale study of people’s moral judgement regarding the deployment of this type of weapon. We ran an online survey to get a first impression of the importance of six variables that will be implemented in a proof-of-concept of a Moral Machine for Autonomous Weapons, and we describe a scenario containing these six variables. The platform will enable large-scale randomized controlled experiments and generate knowledge about people’s feelings concerning this type of weapon. The next steps of our study include development and testing of the design before the prototype is scaled up to a Massive Online Experiment.


Keywords: Autonomous weapons · Ethical decision-making · Moral acceptability · Moral machine
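The abstract describes a platform for large-scale randomized controlled experiments over six scenario variables. As a minimal sketch of how such a factorial scenario space could be enumerated and randomized per respondent: the variable names and levels below are hypothetical placeholders, since the abstract does not enumerate the paper's six variables.

```python
import itertools
import random

# Hypothetical binary scenario variables for a Moral Machine-style AWS study.
# These names and levels are illustrative only; the paper's actual six
# variables are not listed in the abstract.
VARIABLES = {
    "level_of_autonomy": ["human_in_the_loop", "fully_autonomous"],
    "target_type": ["combatant", "ambiguous"],
    "risk_to_civilians": ["low", "high"],
    "mission_urgency": ["routine", "time_critical"],
    "oversight": ["remote_supervisor", "none"],
    "environment": ["open_field", "urban"],
}

def all_scenarios(variables):
    """Enumerate the full factorial design (here 2^6 = 64 scenario cells)."""
    keys = list(variables)
    return [dict(zip(keys, combo))
            for combo in itertools.product(*variables.values())]

def sample_scenario(variables, rng=random):
    """Randomly assign one level per variable, as in a randomized
    controlled experiment where each respondent sees random cells."""
    return {name: rng.choice(levels) for name, levels in variables.items()}

scenarios = all_scenarios(VARIABLES)
```

With six binary variables the full design has only 64 cells, so each respondent can be shown a random subset while still covering the whole space across a large sample.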


References

  1. IEEE Global Initiative: The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (2017)
  2. Campaign to Stop Killer Robots: Campaign to Stop Killer Robots (2015). Accessed 15 July 2017
  3. ICRC: Ethics and autonomous weapon systems: an ethical basis for human control? International Committee of the Red Cross (ICRC), Geneva, p. 22 (2018)
  4. Dignum, V.: Responsible Autonomy (2017). arXiv preprint: arXiv:1706.02513
  5. Dignum, V.: Introduction to AI (2016)
  6. Roff, H.M.: Weapons autonomy is rocketing (2016)
  7. US Air Force: Unconscious US F-16 pilot saved by Auto-pilot. Catch News, YouTube (2016)
  8. Etzioni, A., Etzioni, O.: Pros and cons of autonomous weapons systems. Military Review, May–June 2017
  9. Arkin, R.C.: The case for ethical autonomy in unmanned systems. J. Mil. Ethics 9(4), 332–341 (2010)
  10. General Assembly United Nations: Joint report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association and the Special Rapporteur on extrajudicial, summary or arbitrary executions on the proper management of assemblies, p. 23 (2016)
  11. Kaag, J., Kaufman, W.: Military frameworks: technological know-how and the legitimization of warfare. Camb. Rev. Int. Aff. 22(4), 585–606 (2009)
  12. Rosenberg, M., Markoff, J.: The Pentagon’s ‘Terminator Conundrum’: robots that could kill on their own. The New York Times (2016)
  13. Malle, B.F.: Integrating robot ethics and machine morality: the study and design of moral competence in robots. Ethics Inf. Technol. 18, 243–256 (2015)
  14. Cointe, N., Bonnet, G., Boissier, O.: Ethical judgment of agents’ behaviors in multi-agent systems. In: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems (2016)
  15. Bonnefon, J.-F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352(6293), 1573–1576 (2016)
  16. Coeckelbergh, M.: Drones, information technology, and distance: mapping the moral epistemology of remote fighting. Ethics Inf. Technol. 15(2), 87–98 (2013)
  17. Strawser, B.J.: Moral predators: the duty to employ uninhabited aerial vehicles. In: Valavanis, K.P., Vachtsevanos, G.J. (eds.) Handbook of Unmanned Aerial Vehicles, pp. 2943–2964. Springer, Dordrecht (2010)
  18. Scalable Cooperation Group: Moral Machine (2016). Accessed 27 Sept 2016
  19. Castelfranchi, C., Falcone, R.: From automaticity to autonomy: the frontier of artificial agents. In: Hexmoor, H., Castelfranchi, C., Falcone, R. (eds.) Agent Autonomy, pp. 103–136. Springer, Boston (2010)
  20. Article 36: Killing by machine: key issues for understanding meaningful human control (2015). Accessed 9 May 2019
  21. Open Roboethics initiative: The Ethics and Governance of Lethal Autonomous Weapons Systems: An International Public Opinion Poll, 5 Nov 2015. Accessed 15 July 2017
  22. Jackson, C.: Three in Ten Americans Support Using Autonomous Weapons (2017). Accessed 17 June 2018
  23. AIV and CAVV: Autonomous weapon systems: the need for meaningful human control. Advisory Council on International Affairs (AIV) and Advisory Committee on Issues of Public International Law (CAVV), pp. 1–64 (2016)
  24. Kuptel, A., Williams, A.: Policy guidance: autonomy in defence systems (2014)
  25. UNIDIR: Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, pp. 1–14 (2014)
  26. Awad, E.: Moral Machine: Perception of Moral Judgment Made by Machines. Massachusetts Institute of Technology, Boston (2017)
  27. Reips, U.-D.: Standards for internet-based experimenting. Exp. Psychol. 49(4), 243 (2002)
  28. Oehlert, G.W.: A First Course in Design and Analysis of Experiments. W.H. Freeman, New York (2010)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Delft University of Technology, Delft, The Netherlands
  2. MIT Media Lab, Cambridge, USA
