
Allocation of Moral Decision-Making in Human-Agent Teams: A Pattern Approach

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12187)

Abstract

As the field of AI progresses, artificially intelligent agents will increasingly have to deal with morally sensitive situations. Research efforts are underway to regulate, design and build Artificial Moral Agents (AMAs) capable of making moral decisions. This research is highly multidisciplinary, with each discipline bringing its own jargon and vision, and it remains unclear whether a fully autonomous AMA can be achieved. To specify currently available solutions and to structure an accessible discussion around them, we propose to apply Team Design Patterns (TDPs). The language of TDPs describes (visually, textually and formally) a dynamic allocation of tasks for moral decision-making in a human-agent team context. To help define such TDPs, we propose a task decomposition of moral decision-making and the AMA capabilities it requires. Four example TDPs illustrate the versatility of the approach, and two problem scenarios (surgical robots and drone surveillance) illustrate these patterns in practice. Finally, we discuss in detail the advantages and disadvantages of a TDP approach to moral decision-making.
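
The paper specifies TDPs visually, textually and formally rather than in code. Purely as a rough illustration of the core idea, the sketch below shows one way a TDP's dynamic task allocation between human and agent could be encoded; every name in it (Actor, MoralTask, TeamDesignPattern, allocate, and the example pattern) is a hypothetical assumption for this sketch, not notation from the paper.

    # Hypothetical sketch of a Team Design Pattern (TDP) as a data
    # structure with dynamic task allocation. All names are illustrative
    # assumptions, not the authors' formalism.
    from dataclasses import dataclass, field
    from enum import Enum


    class Actor(Enum):
        HUMAN = "human"
        AGENT = "agent"


    @dataclass
    class MoralTask:
        """One step of a moral decision-making task decomposition."""
        name: str
        required_capabilities: set = field(default_factory=set)


    @dataclass
    class TeamDesignPattern:
        """Maps each task to an actor, with a fallback actor for handover."""
        name: str
        allocation: dict
        fallback: Actor = Actor.HUMAN  # who takes over when the agent cannot act


    def allocate(pattern: TeamDesignPattern, task: MoralTask,
                 agent_capabilities: set) -> Actor:
        """Dynamically (re)allocate a task: the agent keeps a task only if
        it has every capability the task requires; otherwise control passes
        to the fallback actor (typically the human)."""
        assigned = pattern.allocation.get(task.name, pattern.fallback)
        if assigned is Actor.AGENT and not task.required_capabilities <= agent_capabilities:
            return pattern.fallback
        return assigned


    if __name__ == "__main__":
        # Toy drone-surveillance example: the agent detects situations,
        # while the morally sensitive judgement is reserved for the human.
        tdp = TeamDesignPattern(
            name="supervised_moral_judgement",
            allocation={
                "detect_situation": Actor.AGENT,
                "judge_proportionality": Actor.HUMAN,
            },
        )
        task = MoralTask("detect_situation", {"object_recognition"})
        print(allocate(tdp, task, agent_capabilities={"object_recognition"}))

The design choice this sketch tries to convey is that the allocation is not fixed at design time: a capability check at run time can hand a task back to the human, which is one way the "meaningful human control" requirement discussed in the paper can be made concrete.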

Keywords

Team Design Patterns · Dynamic task allocation · Moral decision-making · Human-Agent Teaming · Machine Ethics · Human Factors · Meaningful human control


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. TNO, Perceptual and Cognitive Systems, Soesterberg, The Netherlands
  2. Interactive Intelligence Group/AiTech, Delft University of Technology, Delft, The Netherlands
