The Need for Moral Competency in Autonomous Agent Architectures

  • Matthias Scheutz
Part of the Synthese Library book series (SYLI, volume 376)


Autonomous robots will have to be able to make decisions on their own to varying degrees. In this chapter, I make the case for moral capabilities that are deeply integrated into the control architectures of such autonomous agents, for I shall argue that any ordinary decision-making situation from daily life can turn into a morally charged decision-making situation.


Keywords: Robot autonomy · Morality · Machine ethics



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Computer Science, Tufts University, Medford, USA
