
Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors

  • Research Article
  • Published:
Philosophy & Technology

Abstract

Deep learning AI systems have demonstrated a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI systems may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, drawing on Aristotelian ethics and contemporary philosophical research. We encode these conditions and generate a flowchart that we call the Moral Responsibility Test. This test can be used as a tool both to evaluate whether an entity is a morally responsible agent and to inform human moral decision-making regarding the influencing variables of the context of action. We apply the test to the case of Artificial Moral Advisors (AMAs) and conclude that AMAs cannot qualify as morally responsible agents. We further discuss the implications for the use of AMAs as a form of moral enhancement and show that using AMAs to offload human responsibility is inadequate. We argue instead that AMAs could morally enhance users if they are interpreted as enablers of moral knowledge of the contextual variables surrounding human moral decision-making, with the implication that such a use might actually enlarge human moral responsibility.


Data Availability

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

Code Availability

The code included in the article is based on the code2flow syntax available here: https://code2flow.com/.
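The flowchart itself is not reproduced in this preview. As an illustration only, the sketch below renders the kind of sequential decision logic described in the abstract and notes in Python; the four condition labels (causal initiation, absence of coercion, contextual knowledge, deliberation) are a reconstruction from the abstract and notes 2 to 6, not the authors' exact formulation, and all names in the code are hypothetical.

```python
# Illustrative sketch only: a sequential yes/no test in the spirit of the
# Moral Responsibility Test flowchart. Condition labels are reconstructed
# from the abstract and notes; they are not the authors' exact wording.

from dataclasses import dataclass


@dataclass
class Entity:
    """Answers to four yes/no questions about an entity in a given context of action."""
    initiates_causal_action: bool          # condition (1), context-independent
    free_from_coercion: bool               # condition (2), context-dependent
    knows_context_and_consequences: bool   # condition (3), not acting in/by ignorance
    deliberates_over_alternatives: bool    # condition (4), deliberate decision


def moral_responsibility_test(entity: Entity) -> bool:
    """Return True only if every condition is met; failing any single
    condition means moral responsibility cannot be ascribed."""
    return all((
        entity.initiates_causal_action,
        entity.free_from_coercion,
        entity.knows_context_and_consequences,
        entity.deliberates_over_alternatives,
    ))


if __name__ == "__main__":
    # A hypothetical AMA whose answers are chosen for illustration only;
    # the article's own conclusion is simply that AMAs fail the test.
    ama = Entity(
        initiates_causal_action=True,
        free_from_coercion=True,
        knows_context_and_consequences=False,
        deliberates_over_alternatives=False,
    )
    print(moral_responsibility_test(ama))  # False: not a morally responsible agent
```

The fail-any-condition structure mirrors the flowchart's sequential yes/no branches: an entity is ascribed moral responsibility only if it clears all four conditions.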

Notes

  1. Throughout the article, we follow Giubilini and Savulescu (2018) in using the acronym AMAs to refer to Artificial Moral Advisors. We use the acronym AAMAs to refer to the broader class of Autonomous Artificial Moral Agents. Note, however, that the research literature also uses AMAs as an acronym for Artificial Moral Agents, which is roughly equivalent to our use of AAMAs.

  2. Other scholars (Broadie, 1991) endorse a view on which only the voluntariness requirement is necessary for moral responsibility. On that view, however, voluntariness is taken to include deliberation (Bostock, 2000). A possible explanation for the differences in interpretation is that Aristotle speaks extensively about the voluntary-involuntary pair, while introducing deliberate decision only later in the NE (Glover, 1970).

  3. Actions performed “because of ignorance” may be done either “in ignorance” or “by ignorance.” The former is culpable ignorance (1110b, 25), e.g., when agents act while drunk, which is the result of their own negligence (vice); the latter is excusable ignorance (1111a, 20), e.g., when agents cannot reasonably foresee the consequences of their actions because they lack contextual knowledge.

  4. Such actions are considered “mixed actions” by Aristotle: in one sense they are voluntary, because the agent performs the action themselves and the principle of action is in the agent; in another sense they are involuntary, because the agent acts under coercion, in a context that they cannot control, and the purpose of the action is externally determined (Constantinescu, 2013; Mureșan, 2007). Such mixed actions are, in general, voluntary, but, in particular, not voluntary (1110a, 15). There is a mixed will of the agent (Bostock, 2000), in that they both want to perform the action (as the best alternative in the given circumstances) and do not want to perform it (as it is not what they would have chosen in normal circumstances).

  5. See Constantinescu (2013) for an initial restatement of the four conditions.

  6. Note, however, that condition (1) imposes a more general, context-independent requirement (the entity is able to initiate a causal action), whereas condition (2) imposes a more relative, context-dependent demand (the entity is not coerced in the specific context of action).

  7. The app can be accessed here: https://code2flow.com/.

  8. We thank one anonymous reviewer for suggesting this question.

  9. We thank one anonymous reviewer for raising this point.

  10. We thank one anonymous reviewer for highlighting these questions.

  11. However, this goes against the position defended by List (2021), who argues that AI systems requiring little to no input from us while in use should be considered “new loci of agency,” as they would exhibit a high degree of autonomy. Evolutionary computing, he adds, could even take humans out of the picture of AI moral responsibility completely. Our discussion of the four conditions of the Moral Responsibility Test gives us strong reasons to remain sceptical of List’s position and to argue that even such potential Autonomous Artificial Moral Agents would fail the test.

  12. See, for instance, how apps dedicated to co-parenting (e.g., OurFamilyWizard, coParenter, TalkingParents) currently work (Coldwell, 2021): using sentiment analysis, the apps flag phrases detected as “emotionally charged” in written conversations between separated parents, giving the writer extra time to reflect on whether they still want to send the message, in view of the risk that the other parent might interpret the phrase as aggressive or humiliating (see the illustrative sketch after these notes). Such co-parenting apps are already recommended by lawyers in the USA as standard practice for separated parents, because of the “chilling effect” they have on hostile communication between them. AMAs could also prove especially useful as part of the ethical infrastructures of companies, enabling managers to better address a wide variety of moral dilemmas in the workplace (Uszkai et al., 2021).
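To make the flag-and-pause mechanism described in note 12 concrete, the following is a minimal Python sketch, assuming a crude keyword score as a stand-in for the sentiment-analysis models such apps actually use; the word list, threshold, and function names are hypothetical and are not drawn from any of the cited apps.

```python
# Schematic sketch of the flag-and-pause mechanism described in note 12.
# The keyword list, scoring rule, and names are hypothetical; real co-parenting
# apps rely on trained sentiment-analysis models rather than a word list.

CHARGED_TERMS = {"always", "never", "your fault", "useless", "liar"}  # hypothetical


def emotional_charge(message: str) -> float:
    """Crude stand-in for a sentiment model: fraction of charged terms present."""
    text = message.lower()
    return sum(1 for term in CHARGED_TERMS if term in text) / len(CHARGED_TERMS)


def review_before_sending(message: str, threshold: float = 0.2) -> str:
    """Flag emotionally charged drafts and prompt the writer to pause,
    rather than blocking the message outright."""
    if emotional_charge(message) >= threshold:
        return ("Flagged: this draft may read as aggressive or humiliating. "
                "Take a moment before sending, or edit the message.")
    return "Sent."


if __name__ == "__main__":
    print(review_before_sending("You never pick them up on time. It's your fault."))
    print(review_before_sending("Can we swap the pickup to Thursday this week?"))
```

The point of this design is that the app only delays and reframes the decision; whether to send remains with the human writer, which is what allows such tools to inform rather than offload moral decision-making.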

References

  • Aristotle. (2018). Nicomachean ethics (2nd ed., trans. and ed. R. Crisp). Cambridge University Press.

  • Bernáth, L. (2021). Can autonomous agents without phenomenal consciousness be morally responsible? Philosophy & Technology. https://doi.org/10.1007/s13347-021-00462-7

  • Bostock, D. (2000). Aristotle’s ethics. Oxford University Press.

  • Broadie, S. (1991). Ethics with Aristotle. Oxford University Press.

  • Browne, T. K., & Clarke, S. (2020). Bioconservatism, bioenhancement and backfiring. Journal of Moral Education, 49, 241–256.

  • Burr, C., Taddeo, M., & Floridi, L. (2020). The ethics of digital well-being: A thematic review. Science and Engineering Ethics, 26, 2313–2343.

  • Cave, S., Nyrup, R., Vold, K., & Weller, A. (2018). Motivations and risks of machine ethics. Proceedings of the IEEE, 107, 562–574.

  • Clarke, R. (1992). Free will and the conditions of moral responsibility. Philosophical Studies, 66, 53–72.

  • Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI & Society, 24, 181–189.

  • Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26, 2051–2068.

  • Coldwell, W. (2021). What happens when an AI knows how you feel? Technology used to only deliver our messages. Now it wants to write them for us by understanding our emotions. In Wired. Accessed on 10 Jan 2022 at https://www.wired.com/story/artificial-emotional-intelligence/

  • Constantinescu, M. (2013). Attributions of moral responsibility: from Aristotle to corporations. Annals of the University of Bucharest - Philosophy Series, 62, 19–37.

  • Constantinescu, M., & Kaptein, M. (2015). Mutually enhancing responsibility: A theoretical exploration of the interaction mechanisms between individual and corporate moral responsibility. Journal of Business Ethics, 129, 325–339.

  • Constantinescu, M., Voinea, C., Uszkai, R., & Vică, C. (2021). Understanding responsibility in responsible AI. Dianoetic virtues and the hard problem of context. Ethics and Information Technology. https://doi.org/10.1007/s10676-021-09616-9

  • Corlett, J. A. (2009). Responsibility and punishment (3rd ed.). Springer.

  • Danaher, J. (2018). Towards an ethics of AI assistants: An initial framework. Philosophy & Technology, 31, 629–653.

  • DeGeorge, R. T. (1999). Business ethics. Prentice Hall.

  • Dennett, D. C. (1997). Consciousness in human and robot minds. Oxford University Press.

  • Eshleman, A. (2019). Moral responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Accessed on 30 Jan 2021 at https://plato.stanford.edu/archives/fall2019/entries/moral-responsibility/

  • Firth, R. (1952). Ethical absolutism and the ideal observer. Philosophy and Phenomenological Research, 12, 317–345.

  • Fischer, J. M. (2006). My way: Essays on moral responsibility. Oxford University Press.

  • Fischer, J. M., & Ravizza, M. (1993). Perspectives on moral responsibility. Cornell University Press.

  • Floridi, L. (2014). The 4th revolution. How the infosphere is reshaping human reality. Oxford University Press.

  • Frankfurt, H. (1969). Alternate possibilities and moral responsibility. Journal of Philosophy, 66, 829–839.

  • Gaita, R. (1989). The personal in ethics. In D. Z. Phillips & P. Winch (Eds.), Wittgenstein: Attention to particulars (pp. 124–150). MacMillan.

  • Gamez, P., Shank, D. B., Arnold, C., & North, M. (2020). Artificial virtue: The machine question and perceptions of moral character in artificial moral agents. AI & Society, 35, 795–809.

  • Giubilini, A., & Savulescu, J. (2018). The artificial moral advisor. The “ideal observer” meets artificial intelligence. Philosophy & Technology, 31, 169–188.

  • Glover, J. (1970). Responsibility. Routledge & Kegan Paul.

  • Green, B. P. (2018). Ethical reflections on artificial intelligence. Scientia et Fides, 6, 9–31.

  • Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon/Random House.

  • Hakli, R., & Mäkelä, P. (2019). Moral responsibility of robots and hybrid agents. The Monist, 102, 259–275.

  • Herzog, C. (2021). Three risks that caution against a premature implementation of artificial moral agents for practical and economical use. Science and Engineering Ethics, 27. https://doi.org/10.1007/s11948-021-00283-z

  • Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11, 19–29.

  • Howard, D., & Muntean, I. (2017). Artificial moral cognition: moral functionalism and autonomous moral agency. In T. M. Powers (Ed.), Philosophy and Computing (pp. 121–159). Springer.

  • Hughes, G. J. (2001). Aristotle. Routledge.

  • Irwin, T. (1999). Introduction. In Aristotle, Nicomachean Ethics (trans. and ed. T. Irwin), second edition (pp. xiii-xxviii). Hackett Publishing Company, Inc.

  • Jauernig, J., Uhl, M., & Walkowitz, G. (2022). People prefer moral discretion to algorithms: Algorithm aversion beyond intransparency. Philosophy & Technology, 35. https://doi.org/10.1007/s13347-021-00495-y

  • Johnson, M. (2014). Morality for humans. Ethical understanding from the perspective of cognitive science. The University of Chicago Press.

  • Knobe, J., & Doris, J. (2010). Responsibility. In J. Doris et al. (Eds.), The handbook of moral psychology. Oxford University Press.

  • Köbis, N., Bonnefon, J.-F., & Rahwan, I. (2021). Bad machines corrupt good morals. In TSE Working Papers, 21–1212.

  • Lara, F., & Deckers, J. (2020). Artificial intelligence as a socratic assistant for moral enhancement. Neuroethics, 13, 279–287.

  • Levy, N. (2005). The good, the bad, and the blameworthy. Journal of Ethics and Social Philosophy, 2, 2–16.

  • List, C. (2021). Group agency and artificial intelligence. Philosophy & Technology, 34, 1213–1242.

  • Loh, F., & Loh, J. (2017). Autonomy and responsibility in hybrid systems. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence (pp. 35–50). Oxford University Press.

  • Mabaso, B. A. (2020). Artificial moral agents within an ethos of AI4SG. Philosophy & Technology. https://doi.org/10.1007/s13347-020-00400-z

  • Mathiesen, K. (2006). We’re all in this together: Responsibility of collective agents and their members. Midwest Studies in Philosophy, 30, 240–255.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

  • Meyer, S. S. (2011). Aristotle on moral responsibility: Character and cause (2nd ed.). Oxford University Press.

  • Mureșan, V. (2007). Comentariu la Etica Nicomahică [Commentary on the Nicomachean Ethics] (2nd ed., revised). Humanitas.

  • Neri, E., Coppola, F., Miele, V., et al. (2020). Artificial intelligence: Who is responsible for the diagnosis? La Radiologia Medica, 125, 517–521.

  • Parthemore, J., & Whitby, B. (2014). Moral agency, moral responsibility, and artifacts: What existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon Us. International Journal of Machine Consciousness, 6, 141–161.

  • Popa, E. (2021). Human goals are constitutive of agency in artificial intelligence (AI). Philosophy & Technology, 34, 1731–1750.

  • Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34, 1057–1084.

  • Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence: Moral AI? In J. Romportl, E. Zackova, & J. Kelemen (Eds.), Beyond artificial intelligence: The Disappearing human-Machine Divide (pp. 79–95). Springer.

  • Sison, A. J. G., & Redín, D. M. (2021). A Neo-Aristotelian perspective on the need for artificial moral agents (AMAs). AI & Society. https://doi.org/10.1007/s00146-021-01283-0

  • Smilansky, S. (2000). Free will and illusion. Oxford University Press.

  • Smythe, T. W. (1999). Moral responsibility. The Journal of Value Inquiry, 33, 493–506.

  • Sparrow, R. (2021). Why machines cannot be moral. AI & Society. https://doi.org/10.1007/s00146-020-01132-6

  • Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1–25.

  • Strawson, G. (1994). The impossibility of moral responsibility. Philosophical Studies, 75, 5–24.

  • Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28, 531–573.

  • Taddeo, M., McNeish, D., Blanchard, A., & Edgar, E. (2021). Ethical principles for artificial intelligence in national defence. Philosophy & Technology, 34, 1707–1729.

  • Tigard, D. W. (2021a). There is no techno-responsibility gap. Philosophy & Technology, 34, 589–607.

  • Tigard, D. W. (2021b). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30, 435–447.

  • Tonkens, R. (2012). Out of character: On the creation of virtuous machines. Ethics and Information Technology, 14, 137–149.

  • Uszkai, R., Voinea, C., & Gibea, T. (2021). Responsibility attribution problems in companies: Could an artificial moral advisor solve this? In I. Popa, C. Dobrin, & N. Ciocoiu (Eds.), Proceedings of the 15th International Management Conference (pp. 951–960). ASE University Press.

  • Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy & Technology, 28, 107–124.

  • Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

  • Voinea, C., Vică, C., Mihailov, E., & Săvulescu, J. (2020). The Internet as cognitive enhancement. Science and Engineering Ethics, 26, 2345–2362. https://doi.org/10.1007/s11948-020-00210-8

  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.

  • Warmke, B. (2011). Moral responsibility invariantism. Philosophia, 39, 179–200.

  • Widerker, D., & McKenna, M. (Eds.). (2003). Moral responsibility and alternative possibilities. Ashgate Publishing Limited.

  • Williams, G. (2012). Responsibility. In Encyclopedia of Applied Ethics (pp. 821–828). Academic Press.

  • Woodward, P. A. (2007). Frankfurt-type cases and the necessary conditions for moral responsibility. The Journal of Value Inquiry, 41, 325–332.

  • Zimmerman, M. J. (1985). Intervening agents and moral responsibility. The Philosophical Quarterly, 35, 347–358.

  • Zimmerman, M. J. (1997). Moral responsibility and ignorance. Ethics, 107, 410–426.

Funding

This work was supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P1-1.1-TE-2019-1765, within PNCDI III, awarded for the research project Collective moral responsibility: from organizations to artificial systems. Re-assessing the Aristotelian framework, implemented within CCEA and ICUB, University of Bucharest (2021–2022).

Author information

Contributions

All authors contributed to the study conception and design. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mihaela Constantinescu.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on AI and Responsibility

About this article

Cite this article

Constantinescu, M., Vică, C., Uszkai, R. et al. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Philos. Technol. 35, 35 (2022). https://doi.org/10.1007/s13347-022-00529-z
