Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism

Original Research/Scholarship, published in Science and Engineering Ethics

Abstract

Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory—‘ethical behaviourism’—which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven’t done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.


Notes

  1. The term ‘duty’ is used in this paper in a sense that is interchangeable with cognate terms such as ‘responsibility’ or ‘obligation’. It is used to denote a normative requirement or restriction placed on someone’s conduct, meaning that this conduct is not a matter of personal preference but is, rather, ethically mandated.

  2. To see what Sophia is like, go to https://www.hansonrobotics.com/sophia/.

  3. As Gunkel (2018b, p. 116) points out, the gesture was not completely unprecedented. The Japanese have recognized a non-legal kind of robot citizenship in the past for artificial creatures such as Paro (the robotic seal).

  4. Porter’s tweet is available at: https://twitter.com/SColesPorter/status/951042066561323008 (accessed 10 July 2018).

  5. The argument has some similarities with the ‘no-relevant-difference’ argument presented by Schwitzgebel and Garza (2015). But their argument is not grounded in the behaviourist view and is open to multiple possible understandings of a ‘relevant difference’. It also encourages the search for disconfirming evidence over confirming evidence.

  6. Kant was famously unwilling to accept that animals had moral status and drew a sharp distinction between practical reason (from which he derived his moral views) and theoretical reason (from which he derived his epistemological/metaphysical views). But others who have adopted a Kantian approach to philosophy have been more open to expanding the moral circle, e.g. Schopenhauer (on this see Puryear 2017). It is also worth noting, in passing, that the position adopted in the text has another affinity with Kantianism in that, just as Kant tended to reduce the metaphysical to the epistemological, ethical behaviourism tends to reduce the ethical to the epistemological. The author is indebted to an anonymous reviewer and Sven Nyholm for helping him to understand how Kant’s reasoning relates to the argument defended in the text.

  7. For a comprehensive discussion of the potential metaphysical grounds for moral status, see Jaworska and Tannenbaum (2018). For specific discussions of consciousness, preference-satisfaction and personhood as grounds of moral status see Sebo (2018), Singer (2009), Regan (1983) and Warren (2000). For a discussion of the moral foundations of rights see Sumner (1987) and, as applied to robot rights, Gunkel (2018b).

  8. An anonymous reviewer asks: what if it were a confirmed zombie? The ethical behaviourist would respond that this is an impossible hypothetical: one could not have confirmatory evidence of a kind that would suffice to undermine the behavioural evidence.

  9. One potential consequence of ethical behaviourism is that it should make us more skeptical of theories of moral status that purport to rely on highly uncertain or difficult-to-know properties. For example, some versions of sentientism hold that an entity can be sentient without displaying any outward signs of sentience. But if this is correct, radical uncertainty about moral status might result, since there would be no behavioural evidence that could confirm or disconfirm sentience. An ethical behaviourist would reject this approach to understanding sentience on the grounds that, for sentience to work as a ground for moral status, it would have to be knowable through some outward sign. For a longer discussion of sentience and moral uncertainty see Sebo (2018).

  10. A skeptic of evolution (e.g. a proponent of intelligent design) might dispute this, but if one believes in an intelligent designer then arguably one should perceive less of a morally significant difference between the efficient causes of humans and robots: both will be created by intelligent designers. That said, a theistic intelligent designer would have distinctive properties (omniscience, omnibenevolence) and those might make a difference to moral status. This issue is raised again in connection with the final cause objection, below.

  11. If anything, the opposite might be true. Theists might wish to ground moral status in non-observable metaphysical properties like the presence of a soul, but such properties run into the same problems as the strong form of sentientism (discussed in footnote 9). There are other possible religious approaches to moral status, but religious believers confront the same epistemic limits as non-believers when practically implementing those approaches, and this constrains how they can interpret and apply theories of moral status.

  12. Connected to this, an anonymous reviewer also points out that Kant (unlike many modern Kantians), in the Metaphysics of Morals, argued that although servitude was permissible, servants were still owed duties of moral respect.

  13. It might also be the case, as an anonymous reviewer points out, that our historical forebears conveniently overlooked or ignored the moral relevance of performative equivalency because doing so served other (e.g. economic) interests.

  14. It should also be noted that the historical mistreatment of groups of human beings would call into question other grounds of moral status such as ontology and efficient cause. So history does not speak against the performative equivalency standard any more than it speaks against those standards.

  15. Gunkel has subsequently (Gunkel 2018a, b) expanded the analysis to include robots.

  16. That said, the performative equivalency view is not necessarily in tension with the relational view because the fact that people want to give robots names, invite them into their homes, and make them human-like in other ways is probably what drives people to create robots that are performatively equivalent. The author is indebted to an anonymous reviewer for suggesting this point.

  17. Schwitzgebel and Garza (2015) have an extended discussion of AI-fragility (or the lack thereof) and what it might mean for moral status. They initially agree with the position adopted in this article but also suggest that certain aspects of machine ontology might warrant greater moral protection.

  18. Contrariwise, if replaceability undermines the need for certain kinds of moral protections, then perhaps we need a new set of moral norms for entities that are easily replaceable. But this new set of norms would then apply to humans just as much as it would apply to robots.

  19. Article 12 of the UN Convention on the Rights of Persons with Disabilities (UNCRPD) recognises the right of persons with disabilities to equal recognition before the law and includes, specifically, the right to recognition of legal capacity (roughly: the capacity to make decisions on their own behalf).

  20. For an example of how this objection might play out, consider the controversy provoked by Rebecca Tuvel’s article ‘In Defense of Transracialism’ (2017), in which she argued that transgenderism and transracialism could be viewed as analogous phenomena.

  21. Bryson may think there are other moral/ethical benefits that outweigh these costs in the case of humans—it is not clear from her writings. What is clear is that she thinks that human well-being trumps robotic well-being.

  22. It is also sometimes criticized for being redolent of eugenics. However, Savulescu would argue that there are significant moral differences between what he is proposing and the morally repugnant policies of historical eugenicists. He is not claiming that people ought to be sterilized or prevented from having children in the interests of racial or cognitive purity. He is focusing on the need to benefit potential offspring, not on harming or restricting potential parents.

  23. Matthijs Maas has suggested to the present author that the hacking risk is quite severe. As he sees it “if a robot capable of suffering gets hacked, this would allow the attacker to inflict massively scalable, unbounded suffering or indignity on the AI (e.g. by speeding up its clock-time, making it suffer subjective millennia of humiliation). The amount of suffering that could be inflicted on a robot is therefore much higher than that which could be inflicted on a human”. This might give very good reason not to create robots with moral status.

References

  • Arstein-Kerslake, A., & Flynn, E. (2017). The right to legal agency: Domination, disability and the protections of Article 12 of the Convention on the Rights of Persons with Disabilities. International Journal of Law in Context, 13(1), 22–38.

  • Bennett, M. R., Dennett, D., Hacker, P. M. S., & Searle, J. (2007). Neuroscience and philosophy: Brain, mind, and language. New York: Columbia University Press.

  • Bennett, M. R., & Hacker, P. M. S. (2003). Philosophical foundations of neuroscience. Oxford: Blackwell Publishing.

  • Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 63–74). Amsterdam: John Benjamins.

  • Bryson, J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20, 15–26. https://doi.org/10.1007/s10676-018-9448-6.

  • Bryson, J., Diamantis, M., & Grant, T. (2017). Of, for and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291.

  • Carter, A., & Palermos, O. (2016). Is having your computer compromised a personal assault? The ethics of extended cognition. Journal of the American Philosophical Association, 2(4), 542–560.

  • Chalmers, D. (1996). The conscious mind. Oxford: Oxford University Press.

  • Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. New York: Palgrave MacMillan.

  • Coeckelbergh, M., & Gunkel, D. (2014). Facing animals: A relational, other-oriented approach to moral standing. Journal of Agricultural and Environmental Ethics, 27(5), 715–733.

  • Coeckelbergh, M., & Gunkel, D. (2016). Response to “The problem of the question about animal ethics” by Michal Piekarski. Journal of Agricultural and Environmental Ethics, 29(4), 717–721.

  • Danaher, J. (2018). Why we should create artificial offspring: Meaning and the collective afterlife. Science and Engineering Ethics, 24(4), 1097–1118.

  • Graham, G. (2015). Behaviorism. In Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/behaviorism/. Accessed 10 July 2018.

  • Gruen, L. (2017). The moral status of animals. In Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/moral-animal/.

  • Guerrero, A. (2007). Don’t know, don’t kill: Moral ignorance, culpability, and caution. Philosophical Studies, 136, 59–97.

  • Gunkel, D. (2011). The machine question. Cambridge, MA: MIT Press.

  • Gunkel, D. (2018a). The other question: Can and should robots have rights? Ethics and Information Technology, 20, 87–99.

  • Gunkel, D. (2018b). Robot rights. Cambridge, MA: MIT Press.

  • Hare, S., & Vincent, N. (2016). Happiness, cerebroscopes and incorrigibility: The prospects for neuroeudaimonia. Neuroethics, 9(1), 69–84.

  • Hauskeller, M. (2017). Automatic sweethearts for transhumanists. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications. Cambridge, MA: MIT Press.

  • Holland, A. (2016). The case against the case for procreative beneficence. Bioethics, 30(7), 490–499.

  • Jaworska, A., & Tannenbaum, J. (2018). The grounds of moral status. In Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/grounds-moral-status/.

  • Kaczor, C. (2011). The ethics of abortion. London: Routledge.

  • Leong, B., & Selinger, E. (2019). Robot eyes wide shut: Understanding dishonest anthropomorphism. In Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (pp. 299–308).

  • Levinas, E. (1969). Totality and infinity: An essay on exteriority (A. Lingis, Trans.). Pittsburgh, PA: Duquesne University Press.

  • Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1(3), 209–216. https://doi.org/10.1007/s12369-009-0022-6.

  • Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford: OUP.

  • Moller, D. (2011). Abortion and moral risk. Philosophy, 86, 425–443.

  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111. https://doi.org/10.1007/s13347-013-0114-y.

  • Nyholm, S., & Frank, L. E. (2017). From sex robots to love robots: Is mutual love with a robot possible? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications. Cambridge, MA: MIT Press.

  • Overall, C. (2011). Why have children? The ethical debate. Cambridge, MA: MIT Press.

  • Pardo, M., & Patterson, D. (2013). Minds, brains and law. Oxford: Oxford University Press.

  • Puryear, S. (2017). Schopenhauer on the rights of animals. European Journal of Philosophy, 25(2), 250–269.

  • Raoult, A., & Yampolskiy, R. (2018). Reviewing tests for machine consciousness. Journal of Consciousness Studies, forthcoming. Available at https://www.researchgate.net/publication/325498266_Reviewing_Tests_for_Machine_Consciousness. Accessed 28 March 2019.

  • Regan, T. (1983). The case for animal rights. Berkeley: University of California Press.

  • Saunders, B. (2015). Why procreative preferences may be moral—And why it may not matter if they aren’t. Bioethics, 29(7), 499–506.

  • Saunders, B. (2016). First, do no harm: Generalized procreative non-maleficence. Bioethics, 31, 552–558.

  • Savulescu, J. (2001). Procreative beneficence: Why we should select the best children. Bioethics, 15, 413–426.

  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 89–119. https://doi.org/10.1111/misp.12032.

  • Sebo, J. (2018). The moral problem of other minds. The Harvard Review of Philosophy, 25, 51–70. https://doi.org/10.5840/harvardreview20185913.

  • Singer, P. (1981). The expanding circle. Princeton, NJ: Princeton University Press.

  • Singer, P. (2009). Speciesism and moral status. Metaphilosophy, 40(3–4), 567–581.

  • Sparrow, R. (2012). Can machines be people? Reflections on the turing triage test. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 301–316). Cambridge, MA: MIT Press.

  • Stone, Z. (2017). Everything you need to know about Sophia, The World’s First Robot Citizen. Forbes 7th November 2017, available at https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#4e76f02b46fa. Accessed 10 July 2018.

  • Sumner, L. (1987). The moral foundations of rights. Oxford: Oxford University Press.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Tuvel, R. (2017). In defense of transracialism. Hypatia, 32(2), 263–278.

  • Vincent, J. (2017). Pretending to give robots citizenship helps no one. The Verge 30th October 2017, available at https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia. Accessed 10 July 2018.

  • Warren, M. A. (2000). Moral status: Obligations to persons and other things. Oxford: Oxford University Press.

  • Weatherson, B. (2014). Running risks morally. Philosophical Studies, 167, 141–163.

Acknowledgements

The author would like to thank Matthijs Maas, Sven Nyholm and four anonymous reviewers for helpful comments on earlier drafts of this article. He would also like to thank audiences at NUI Galway and Manchester University for enduring earlier presentations of its core argument.

Author information

Correspondence to John Danaher.



About this article

Cite this article

Danaher, J. Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. Sci Eng Ethics 26, 2023–2049 (2020). https://doi.org/10.1007/s11948-019-00119-x
