Abstract
Recent advances in AI have prompted governments and nonprofits to publish a large number of AI ethics guidelines. While many of these documents propose concrete or seemingly applicable ideas, few of their proposals are philosophically sound. In particular, we observe that the line of questioning is often not examined critically and that underlying conceptual problems are not always dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be, turning first to the notions of moral agency and patiency. We find that explicating these notions would come at too high a cost, which is why we, second, articulate a different approach that avoids vague and ambiguous concepts as well as the problem of other minds. Third, we explore the impact of our philosophical and conceptual analysis on the regulatory landscape, make this link explicit, and finally propose a set of promising policy steps.
Notes
The first Level-3 autonomous car hit the roads in 2018 (cf. https://www.sae.org/publications), and several OEMs claim that Level-5 autonomy will become a reality within 5–20 years (cf. https://www.techrepublic.com/article/ford-plans-to-mass-produce-a-no-driver-required-autonomous-vehicle-by-2021/; https://electrek.co/2017/04/29/elon-musk-tesla-plan-level-5-full-autonomous-driving/).
A functionalist, by contrast, would remain agnostic and simply decide not to decide on the question of machine consciousness, rather than attempting to resolve the seemingly irreducible problem of “other minds”. This, however, is just another position one can take, and it thereby demonstrates how much room for intellectual dispute such metaphysical questions and problems leave.
Even though we would refuse to operate with metaphysically loaded concepts such as personhood (see Sect. 4.1), we appreciate the behavioristic approach embraced by MacDorman and Cowley.
References
Birch TH (1993) Moral considerability and universal consideration. Environ Ethics 15:313–332
Bostrom N (2016) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Channell DF (1991) The vital machine: a study of technology and organic life. Oxford University Press, Oxford
Coeckelbergh M (2018) What do we mean by a relational ethics? Growing a relational approach to the moral standing of plants, robots and other non-humans. In: Kallhoff A, Di Paola M, Schörgenhumer M (eds) Plant ethics. Routledge, London, pp 110–121
Dräger J, Müller-Eiselt R (2019) Wir und die Intelligenten Maschinen. Deutsche Verlags-Anstalt, München
Dunn A (2019) When hype is harmful: why what we think is possible matters. Medium. https://medium.com/swlh/whenhype-is-harmful-why-what-we-think-is-possible-matters-e7988db6f643
European Laboratory for Learning and Intelligent Systems (ELLIS) (2019) Letter. https://ellis.eu/letter. Accessed 03 Oct 2019
Financial Times (2019) EU publishes guidelines on ethical artificial intelligence. https://www.ft.com/content/32032c8a-5a0b-11e9-9dde-7aedca0a081a. Accessed 24 Jun 2019
Fischer S, Wenger A (2019) Ein neutraler Hub für KI-Forschung. In: Policy perspectives, vol 7/2. Center for Security Studies (CSS), ETH Zurich. https://css.ethz.ch/content/dam/ethz/special-interest/gess/cis/center-for-securities-studies/pdfs/PP7-2_2019-D.pdf. Accessed 03 Oct 2019
Floridi L (1999) Information ethics: on the philosophical foundation of computer ethics. Ethics Inf Technol 1:37–56
Floridi L (2010) Information ethics. In: Floridi L (ed) Cambridge handbook of information and computer ethics. Cambridge University Press, Cambridge, pp 77–100
Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14:349–379
Gal D (2019) Perspectives and approaches in AI ethics: East Asia. In: Dubber M, Pasquale F, Das S (eds) Oxford handbook of ethics of artificial intelligence. Oxford University Press, Oxford
Good IJ (1965) The mystery of go. The New Scientist, pp 172–174
Hajdin M (1994) The boundaries of moral discourse. Loyola University Press, Chicago
Hanson FA (2009) Beyond the skin bag: on the moral responsibility of extended agencies. Ethics Inf Technol 11:91–99
Hao K (2019) We analyzed 16,625 papers to figure out where AI is headed next. MIT technology review. https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/. Accessed 24 Jun 2019
Harrison P (1992) Descartes on animals. Philos Quart 167:219–227
Heidegger M (1977) The question concerning technology. In: Lovitt W (ed) The question concerning technology and other essays. Harper & Row, New York, pp 3–35
High-Level Expert Group on AI (AI HLEG) (2019) Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines. Accessed 03 Oct 2019
Hoffmann CH (2017) Structure-based explanatory modeling of risks. Towards understanding dynamic complexity in financial systems. Syst Res Behav Sci 34:728–745
Introna LD (2009) Ethics and the speaking of things. Theory Culture Society 26:398–419
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1:389–399
Kierkegaard S (1987) Either/or, part 2. Trans. Hong HV, Hong EH. Princeton University Press, Princeton
Kurzweil R (2005) The singularity is near. When humans transcend biology. Viking, New York
Levy D (2009) The ethical treatment of artificially conscious robots. Int J Social Robot 1:209–216
MacDorman KF, Cowley SJ (2006) Long-term relationships as a benchmark for robot personhood. In: Proceedings of the IEEE international workshop on robot human interaction communication, pp 378–383
Mulgan G (2019) AI ethics and the limits of code(s). NESTA. https://www.nesta.org.uk/blog/ai-ethics-and-limits-codes. Accessed 29 Sept 2019
Müller VC (2019) Policy documents & institutions—ethical, legal and socio-economic issues of robotics and artificial intelligence. http://www.pt-ai.org/TG-ELS/policy. Accessed 24 Jun 2019
Noothigattu R, Gaikwad S, Awad E, Dsouza S, Rahwan I, Ravikumar P, Procaccia AD (2017) A voting-based system for ethical decision making. arXiv preprint arXiv:1709.06692
Putnam H (1964) Robots: machines or artificially created life? J Philos 61:668–691
Regan T (1983) The case for animal rights. University of California Press, Berkeley
Savva M, Kadian A, Maksymets O, Zhao Y, Wijmans E, Jain B, Straub J, Liu J, Koltun V, Malik J, Parikh D (2019) Habitat: a platform for embodied AI research. arXiv preprint arXiv:1904.01201
Singer P (1975) Animal liberation: a new ethics for our treatment of animals. New York Review of Books, New York
Sparrow R (2004) The turing triage test. Ethics Inf Technol 6:203–213
Spindler C, Hoffmann CH (2019) Data logistics and AI in insurance risk management. International Data Spaces Association. https://www.internationaldataspaces.org/wp-content/uploads/2019/08/IDSA-paper-Data-Logistics-and-AI-in-Insurance-Risk-Management.pdf. Accessed 03 Oct 2019
Städeli M (2019) In Zürich entsteht eine breite Allianz für Superdrohnen und Roboter. Zurich: NZZ am Sonntag. https://nzzas.nzz.ch/wirtschaft/kuenstliche-intelligenz-zuerich-soll-schweiz-voran-bringen-ld.1497241. Accessed 03 Oct 2019
Stahl BC (2006) Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics Inf Technol 8:205–213
Stix C (2019) A survey of the European Union’s artificial intelligence ecosystem. https://docs.wixstatic.com/ugd/ff3afe_1513c6bf2d81400eac182642105d4d6f.pdf. Accessed 26 Nov 2019
SwissCognitive (2019) CognitiveValley: ready for take off! www.swisscognitive.ch/2019/08/31/cognitivevalley_is_being-born/. Accessed 03 Oct 2019
Techcrunch (2017) Saudi Arabia bestows citizenship on a robot named Sophia. https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/. Accessed 24 Jun 2019
The Federal Council (2018) The pros and cons of artificial intelligence. https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-71639.html. Accessed 24 Jun 2019
The Guardian (2011) John McCarthy obituary. https://www.theguardian.com/technology/2011/oct/25/john-mccarthy. Accessed 24 Jun 2019
Towards Data Science (2018) Deep learning for self-driving cars using Nvidia’s research to build a CNN for autonomous driving in Pytorch. https://towardsdatascience.com/deep-learning-for-self-driving-cars-7f198ef4cfa2. Accessed 24 Jun 2019
Vinge V (1993) The coming technological singularity. In: Grüter T (ed) Klüger als wir? Auf dem Weg zur Hyperintelligenz. Springer, New York
Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford
Walsh T (2017) It’s alive. Edition Körber, Hamburg
Weber K (2019) Autonomie und Moralität als Zuschreibung. In: Rath M, Krotz F, Karmasin M (eds) Maschinenethik. Ethik in mediatisierten Welten. Springer, Wiesbaden
WIRED (2018) The AI cargo cult. The Myth of a Superhuman AI. https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/. Accessed 24 Jun 2019
Žižek S (2006) The parallax view. MIT Press, Cambridge
Ethics declarations
Conflict of interest
None of the authors have any competing interests in the manuscript.
Cite this article
Hoffmann, C.H., Hahn, B. Decentered ethics in the machine era and guidance for AI regulation. AI & Soc 35, 635–644 (2020). https://doi.org/10.1007/s00146-019-00920-z