
A 21st-Century Ethical Hierarchy for Robots and Persons: \(\mathscr {E \! H}\)

A World with Robots

Part of the book series: Intelligent Systems, Control and Automation: Science and Engineering (ISCA, volume 84)

Abstract

I introduce and propose the ethical hierarchy (\(\mathscr {E \! H}\)) into which robots and humans in general can be placed. This hierarchy is catalyzed by the question: Can robots be more moral than humans? The light shed by \(\mathscr {E \! H}\) reveals why an emphasis on legal obligation for robots, while not unwise at the moment, is inadequate, and why at least the vast majority of today’s state-of-the-art deontic logics are morally inexpressive, whether they are intended to formalize the ethical behavior of robots or of persons.

The work that gave rise to this short paper was enabled by generous and ongoing support from the U.S. Office of Naval Research; see ‘Acknowledgments.’ I owe a special debt to Dan Messier and Bertram Malle for pressing the “Can robots be more moral than humans?” question, which catalyzed my thought that that query can serve as a laic portal to consideration of the hierarchy presented synoptically herein. I’m deeply grateful as well to two anonymous referees. Finally, I thank Isabel Ferreira and João Sequeira for their leadership and sedulous work on the organizational and logistical side of the house.


Notes

  1.

    For instance, a full specification of the hierarchy requires systematic consideration of intrinsic value, as e.g. set out in Chisholm (1986) (since intrinsic value in a Leibnizian metaphysical sense is in \(\mathscr {E \! H}\) the penultimate ground of the classification of actions (the ultimate being God himself)). Note along this line that despite what I say below rather optimistically about \(\mathscr {L}_{\mathscr {E \! H}}\), the fact is that, according to Chisholm and Leibniz, unless a deontic logic grounds the systematization of action in the formalization of intrinsic goodness (and badness), that logic will be incomplete.

  2.

    For readers who may be interested, arguments in support of the claims in the present paragraph can e.g. be found in Bringsjord and Zenzen (2003), Bringsjord et al. (2006).

  3.

    Chisholm (1982, p. 99) points out that Höfler had the deontic square of opposition in 1885.
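
    For readers unfamiliar with it, the deontic square mirrors the classical square of opposition (a textbook rendering, supplied here for orientation; it is not drawn in Chisholm’s text). With \(\mathbf{O}\) for obligation and \(\mathbf{P}\) for permission:

    \[ \begin{array}{ccc} \mathbf{O}\phi & \text{contraries} & \mathbf{O}\lnot \phi \\ \downarrow & & \downarrow \\ \mathbf{P}\phi & \text{subcontraries} & \mathbf{P}\lnot \phi \end{array} \]

    The diagonals (\(\mathbf{O}\phi\) vs. \(\mathbf{P}\lnot \phi\); \(\mathbf{O}\lnot \phi\) vs. \(\mathbf{P}\phi\)) are contradictories, and the vertical arrows mark the subalternations \(\mathbf{O}\phi \rightarrow \mathbf{P}\phi\) and \(\mathbf{O}\lnot \phi \rightarrow \mathbf{P}\lnot \phi\).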

  4.

    One option is of course to supplant \(\exists \) with \(\exists ^{=1}\).
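
    In standard notation (a textbook definition, not spelled out in the chapter itself), the exactly-one quantifier unpacks into plain first-order logic as:

    \[ \exists^{=1} x\, \phi(x) \;\equiv\; \exists x \bigl( \phi(x) \wedge \forall y\, (\phi(y) \rightarrow y = x) \bigr). \]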

  5.

    Anyone who has stood atop Pointe du Hoc and pondered the self-sacrifice of the Rangers who battled the Nazis there will confront the stark reality that supererogation was required to vanquish Hitler. Leibniz would say that the pursuit of such victory makes no sense if there is no God and no afterlife (for reasons explained in Youpa 2013)—but this claim is one left aside here. I note only that Leibniz thought it was easy enough to prove God’s existence, so for him, an ethics that presupposed God’s existence was in no way scientifically problematic.

  6.

    I don’t have the space to consider the evil actions in question; Chisholm (1982) provides some examples. By the way, it seems to me very likely that robots capable of suberogatory actions will prove to be quite useful in espionage, but this topic cannot be discussed in the present short paper. Readers interested in this direction are advised to begin with Clark (2008).

  7.

    The trio isn’t only incomplete, but is just plain unacceptable. A robot medic or surgeon would routinely need to harm humans in order to save them. In saying this, I narrowly condemn Asimov’s trio only. Ethically sophisticated contemporary engineers have worked out avenues by which robots can trade short-term harm for longer-term good; see e.g. Winfield et al. (2014).

  8.

    These papers thus provide a rigorous deductive case for a position at odds with the Tallinn Manual on the International Law Applicable to Cyber Warfare (Schmitt 2013).

  9.

    In the human sphere, such a rescue would clearly fall into \(\mathscr {S}^{ up 2}\). For reasons pertaining to A- versus P-consciousness and the imaginary robot, I classify the rescue as an \(\mathscr {S}^{ up 1}\) action.

  10.

    Thoroughgoing Kantians might resist \(\mathscr {E \! H}\), and the robot ethics and robot-ethics engineering that seem naturally to flow from it (because Kantian/deontological ethical theories are obligation-myopic). This is an issue I’m prepared to address—but not in this short paper. Robot ethics as it relates to Kant should in my opinion begin with study of Ganascia (2007) and Powers (2006).

  11.

    Technically, what I’ve said here is incorrect, since some numerical quantifiers do work just fine with deduction. For example, from \(\exists ^{\ge k}x \phi \) we can deduce \(\exists x \phi \).
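
    The deduction goes through because \(\exists^{\ge k}\) abbreviates an ordinary first-order formula; for instance (a standard definition, supplied here for illustration):

    \[ \exists^{\ge 2} x\, \phi(x) \;\equiv\; \exists x \exists y \bigl( x \ne y \wedge \phi(x) \wedge \phi(y) \bigr), \]

    from which \(\exists x\, \phi(x)\) follows by ordinary first-order deduction.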

  12.

    Those familiar with the quantifier-based version of the Arithmetic Hierarchy will wonder whether \(\mathscr {E \! H}\) can likewise be built crisply via layered quantification. The answer, it seems to me, is Yes.
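
    To make the analogy concrete (this is the standard quantifier characterization, found e.g. in Davis et al. 1994, not the chapter’s own construction): a relation sits at level \(\Sigma_n\) or \(\Pi_n\) of the Arithmetic Hierarchy according to a prefix of \(n\) alternating quantifier blocks over a decidable matrix \(R\):

    \[ \Sigma_2\!: \exists \vec{x}\, \forall \vec{y}\, R(\vec{x}, \vec{y}, \vec{z}), \qquad \Pi_2\!: \forall \vec{x}\, \exists \vec{y}\, R(\vec{x}, \vec{y}, \vec{z}). \]

    The thought would be that the levels of \(\mathscr {E \! H}\) could likewise be indexed by depth of quantifier alternation.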

  13.

    There are in fact two deep lacunae in what has been presented: two sub-parts of the hierarchy that are flat-out missing, one toward the endpoint of moral perfection, and one toward the endpoint of the diabolical. Both lacunae pertain to intelligence: it seems at least prima facie untenable to leave the level of intelligence of ethical agents out of systematic investigation of a continuum of ethical “grade”.

  14.

    Within the robot-ethics project of which my logicist work is a part (see Acknowledgements), the empirical investigation of moral competence led by Malle can perhaps explore “norms” that cover not only what might naturally be classified within deontic logics as obligations, but also conventional attitudes toward both levels 1 and 2 of supererogation in \(\mathscr {E \! H}\). I wonder whether, for example, the everyday concept of blame, under exploration by Malle et al. (2012), extends to supererogation.

References

  • Arkin R (2009) Governing lethal behavior in autonomous robots. Chapman and Hall/CRC, New York

  • Arkoudas K, Bringsjord S, Bello P (2005) Toward ethical robots via mechanized deontic logic. In: Machine ethics: papers from the AAAI fall symposium; FS–05–06. American Association for Artificial Intelligence, Menlo Park, CA, pp 17–23. http://www.aaai.org/Library/Symposia/Fall/fs05-06.php

  • Bello P (2005) Toward a logical framework for cognitive effects-based operations: some empirical and computational results. PhD thesis, Rensselaer Polytechnic Institute (RPI), Troy, NY

  • Bello P, Bringsjord S (2013) On how to build a moral machine. Topoi 32(2):251–266, http://kryten.mm.rpi.edu/Topoi.MachineEthics.finaldraft.pdf

  • Block N (1995) On a confusion about a function of consciousness. Behav Brain Sci 18:227–247

  • Bringsjord S (1992) What robots can and can’t be. Kluwer, Dordrecht

  • Bringsjord S (1999) The zombie attack on the computational conception of mind. Philos Phenomenol Res 59(1):41–69

  • Bringsjord S (2007) Offer: one billion dollars for a conscious robot. If you’re honest, you must decline. J Conscious Stud 14(7):28–43. http://kryten.mm.rpi.edu/jcsonebillion2.pdf

  • Bringsjord S (2008a) Declarative/logic-based cognitive modeling. In: Sun R (ed) The handbook of computational psychology. Cambridge University Press, Cambridge, pp 127–169. http://kryten.mm.rpi.edu/sb_lccm_ab-toc_031607.pdf

  • Bringsjord S (2008b) The logicist manifesto: at long last let logic-based AI become a field unto itself. J Appl Logic 6(4):502–525. http://kryten.mm.rpi.edu/SB_LAI_Manifesto_091808.pdf

  • Bringsjord S, Ferrucci D (1998) Logic and artificial intelligence: divorced, still married, separated? Minds Mach 8:273–308

  • Bringsjord S, Govindarajulu NS (2013) Toward a modern geography of minds, machines, and math. In: Müller VC (ed) Philosophy and theory of artificial intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 5. Springer, New York, pp 151–165. doi:10.1007/978-3-642-31674-6_11, http://www.springerlink.com/content/hg712w4l23523xw5

  • Bringsjord S, Licato J (2015a) By disanalogy, cyberwarfare is utterly new. Philos Technol 28(3):339–358. http://kryten.mm.rpi.edu/SB_JL_cyberwarfare_disanalogy_DRIVER_final.pdf

  • Bringsjord S, Licato J (2015b) Crossbows, von Clausewitz, and the eternality of software shrouds: reply to Christianson. Philos Technol 28(3):365–367. http://kryten.mm.rpi.edu/SB_JL_on_BC.pdf (the URL here is to a preprint only)

  • Bringsjord S, Taylor J (2012) The divine-command approach to robot ethics. In: Lin P, Bekey G, Abney K (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 85–108. http://kryten.mm.rpi.edu/Divine-Command_Roboethics_Bringsjord_Taylor.pdf

  • Bringsjord S, Zenzen M (2003) Superminds: people harness hypercomputation, and more. Kluwer Academic Publishers, Dordrecht

  • Bringsjord S, Arkoudas K, Bello P (2006) Toward a general logicist methodology for engineering ethically correct robots. IEEE Intell Syst 21(4):38–44. http://kryten.mm.rpi.edu/bringsjord_inference_robot_ethics_preprint.pdf

  • Bringsjord S, Kellett O, Shilliday A, Taylor J, van Heuveln B, Yang Y, Baumes J, Ross K (2006) A new Gödelian argument for hypercomputing minds based on the busy beaver problem. Appl Math Comput 176:516–530

  • Bringsjord S, Taylor J, Shilliday A, Clark M, Arkoudas K (2008) Slate: an argument-centered intelligent assistant to human reasoners. In: Grasso F, Green N, Kibble R, Reed C (eds) Proceedings of the 8th international workshop on computational models of natural argument (CMNA 8). University of Patras, Patras, Greece, pp 1–10. http://kryten.mm.rpi.edu/Bringsjord_etal_Slate_cmna_crc_061708.pdf

  • Chellas BF (1980) Modal logic: an introduction. Cambridge University Press, Cambridge

  • Chisholm R (1963) Contrary-to-duty imperatives and deontic logic. Analysis 24:33–36

  • Chisholm R (1982) Supererogation and offence: a conceptual scheme for ethics. In: Chisholm R (ed) Brentano and meinong studies. Humanities Press, Atlantic Highlands, pp 98–113

  • Chisholm R (1986) Brentano and intrinsic value. Cambridge University Press, Cambridge

  • Clark M (2008) Cognitive illusions and the lying machine. PhD thesis, Rensselaer Polytechnic Institute (RPI)

  • Davis M, Sigal R, Weyuker E (1994) Computability, complexity, and languages: fundamentals of theoretical computer science. Academic Press, New York

  • Ganascia JG (2007) Modeling ethical rules of lying with answer set programming. Ethics Inf Technol 9:39–47

  • Govindarajulu NS, Bringsjord S (2015) Ethical regulation of robots must be embedded in their operating systems. In: Trappl R (ed) A construction manual for robots’ ethical systems: requirements, methods, implementations. Springer, Basel, pp 85–100. http://kryten.mm.rpi.edu/NSG_SB_Ethical_Robots_Op_Sys_0120141500.pdf

  • Ladd J (1957) The structure of a moral code. Harvard University Press, Cambridge

  • Malle BF, Guglielmo S, Monroe A (2012) Moral, cognitive, and social: the nature of blame. In: Forgas J, Fiedler K, Sedikides C (eds) Social thinking and interpersonal behavior. Psychology Press, Philadelphia, pp 313–331

  • Powers T (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4)

  • Saptawijaya A, Pereira LM (2016) The potential of logic programming as a computational tool to model morality. In: Trappl R (ed) A construction manual for robots’ ethical systems, Springer, Cham. http://centria.di.fct.unl.pt/~lmp/publications/online-papers/ofai_book.pdf

  • Schermerhorn P, Kramer J, Brick T, Anderson D, Dingler A, Scheutz M (2006) DIARC: A testbed for natural human-robot interactions. In: Proceedings of AAAI 2006 mobile robot workshop

  • Scheutz M, Arnold T (2016) Feats without heroes: norms, means, and ideal robotic action. Front Robot AI

  • Schmitt M (ed) (2013) Tallinn manual on the international law applicable to cyber warfare. Cambridge University Press, Cambridge, UK. (This volume was first published in 2011. While M Schmitt is the General Editor, there were numerous contributors, falling under the phrase ‘International Group of Experts at the Invitation of the NATO Cooperative Cyber Defence Centre of Excellence’.)

  • Suppes P (1972) Axiomatic set theory. Dover Publications, New York

  • Urmson JO (1958) Saints and heroes. In: Melden A (ed) Essays in moral philosophy. University of Washington Press, Seattle, pp 198–216

  • von Wright G (1951) Deontic logic. Mind 60:1–15

  • Winfield A, Blum C, Liu W (2014) Towards an ethical robot: internal models, consequences and ethical action selection. In: Mistry M, Leonardis A, Witkowski M, Melhuish C (eds) Advances in autonomous robotics systems. Lecture notes in computer science (LNCS), vol 8717. Springer, Cham, pp 85–96

  • Youpa A (2013) Leibniz’s ethics. In: Zalta E (ed) The Stanford encyclopedia of philosophy. The Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University. http://plato.stanford.edu/entries/leibniz-ethics

Acknowledgements

Bringsjord is profoundly grateful for support provided by two grants from U.S. ONR to explore robot ethics, and to co-investigators M. Scheutz (PI, MURI; Tufts University), B. Malle (Co-PI, MURI; Brown University), M. Sei (Co-PI, MURI; RPI), and R. Sun (PI, Moral Dilemmas; RPI) for invaluable collaboration of the highest order.

Author information

Correspondence to Selmer Bringsjord.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Bringsjord, S. (2017). A 21st-Century Ethical Hierarchy for Robots and Persons: \(\mathscr {E \! H}\). In: Aldinhas Ferreira, M., Silva Sequeira, J., Tokhi, M., Kadar, E., Virk, G. (eds) A World with Robots. Intelligent Systems, Control and Automation: Science and Engineering, vol 84. Springer, Cham. https://doi.org/10.1007/978-3-319-46667-5_4

  • DOI: https://doi.org/10.1007/978-3-319-46667-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46665-1

  • Online ISBN: 978-3-319-46667-5

  • eBook Packages: Engineering, Engineering (R0)
