Abstract
The doctrine of double effect (\(\mathcal{DDE}\)) is an ethical principle that can account for human judgment in moral dilemmas: situations in which all available options have large good and bad consequences. We have previously formalized \(\mathcal{DDE}\) in a computational logic that can be implemented in robots. \(\mathcal{DDE}\), as an ethical principle for robots, is attractive for a number of reasons: (1) empirical studies have found that \(\mathcal{DDE}\) is used by untrained humans; (2) many legal systems use \(\mathcal{DDE}\); and finally, (3) the doctrine is a hybrid of the two major opposing families of ethical theories (consequentialist/utilitarian theories versus deontological theories). In spite of all its attractive features, we have found that \(\mathcal{DDE}\) does not fully account for human behavior in many ethically challenging situations. Specifically, standard \(\mathcal{DDE}\) fails in situations wherein humans have the option of self-sacrifice. Accordingly, we present an enhancement of our \(\mathcal{DDE}\)-formalism to handle self-sacrifice; we end by looking ahead to future work.
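For readers new to the doctrine, the traditional informal clauses of \(\mathcal{DDE}\) can be sketched as follows (our paraphrase of the widely cited informal formulation; the chapter itself works with fully formal conditions):

```latex
% Informal clauses of the doctrine of double effect (paraphrase).
% An action with both a good and a bad effect is permissible only if:
\begin{enumerate}
  \item the action itself, considered apart from its effects, is not forbidden;
  \item the good effect is what the agent intends;
  \item the bad effect is not intended, neither as an end in itself
        nor as a means to the good effect (it is merely foreseen);
  \item the good effect sufficiently outweighs the bad effect.
\end{enumerate}
```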
Notes
- 1.
Full formalization of \(\mathcal{DDE}\) would include conditions expressing the requirement that the agent in question has certain emotions and lacks certain other emotions (e.g., the agent cannot have delectatio morosa). On the strength of Ghosh’s Felmë theory of emotion, which formalizes (apparently all) human emotions in the language of cognitive calculus as described in the present paper, we are actively working in this direction.
- 2.
The blue/red terminology is common in wargaming and offers, in the minds of many, a relatively neutral way to talk about politically charged situations.
- 3.
We leave out the counterfactual condition \(\mathbf{C}_5\) as it is typically excluded in standard treatments of \(\mathcal{DDE}\).
- 4.
- 5.
An overview of this list is given lucidly in [16].
- 6.
- 7.
More precisely, we allow such formulae to be interpreted in this way. Strictly speaking, in our proof-theoretic orientation, even a material conditional such as \((\phi \wedge \psi ) \rightarrow \psi \) is true because it can be proved to hold in “background logic.” Readers interested in how background logic appears on the scene as soon as mathematical (extensional deductive) logic is introduced are encouraged to consult [8].
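As a concrete illustration (ours, not part of the original chapter), the conditional in question is provable by the introduction and elimination rules of background logic alone; a minimal sketch in Lean:

```lean
-- (φ ∧ ψ) → ψ holds for arbitrary propositions, with no extra
-- axioms: assume the conjunction and project out its right conjunct.
example (φ ψ : Prop) : (φ ∧ ψ) → ψ :=
  fun h => h.right
```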
- 8.
The prover is available in both Java and Common Lisp and can be obtained at: https://github.com/naveensundarg/prover. The underlying first-order prover is SNARK, available at: http://www.ai.sri.com/~stickel/snark.html.
- 9.
- 10.
The code is available at https://goo.gl/JDWzi6. For further experimentation with and exploration of \(\mathcal{DDE}\), we are working on physical, 3D simulations, rather than only virtual simulations in pure software. Space constraints make it impossible to describe the “cognitive polysolid framework” in question (which can be used for simple trolley problems), development of which is currently principally the task of Matt Peveler.
References
Allsopp ME (2011) The doctrine of double effect in US law: exploring Neil Gorsuch’s analyses. Natl Cathol Bioeth Q 11(1):31–40
Arkoudas K, Bringsjord S (2008) Toward formalizing common-sense psychology: an analysis of the false-belief task. In: Ho TB, Zhou ZH (eds) Proceedings of the tenth pacific rim international conference on artificial intelligence (PRICAI 2008), Springer-Verlag, no. 5351 in Lecture Notes in Artificial Intelligence (LNAI), pp 17–29. http://kryten.mm.rpi.edu/KA_SB_PRICAI08_AI_off.pdf
Arkoudas K, Bringsjord S (2009) Propositional attitudes and causation. Int J Softw Inform 3(1):47–65. http://kryten.mm.rpi.edu/PRICAI_w_sequentcalc_041709.pdf
Bringsjord S (2017) A 21st-century ethical hierarchy for robots and persons: \(\mathcal{EH}\). In: A world with robots: international conference on robot ethics: ICRE 2015, Springer, Lisbon, Portugal, vol 84, p 47
Bringsjord S, Govindarajulu NS (2013) Toward a modern geography of minds, machines, and math. In: Müller VC (ed) Philosophy and theory of artificial intelligence, studies in applied philosophy, epistemology and rational ethics, vol 5, Springer, New York, NY, pp 151–165. https://doi.org/10.1007/978-3-642-31674-6_11, http://www.springerlink.com/content/hg712w4l23523xw5
Bringsjord S, Govindarajulu NS, Thero D, Si M (2014) Akratic robots and the computational logic thereof. In: Proceedings of ETHICS\(\bullet \) 2014 (2014 IEEE symposium on ethics in engineering, science, and technology), Chicago, IL, pp 22–29. IEEE Catalog Number: CFP14ETI-POD
Cushman F, Young L, Hauser M (2006) The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychol Sci 17(12):1082–1089
Ebbinghaus HD, Flum J, Thomas W (1994) Mathematical logic, 2nd edn. Springer-Verlag, New York, NY
Francez N, Dyckhoff R (2010) Proof-theoretic semantics for a natural language fragment. Linguist Philos 33:447–477
Gentzen G (1935) Investigations into logical deduction. In: Szabo ME (ed) The collected papers of Gerhard Gentzen. North-Holland, Amsterdam, The Netherlands, pp 68–131. This is an English translation of the well-known 1935 German original
Govindarajulu NS, Bringsjord S (2017) On automating the doctrine of double effect. In: Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI 2017), Melbourne, Australia. Preprint available at https://arxiv.org/abs/1703.08922
Hauser M, Cushman F, Young L, Kang-Xing Jin R, Mikhail J (2007) A dissociation between moral judgments and justifications. Mind Lang 22(1):1–21
Huxtable R (2004) Get out of jail free? The doctrine of double effect in English law. Palliat Med 18(1):62–68
Kamm FM (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, New York, NY
Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many?: people apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, ACM, Portland, USA, pp 117–124
McNamara P (2014) Deontic logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, winter 2014 edn. Metaphysics Research Lab, Stanford University
Mueller E (2006) Commonsense reasoning: an event calculus based approach. Morgan Kaufmann, San Francisco, CA. This is the first edition of the book; the second edition was published in 2014
Pereira LM, Saptawijaya A (2016) Counterfactuals, logic programming and agent morality. In: Rahman S, Redmond J (eds) Logic, argumentation and reasoning. Springer, pp 85–99
Pollock J (1976) Subjunctive Reasoning. D. Reidel, Dordrecht, Holland & Boston, USA
Rao AS, Georgeff MP (1991) Modeling rational agents within a BDI-architecture. In: Fikes R, Sandewall E (eds) Proceedings of knowledge representation and reasoning (KR&R-91), Morgan Kaufmann, San Mateo, CA, pp 473–484
Sachdeva S, Iliev R, Ekhtiari H, Dehghani M (2015) The role of self-sacrifice in moral dilemmas. PLoS ONE 10(6):e0127409
Acknowledgements
The research described above has been in no small part enabled by generous support from ONR (morally competent machines and the cognitive calculi upon which they are based) and AFOSR (unprecedentedly high computational intelligence achieved via automated reasoning), and we are deeply grateful for this funding.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Govindarajulu, N.S., Bringsjord, S., Ghosh, R., Peveler, M. (2019). Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice. In: Aldinhas Ferreira, M., Silva Sequeira, J., Singh Virk, G., Tokhi, M., Kadar, E.E. (eds) Robotics and Well-Being. Intelligent Systems, Control and Automation: Science and Engineering, vol 95. Springer, Cham. https://doi.org/10.1007/978-3-030-12524-0_5
Print ISBN: 978-3-030-12523-3
Online ISBN: 978-3-030-12524-0