Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice

Chapter in Robotics and Well-Being

Abstract

The doctrine of double effect (\(\mathcal {{DDE}}\)) is an ethical principle that can account for human judgment in moral dilemmas: situations in which all available options have large good and bad consequences. We have previously formalized \(\mathcal {{DDE}}\) in a computational logic that can be implemented in robots. \(\mathcal {{DDE}}\), as an ethical principle for robots, is attractive for a number of reasons: (1) Empirical studies have found that \(\mathcal {{DDE}}\) is used by untrained humans; (2) many legal systems use \(\mathcal {{DDE}}\); and finally, (3) the doctrine is a hybrid of the two major opposing families of ethical theories (consequentialist/utilitarian theories versus deontological theories). In spite of all its attractive features, we have found that \(\mathcal {{DDE}}\) does not fully account for human behavior in many ethically challenging situations. Specifically, standard \(\mathcal {{DDE}}\) fails in situations wherein humans have the option of self-sacrifice. Accordingly, we present an enhancement of our \(\mathcal {{DDE}}\)-formalism to handle self-sacrifice; we end by looking ahead to future work.
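For orientation, the doctrine's four traditional clauses can be stated informally as follows; the labels \(F_1\)–\(F_4\) are illustrative only and need not match the authors' formal conditions \(\mathbf{C}_1\)–\(\mathbf{C}_4\):

```latex
% Informal statement of the four traditional DDE clauses.
% The labels F1--F4 are an illustrative numbering, not the
% authors' formal C-conditions.
\begin{enumerate}
  \item[$F_1$:] The action itself is morally neutral or good
        (i.e., not intrinsically forbidden).
  \item[$F_2$:] The agent intends only the good effect; the bad
        effect is not intended as an end.
  \item[$F_3$:] The bad effect is not intended as a means to
        bringing about the good effect.
  \item[$F_4$:] Proportionality: the good achieved outweighs the
        bad brought about.
\end{enumerate}
```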


Notes

  1.

    Full formalization of \(\mathcal {{DDE}}\) would include conditions expressing the requirement that the agent in question has certain emotions and lacks certain other emotions (e.g., the agent cannot have delectatio morosa). On the strength of Ghosh’s Felmë theory of emotion, which formalizes (apparently all) human emotions in the language of cognitive calculus as described in the present paper, we are actively working in this direction.

  2.

    The blue/red terminology is common in wargaming and offers what many regard as a relatively neutral way to talk about politically charged situations.

  3.

    We leave out the counterfactual condition \(\mathbf {C}_5\) as it is typically excluded in standard treatments of \(\mathcal {{DDE}}\).

  4.

    Technically, in the inaugural [2, 3], the straight event calculus is not used; rather, it is enhanced and embedded within common knowledge, the operator for which is C.

  5.

    An overview of this list is given lucidly in [16].

  6.

    Placing limits on the layers of any intensional operators is easily regimented. See [2, 3].

  7.

    More precisely, we allow such formulae to be interpreted in this way. Strictly speaking, even the “meaning” of a material conditional such as \((\phi \wedge \psi) \rightarrow \psi \), in our proof-theoretic orientation, obtains because the conditional can be proved to hold in “background logic.” Readers interested in how background logic appears on the scene as soon as mathematical (extensional deductive) logic is introduced are encouraged to consult [8].

  8.

    The prover is available in both Java and Common Lisp and can be obtained at: https://github.com/naveensundarg/prover. The underlying first-order prover is SNARK, available at: http://www.ai.sri.com/~stickel/snark.html.

  9.

    The definition of \(\rhd \) is inspired by Pollock’s [19] treatment, and while similarities can be found to the approach in [18], we note that this definition requires at least first-order logic.

  10.

    The code is available at https://goo.gl/JDWzi6. For further experimentation with and exploration of \(\mathcal {{DDE}}\), we are working on physical, 3D simulations, rather than only virtual simulations in pure software. Space constraints make it impossible to describe the “cognitive polysolid framework” in question (which can be used for simple trolley problems), development of which is currently principally the task of Matt Peveler.
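The background-logic provability mentioned in note 7 can be illustrated with a one-step Gentzen-style natural-deduction derivation (cf. [10]), here written in ordinary LaTeX inference notation:

```latex
% Derivation showing that (phi /\ psi) -> psi is a theorem of
% background logic: discharge the hypothesis [phi /\ psi] after
% conjunction elimination.
\[
\dfrac{\dfrac{[\phi \wedge \psi]^{1}}{\psi}\;{\scriptstyle \wedge\text{-Elim}}}
      {(\phi \wedge \psi) \rightarrow \psi}\;{\scriptstyle \rightarrow\text{-Intro}^{1}}
\]
```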

References

  1. Allsopp ME (2011) The doctrine of double effect in US law: exploring Neil Gorsuch’s analyses. Natl Cathol Bioeth Q 11(1):31–40

  2. Arkoudas K, Bringsjord S (2008) Toward formalizing common-sense psychology: an analysis of the false-belief task. In: Ho TB, Zhou ZH (eds) Proceedings of the tenth Pacific Rim international conference on artificial intelligence (PRICAI 2008), Springer-Verlag, no. 5351 in Lecture Notes in Artificial Intelligence (LNAI), pp 17–29. http://kryten.mm.rpi.edu/KA_SB_PRICAI08_AI_off.pdf

  3. Arkoudas K, Bringsjord S (2009) Propositional attitudes and causation. Int J Softw Inform 3(1):47–65. http://kryten.mm.rpi.edu/PRICAI_w_sequentcalc_041709.pdf

  4. Bringsjord S (2017) A 21st-century ethical hierarchy for robots and persons: \(\cal{EH}\). In: A world with robots: international conference on robot ethics: ICRE 2015, Springer, Lisbon, Portugal, vol 84, p 47

  5. Bringsjord S, Govindarajulu NS (2013) Toward a modern geography of minds, machines, and math. In: Müller VC (ed) Philosophy and theory of artificial intelligence, studies in applied philosophy, epistemology and rational ethics, vol 5, Springer, New York, NY, pp 151–165. https://doi.org/10.1007/978-3-642-31674-6_11, http://www.springerlink.com/content/hg712w4l23523xw5

  6. Bringsjord S, Govindarajulu NS, Thero D, Si M (2014) Akratic robots and the computational logic thereof. In: Proceedings of ETHICS\(\bullet \) 2014 (2014 IEEE symposium on ethics in engineering, science, and technology), Chicago, IL, pp 22–29. IEEE Catalog Number: CFP14ETI-POD

  7. Cushman F, Young L, Hauser M (2006) The role of conscious reasoning and intuition in moral judgment testing three principles of harm. Psychol Sci 17(12):1082–1089

  8. Ebbinghaus HD, Flum J, Thomas W (1994) Mathematical logic, 2nd edn. Springer-Verlag, New York, NY

  9. Francez N, Dyckhoff R (2010) Proof-theoretic semantics for a natural language fragment. Linguist Philos 33:447–477

  10. Gentzen G (1935) Investigations into logical deduction. In: Szabo ME (ed) The collected papers of Gerhard Gentzen. North-Holland, Amsterdam, The Netherlands, pp 68–131. This is an English version of the well-known 1935 German original

  11. Govindarajulu NS, Bringsjord S (2017) On automating the doctrine of double effect. In: Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI 2017), Melbourne, Australia. Preprint available at https://arxiv.org/abs/1703.08922

  12. Hauser M, Cushman F, Young L, Kang-Xing Jin R, Mikhail J (2007) A dissociation between moral judgments and justifications. Mind Lang 22(1):1–21

  13. Huxtable R (2004) Get out of jail free? The doctrine of double effect in English law. Palliat Med 18(1):62–68

  14. Kamm FM (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, New York, NY

  15. Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many?: people apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, ACM, Portland, USA, pp 117–124

  16. McNamara P (2014) Deontic logic. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy, Winter 2014 edn. Metaphysics Research Lab, Stanford University

  17. Mueller E (2006) Commonsense reasoning: an event calculus based approach. Morgan Kaufmann, San Francisco, CA, This is the first edition of the book. The second edition was published in 2014

  18. Pereira LM, Saptawijaya A (2016) Counterfactuals, logic programming and agent morality. In: Rahman S, Redmond J (eds) Logic, argumentation and reasoning. Springer, pp 85–99

  19. Pollock J (1976) Subjunctive Reasoning. D. Reidel, Dordrecht, Holland & Boston, USA

  20. Rao AS, Georgeff MP (1991) Modeling rational agents within a BDI-architecture. In: Fikes R, Sandewall E (eds) Proceedings of knowledge representation and reasoning (KR&R-91), Morgan Kaufmann, San Mateo, CA, pp 473–484

  21. Sachdeva S, Iliev R, Ekhtiari H, Dehghani M (2015) The role of self-sacrifice in moral dilemmas. PLoS ONE 10(6):e0127409

Acknowledgements

The research described above has been in no small part enabled by generous support from ONR (morally competent machines and the cognitive calculi upon which they are based) and AFOSR (unprecedentedly high computational intelligence achieved via automated reasoning), and we are deeply grateful for this funding.

Author information

Correspondence to Naveen Sundar Govindarajulu.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Govindarajulu, N.S., Bringsjord, S., Ghosh, R., Peveler, M. (2019). Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice. In: Aldinhas Ferreira, M., Silva Sequeira, J., Singh Virk, G., Tokhi, M., Kadar, E. (eds) Robotics and Well-Being. Intelligent Systems, Control and Automation: Science and Engineering, vol 95. Springer, Cham. https://doi.org/10.1007/978-3-030-12524-0_5
