AI &amp; Society, Volume 34, Issue 1, pp 129–136

The rise of the robots and the crisis of moral patiency

  • John Danaher
Open Forum


Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in popular culture. Though that argument turns out to be hyperbolic and satirical, it proves instructive because it illustrates a way in which the rise of the robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.


Keywords: Robotics · Artificial intelligence · Technological unemployment · Algocracy · Moral agents · Moral patients



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2017

Authors and Affiliations

  1. School of Law, NUI Galway, Galway, Ireland