
A Typology of Liability Rules for Robot Harms

Chapter in: A World with Robots

Part of the book series: Intelligent Systems, Control and Automation: Science and Engineering (ISCA, volume 84)

Abstract

This paper considers non-contractual liability for harms caused by (artificially) intelligent systems. It provides a typology of different ways to approach the liability issue, exemplified by some new technologies that have been, or are about to be, introduced into human society. The paper argues that the traditional robot-as-tool perspective should be maintained, but warns that this might not be possible unless we develop corresponding technologies for efficient responsibility tracking. Specifically, new techniques need to be developed, at the intersection between computer science and law, to support reasoning about the liability implications when autonomous technologies interact with their environment and cause harms.
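
To make the abstract's call for "efficient responsibility tracking" concrete, the sketch below shows one form such a mechanism could take in software. It is purely illustrative: the names DecisionRecord and ResponsibilityLog, and the example parties, are assumptions of mine, not constructs proposed in the chapter. The idea is simply that every autonomous decision is logged together with the parties standing behind it, so that a harmful action can later be traced back to those parties for the purposes of liability reasoning.

# A minimal, hypothetical sketch of "responsibility tracking". All names here
# (DecisionRecord, ResponsibilityLog, the example parties) are illustrative
# assumptions, not constructs from the chapter.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import FrozenSet, List, Set


@dataclass(frozen=True)
class DecisionRecord:
    """One autonomous decision: inputs observed, action taken, parties behind it."""
    timestamp: datetime
    component: str               # subsystem that acted, e.g. "arm-controller"
    inputs: str                  # summary of sensor/operator inputs observed
    action: str                  # what the system actually did
    parties: FrozenSet[str]      # manufacturer, operator, software supplier, ...


class ResponsibilityLog:
    """Append-only log supporting after-the-fact liability queries."""

    def __init__(self) -> None:
        self._records: List[DecisionRecord] = []

    def record(self, component: str, inputs: str, action: str,
               parties: Set[str]) -> None:
        # Each decision is stored with a timestamp and the responsible parties.
        self._records.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc),
            component=component, inputs=inputs, action=action,
            parties=frozenset(parties)))

    def trace(self, action: str) -> Set[str]:
        """Return every party linked to a decision that produced `action`."""
        return {p for r in self._records
                if r.action == action for p in r.parties}


if __name__ == "__main__":
    log = ResponsibilityLog()
    log.record("arm-controller", "tremor filter active", "clamp vessel",
               {"manufacturer", "hospital"})
    log.record("arm-controller", "operator override", "release clamp",
               {"surgeon"})
    print(log.trace("release clamp"))   # -> {'surgeon'}

Under the robot-as-tool perspective defended in the chapter, a court could use such a trace to apportion fault among manufacturer, operator, and software supplier, rather than treating the robot itself as a subject of liability.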


Notes

  1. See, generally, Cerka et al. (2015); Hallevy (2015); Vladeck (2014); Allain (2013); Gurney (2013).

  2. See Palmerini et al. (2014, 175).

  3. See, generally, Howells and Owen (2010).

  4. See Howells and Owen (2010, 241).

  5. The benefits of using robots to perform surgery are obvious: robot hands are steady and precise, more so than the hands of human surgeons. In addition, the da Vinci system makes it possible for human doctors to perform surgery from afar, so they can help people without being physically present. The system has proved successful, and so far it has been deployed to around 2500 hospitals around the world. See Kirkpatrick (2014, 14).

  6. Apparently, by October 2012 around 3000 claims had been submitted; see Moylan (2014).

  7. Mracek v Bryn Mawr Hospital, 610 F Supp 2d 401 (ED Pa 2009), aff’d, 363 F App’x 925 (3d Cir 2010).

  8. For a more in-depth discussion of the case, see Pagallo (2013, 91–95) and Goldberg (2012).

  9. This is particularly likely to become a problem when, as in the case of da Vinci, the manufacturer is a de facto knowledge monopolist, with access to privileged information about how the system actually works. See Goldberg (2012, 249) (“Because Intuitive has a monopoly, as well as limited expert witnesses, manipulation of an industry standard by such a dominant player is likely in a situation where they would be protecting themselves in a products liability lawsuit.”). This also points to the important interactions between liability rules and intellectual property rights, a connection that could provide a strong incentive for large technology firms to favour an approach based on insurance payments and new forms of strict liability. In this way, these firms might hope to avoid in-depth public scrutiny of their technology, both its merits and its shortcomings.

  10. Following this, the collective as such can be held liable, either proportionally to each member’s causal contribution or else in solidum (joint liability). See, e.g., Wright (1992).

  11. See, e.g., Hamdani and Klement (2008).

  12. See, generally, Allain (2013).

  13. Allain (2013, 1077).

  14. Allain (2013, 1075).

  15. Allain refers to its status as that of a “quasi-legal person”. See Allain (2013).

  16. This should be contrasted with doctrines of strict liability, where other notions act as a substitute for culpa, e.g., the notion of a defect in products liability law. See, e.g., Calabresi and Hirschoff (1972, 1055–1056). From a conceptual and socio-legal perspective, it should also be kept in mind that causation itself might be used as a partly normative notion; see, generally, Hitchcock and Knobe (2009).

  17. See, generally, Vladeck (2014).

  18. Vladeck (2014, 146).

  19. Vladeck (2014, 149).

  20. Vladeck argues that this is close to the legal doctrine of res ipsa loquitur (going as far as to state that it is merely a “restatement” of it). See Vladeck (2014, 128). I do not agree. According to Vladeck’s proposal, liability will be inferred even in the complete absence of circumstantial or statistical evidence suggesting a design flaw; in fact, it might be inferred even in the presence of conclusive evidence that the autonomous car is a safer driver than most humans. The proposal therefore appears very different from the original doctrine of res ipsa loquitur, which is a principle used to infer negligence from circumstantial evidence. See, generally, Johnson (1997).

  21. Vladeck suggests that a vicarious liability system should be built on this basis, so that human actors can be held accountable for the agency of the cars even if they are entirely blameless. Specifically, he proposes an enterprise model involving not only the manufacturer but also the designers of the AI software and other technology suppliers; see Vladeck (2014, 148–149).

  22. Vladeck (2014, 145–149).

  23. Vladeck (2014, 150).

  24. See, e.g., Chopra and White (2011, 189–191); Koops et al. (2010, 560–561) (“The majority view in the literature is that sooner or later, limited legal personhood with strict liability is a good solution for solving the accountability gap, particularly in contracting, and for electronic agents, this may be sooner rather than later”). For a bolder approach, arguing that notions of criminal liability should apply to artificial intelligences, see Hallevy (2013, 177–178).

  25. See, generally, Rosen (2011).

  26. See, e.g., Bachmann (2013).

  27. See Strawser (2010); Arkin (2010).

  28. See Arkin (2010).

  29. See, generally, Sharkey (2010).

  30. See, generally, Keller (2012).

  31. Keller (2012).

  32. Keller (2012).

  33. See Keller (2012).

  34. Two famous examples are the 2003 Italian blackout, which left virtually the entire country without electricity, and the 2011 Southwest blackout, which affected more than 7 million people in the US. See, generally, Buldyrev et al. (2010).

  35. See Goodley (2015). In my opinion, this approach to liability for the flash crash is highly problematic, particularly as the damage created by cascades will tend to be completely out of proportion to any mistakes, or even bad intentions, that originally set them in motion.

  36. See Smith (2014).

  37. See Hope (2015).

  38. The REINS project at Utrecht University, with which I am affiliated, aims to make a contribution in this regard; see Broersen (2014).

  39. There is interesting work being carried out in this direction, but it remains rather embryonic; see, e.g., Dennis et al. (2015).

References

  • Allain JA (2013) From Jeopardy! to jaundice: the medical liability implications of Dr. Watson and other artificial intelligence systems. La Law Rev 73:1049

  • Arkin RC (2010) The case for ethical autonomy in unmanned systems. J Mil Ethics 9(4):332–341

  • Bachmann SD (2013) Targeted killings: contemporary challenges, risks and opportunities. J Confl Secur Law 18(2):259

  • Broersen J (2014) Responsible intelligent systems. KI - Künstliche Intelligenz 28(3):209–214

  • Buldyrev S, Parshani R, Paul G, Stanley H, Havlin S (2010) Catastrophic cascade of failures in interdependent networks. Nature 464(7291):1025–1028

  • Calabresi G, Hirschoff JT (1972) Toward a test for strict liability in torts. Yale Law J 81(6):1055–1085

  • Cerka P, Grigiene J, Sirbikyte G (2015) Liability for damages caused by artificial intelligence. Comput Law Secur Rev 31(3):376–389

  • Chopra S, White LF (2011) Legal theory for autonomous artificial agents. University of Michigan Press

  • Dennis LA, Fisher M, Winfield AFT (2015) Towards verifiably ethical robot behaviour. AAAI-15 Workshop on AI and Ethics (to appear)

  • Goldberg M (2012) The robotic arm went crazy! The problem of establishing liability in a monopolized field. Rutgers Comput Technol Law J 38(2):225

  • Goodley S (2015) ‘Flash crash trader’ Navinder Singh Sarao loses bail appeal. The Guardian. http://www.theguardian.com. Accessed 31 Mar 2016

  • Gurney JK (2013) Sue my car not me: products liability and accidents involving autonomous vehicles. Univ Ill J Law Technol Policy 2013(2):247–277

  • Hallevy G (2013) When robots kill: artificial intelligence under criminal law. Northeastern University Press

  • Hallevy G (2015) Liability for crimes involving artificial intelligence systems. Springer

  • Hamdani A, Klement A (2008) Corporate crime and deterrence. Stanford Law Rev 61(2):271–310

  • Hitchcock C, Knobe J (2009) Cause and norm. J Philos 106(11):587–612

  • Hope B (2015) Lawsuit against exchanges over ‘unfair advantage’ for high-frequency traders dismissed. Wall Street J. http://www.wsj.com. Accessed 31 Mar 2016

  • Howells G, Owen DG (2010) Products liability law in America and Europe. In: Howells G, Ramsay I, Wilhelmsson T, Kraft D (eds) Handbook of research on international consumer law. Edward Elgar Publishing, chap 9, pp 224–256

  • Johnson MR (1997) Rolling the “barrel” a little further: allowing res ipsa loquitur to assist in proving strict liability in tort manufacturing defects. William Mary Law Rev 38(3):1197–1255

  • Keller AJ (2012) Robocops: regulating high frequency trading after the flash crash of 2010. Ohio State Law J 73(6):1457

  • Kirkpatrick K (2014) Surgical robots deliver care more precisely. Commun ACM 57(8):14

  • Koops BJ, Hildebrandt M, Jaquet-Chiffelle DO (2010) Bridging the accountability gap: rights for new entities in the information society? Minn J Law Sci Technol 11:497

  • Moylan T (2014) Da Vinci surgical robot maker reserves $67M to settle product liability claims. https://www.lexisnexis.com/legalnewsroom. Accessed 31 Mar 2016

  • Pagallo U (2013) The laws of robots: crimes, contracts, and torts. Springer

  • Palmerini E, Azzarri F, Battaglia F, Bertolini A, Carnevale A, Carpaneto J, Cavallo F, Carlo AD, Cempini M, Controzzi M, Koops BJ, Lucivero F, Mukerji N, Nocco L, Pirni A, Shah H, Salvini P, Schellekens M, Warwick K (2014) RoboLaw: guidelines on regulating robotics. http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf. Accessed 31 Mar 2016

  • Rosen RD (2011) Drones and the U.S. courts. William Mitchell Law Rev 37:5280

  • Sharkey N (2010) Saying ‘no!’ to lethal autonomous targeting. J Mil Ethics 9(4):369–383

  • Smith A (2014) Fast money: the battle against the high frequency traders. The Guardian. http://www.theguardian.com. Accessed 31 Mar 2016

  • Strawser BJ (2010) Moral predators: the duty to employ uninhabited aerial vehicles. J Mil Ethics 9(4):342–368

  • Vladeck DC (2014) Machines without principals: liability rules and artificial intelligence. Wash Law Rev 89:117

  • Wright RW (1992) The logic and fairness of joint and several liability. Memphis State Univ Law Rev 23(1):45–84


Author information

Correspondence to Sjur Dyrkolbotn.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Dyrkolbotn, S. (2017). A Typology of Liability Rules for Robot Harms. In: Aldinhas Ferreira, M., Silva Sequeira, J., Tokhi, M., Kadar, E.E., Virk, G. (eds) A World with Robots. Intelligent Systems, Control and Automation: Science and Engineering, vol 84. Springer, Cham. https://doi.org/10.1007/978-3-319-46667-5_9


  • DOI: https://doi.org/10.1007/978-3-319-46667-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46665-1

  • Online ISBN: 978-3-319-46667-5

  • eBook Packages: Engineering (R0)
