
Does Future Society Need Legal Personhood for Robots and AI?


Abstract

If artificial entities such as autonomous robots become sentient beings, will it be necessary to give robots and AI entities a legal capacity comparable to legal personhood in a society that interacts with robotics and AI appliances? Must they understand the legal consequences of their actions? In this chapter, this question is considered by analyzing the future capacities and functions of robots and AI systems and the rights and duties of existing legal subjects: natural persons and (artificial) legal persons such as corporations and states. The question is posed whether AI will have the capacity to be sentient, as natural persons and perhaps other living beings are, or whether AI will always remain comparable to the subject in the Chinese room experiment. The relevance of free will, intelligence, and consciousness for natural persons to acquire legal personhood is therefore analyzed and compared with that of other beings, animals, and future sentient AI entities. The hesitance to grant legal personhood to AI is also influenced by the human conviction that doing so would increase the risk of losing control and of a “robot uprising.” Man, as always, is afraid of technology getting out of hand, is convinced of his own superiority, and therefore wants to stay in control. The question is whether there always has to be a natural person in the loop. In that light, the need for a certain legal personhood in a future legal framework, covering civil and even criminal liability, is discussed, including the considerations proposed in a resolution of the European Parliament that may eventually lead to proposals in European policy and law.

I am the eye in the sky

Looking at you

I can read your mind

I am the maker of rules

Dealing with fools

I can cheat you blind

And I don’t need to see any more

To know that

I can read your mind, I can read your mind

Alan Parsons Project: Eye in the Sky

“Personhood” can be read as “legal personality”.

This chapter is based on earlier articles, insights, and presentations by the author. The terms “robot” and “AI entity” are used interchangeably.


Notes

  1. 1.

    https://spectrum.ieee.org/the-human-os/biomedical/devices/in-fleshcutting-task-autonomous-robot-surgeon-beats-human-surgeons

  2. 2.

    Warwick K., et al. (2004). “Thought Communication and Control: A First Step Using Radiotelegraphy.” IEE Proceedings on Communications 151(3):185–189.

  3. 3.

    https://waitbutwhy.com/2017/04/neuralink.html

  4. 4.

See, e.g., https://www.nature.com/news/worldwide-brain-mapping-project-sparks-excitement-and-concern-1.20658; and https://www.humanbrainproject.eu/en/

  5. 5.

    https://bit.ly/2xfMToe

  6. 6.

    https://futureoflife.org/ai-principles/

  7. 7.

Ibidem.

  8. 8.

    The acceleration of technological progress has been the central feature of this century. We are on the edge of changes comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater [intellectual capacity] than human intelligence. See Vinge [1].

  9. 9.

A legal status for a robot can’t derive from the Natural Person model, (…) since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms, https://bit.ly/2xfMToe

  10. 10.

    Clarke [2], p. 14.

  11. 11.

    Titcomb (2017) AI is the biggest risk we face as a civilization, Elon Musk says. Available at: http://www.telegraph.co.uk/technology/2017/07/17/ai-biggest-risk-face-civilisation-elon-musk-says/. Accessed 11 October 2017.

  12. 12.

    Lauren Burkhart citing Miller and Bennett [3]; Burkhart [4].

  13. 13.

    Tjong Tjin Tai [5], p. 248.

  14. 14.

    Geldart [6], p. 94.

  15. 15.

    Richards and King [7], p. 394.

  16. 16.

    https://www.parliament.uk/documents/lords-committees/Artificial-Intelligence/AI-Written-Evidence-Volume.pdf

  17. 17.

    https://www.parliament.uk/documents/lords-committees/Artificial-Intelligence/AI-Government-Response.pdf

  18. 18.

    Berriat Saint-Prix [8].

  19. 19.

It is of vital importance for the legislature to consider all legal implications, all the more now that humankind stands on the threshold of an era in which ever more sophisticated robots, bots, androids, and other manifestations of AI seem poised to unleash a new industrial revolution that is likely to leave no stratum of society untouched; report Delvaux with recommendations to the Commission on Civil Law Rules on Robotics [9]; European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), A8-0005/2017.

  20. 20.

    Bertolini [10], p. 219. Compare also definition by “robotpark”: “A robot is a mechanical or virtual artificial agent (called “Bot”), usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. Robots can be autonomous, semi-autonomous or remotely controlled and range from humanoids such as ASIMO and TOPIO, to nanorobots, “swarm” robots and industrial robots. A robot may convey a sense of intelligence or thought of its own.”

  21. 21.

    Bertolini [10].

  22. 22.

    Solum [11], pp. 1238–1239.

  23. 23.

    Dutch Civil Code (Burgerlijk Wetboek, BW), Book 2, Article 1 and 2.

  24. 24.

    Ohlin [12], p. 210.

  25. 25.

    https://plato.stanford.edu/entries/epiphenomenalism/

  26. 26.

    American law was inconsistent in its constitution of the personality of slaves. While they were denied many of the rights of “persons” or “citizens,” they were still held responsible for their crimes which meant that they were persons to the extent that they were criminally accountable. The variable status of American slaves is discussed in Fagundes [13]; Naffine [14], p. 346.

  27. 27.

    Maximilian Koessler, The Person in Imagination or Persona Ficta of the Corporation, 9 La. L. Rev. (1949).

  28. 28.

    Hutchinson [15].

  29. 29.

    Naffine [14], p. 346.

  30. 30.

    Brownlie [16], p. 58.

  31. 31.

    Crawford [17], p. 17.

  32. 32.

    “All that can be said is that an entity of a type recognized by customary law as capable of possessing rights and duties and of bringing and being subjected to international claims is a legal person. If the latter condition is not satisfied, the entity concerned may have legal personality of a very restricted kind, dependent on the agreement or acquiescence of recognized legal persons and opposable on the international plane only to those agreeing or acquiescent.” Crawford [17], p. 117.

  33. 33.

    Naffine [14].

  34. 34.

    Hobbes [18].

  35. 35.

The word “person” is Latin, instead whereof the Greeks have “prosopon,” which signifies the face, as “persona” in Latin signifies the disguise, or outward appearance of a man, counterfeited on the stage; and sometimes more particularly that part of it which disguiseth the face, as a mask or vizard: and from the stage hath been translated to any representer of speech and action, as well in tribunals as theatres. Hobbes [18].

  36. 36.

In his commentary on Digestum Novum (48, 19; ed. 1996), Bartolus reckons that an artificial person is not really a person and, still, this fiction stands in the name of the truth, so that we, the jurists, establish it: “universitas proprie non est persona; tamen hoc est fictum pro vero, sicut ponimus nos iuristae.” This idea triumphs with legal positivism and formalism in the mid-nineteenth century. In the System of Modern Roman Law (1840–1849), ed. (1979), Friedrich Carl von Savigny claims that “only human fellows properly have rights and duties of their own, even though it is in the power of the law to grant such rights of personhood to anything, e.g., business corporations, governments, ships in maritime law, and so forth.” The same line of thought is stated in Pagallo [19], p. 156.

  37. 37.

    Aristotle, de Anima.

  38. 38.

    https://plato.stanford.edu/entries/ancient-soul/

  39. 39.

    https://digitalcommons.law.lsu.edu/cgi/viewcontent.cgi?article=1615&context=lalrev

  40. 40.

    Bodin [20].

  41. 41.

    Hamzelou (2016) Exclusive: World’s first baby born with new “3 parent” technique. Available at: https://www.newscientist.com/article/2107219-exclusive-worlds-first-baby-born-with-new-3-parent-technique/. Accessed 11 October 2017.

  42. 42.

    Geldart [6], p. 94; Dewey [21], p. 655.

  43. 43.

Brainternet works by converting electroencephalogram (EEG) signals (brain waves) into an open-source live stream of brain activity. Minors [22] Can you read my mind? Available at: https://www.wits.ac.za/news/latest-news/research-news/2017/2017-09/can-you-read-my-mind. Accessed 11 October 2017.

  44. 44.

    Proposal Ira Winds, Livable Rotterdam alderman.

  45. 45.

    Aziz and Hussain (2014) Qatar’s Showcase of Shame. Available at: https://www.nytimes.com/2014/01/06/opinion/qatars-showcase-of-shame.html?_r=0. Accessed 12 October 2017; The Global Slavery Index [23] https://www.globalslaveryindex.org/findings/. Accessed 12 October 2017.

  46. 46.

On June 14, 1956, the Dutch House of Representatives passed the bill introduced by Minister of Justice J.C. van Oven, making married women legally competent as of January 1, 1957.

  47. 47.

    Descartes [24].

  48. 48.

    Shaun Nichols, Is free will an illusion? https://www.scientificamerican.com/article/is-free-will-an-illusion/

  49. 49.

    Gardner [25].

  50. 50.

    Wechsler [26].

  51. 51.

The Turing Test published by Alan Turing [27] was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in cognitive tasks, sufficient to fool an interrogator.

  52. 52.

    State of New York, Supreme Court, Appellate Division Third Judicial Department. Decided and Entered: December 4, 2014 (518336). Available at: http://decisions.courts.state.ny.us/ad3/Decisions/2014/518336.pdf. Accessed 20 October 2017.

  53. 53.

    Ibidem, p. 6.

  54. 54.

    Ibidem, p. 5: Amadio v Levin, 509 Pa 199, 225, 501 A2d 1085, 1098 [1985, Zappala, J., concurring] [noting that “‘[p]ersonhood’ as a legal concept arises not from the humanity of the subject but from the ascription of rights and duties to the subject”]).

  55. 55.

    The Nonhuman Rights Project (NhRP) further stated: chimps and other select species—bonobos, gorillas, orangutans, dolphins, orcas, and elephants—are not only conscious, but also possess a sense of self, and, to some degree, a theory of mind. They have intricate, fluid social relationships, which are influenced by strategy and the ability to plan ahead, as well as a sense of fairness and an empathetic drive to console and help one another. In many ways (though certainly not all), they are like young children. The NhRP contends, based on this, that chimpanzees are capable of bearing some duties and responsibilities.

  56. 56.

    Dutch Civil Code, Book 3, Article 2a.

  57. 57.

    Article 350 paragraph 2 of the Dutch Penal Code (Wetboek van Strafrecht) and Law of May 19, 2011, on an Integrated Framework for Regulations on Captive Animals and Related Topics (Animals Act).

  58. 58.

    Darling [28].

  59. 59.

Maximilian Koessler, The Person in Imagination or Persona Ficta of the Corporation, 9 La. L. Rev. (1949), no. 4, p. 437 (https://digitalcommons.law.lsu.edu/cgi/viewcontent.cgi?article=1615&context=lalrev).

  60. 60.

    Dewey [21], p. 26.

  61. 61.

    Mayer [29].

  62. 62.

Descartes, Discourse on Method and Meditations on First Philosophy, New Haven & London: Yale University Press (1996), pp. 34–35.

  63. 63.

    Shoyama [30], p. 129.

  64. 64.

    Russell and Norvig [31], pp. 1 and 18; also referring to the following definition of AI: The act of creating machines that perform functions that require intelligence when performed by people. Kurzweil [32].

  65. 65.

    https://bit.ly/2tzJs6M

  66. 66.

    See http://www.ibm.com/watson/ and https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/

  67. 67.

Examples are the molecular machines designed by Prof. Ben Feringa, Nobel laureate in 2016.

  68. 68.

    Bostrom [33].

  69. 69.

    Already in the 1960s this development was predicted: let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any person, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of humans would be left far behind. Thus, the first ultra-intelligent machine is the last invention that humanity need ever make, provided that the machine is docile enough to tell us how to keep it under control. Good [34], cited by Vinge [1].

  70. 70.

    Reference to the Declaration of Amsterdam of the Council, of 14–15 April 2016, on cooperation in the field of connected and automated driving (“Amsterdam Declaration”).

  71. 71.

    Minors [22] Can you read my mind? Available at: https://www.wits.ac.za/news/latest-news/research-news/2017/2017-09/can-you-read-my-mind. Accessed 11 October 2017.

  72. 72.

The Principles of European Tort Law (“PETL”) refer to liability for “auxiliaries” (Art. 6:102), an apt term for robots as well, although in PETL it is meant particularly for people. Article 3:201 of the Draft Common Frame of Reference (DCFR) of the Principles, Definitions and Model Rules of European Private Law refers to workers or “similarly engaged” others, a phrase that may cover cases of accidental damage; see Giliker [35], pp. 38 et seq. The robot would then have to be seen as “another,” with the employer liable on the condition that he still has “the least abstract possibility of directing and supervising its conduct through binding instructions”; Von Bar and Clive [36], pp. 34–55.

  73. 73.

    Schaerer et al. [37], pp. 72–77.

  74. 74.

    https://spectrum.ieee.org/biomedical/imaging/can-machines-be-conscious

  75. 75.

    http://www.kurzweilai.net/pdf/RayKurzweilReader.pdf, p. 91.

  76. 76.

Based on theories by Jeremy Bentham, An Introduction to the Principles of Morals and Legislation, 1789, London, and John Stuart Mill, Utilitarianism, 1861, London.

  77. 77.

The Defense Advanced Research Projects Agency (DARPA) is the agency of the U.S. Department of Defense responsible for developing emerging technologies for military use.

  78. 78.

    Also see: https://futurism.com/brain-based-circuitry-just-made-artificial-intelligence-faster/

  79. 79.

    Erven D onder de Linden en zoon [38], pp. 201–203.

  80. 80.

    Berriat Saint-Prix [8].

  81. 81.

“If a bull gores a man or woman to death, the bull is to be stoned to death, and its meat must not be eaten. But the owner of the bull will not be held responsible.” (Exodus 21:28).

  82. 82.

    Kelly, Schaerer & Gomez, Liability in Robotics: An International Perspective on Robots as animals, paper Nevada University (https://bit.ly/2tkSkxU).

  83. 83.

    Naffine [14], p. 350.

  84. 84.

    Naffine [14], p. 350.

  85. 85.

    Naffine [14], p. 351.

  86. 86.

    Naffine [14], p. 351.

  87. 87.

    Naffine [14], p. 351.

  88. 88.

    Naffine [14], p. 353.

  89. 89.

    Naffine [14], p. 356.

  90. 90.

    Mori [39] The Uncanny Valley: The Original Essay by Masahiro Mori. Available at: https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley. Accessed 15 October 2017.

  91. 91.

    Naffine [14], p. 357.

  92. 92.

    Naffine [14].

  93. 93.

    Naffine [14].

  94. 94.

    Naffine [14].

  95. 95.

    Naffine [14], p. 358.

  96. 96.

    Solum [11], p. 1239.

  97. 97.

    Naffine [14], p. 362.

  98. 98.

    Naffine [14], p. 364.

  99. 99.

    Naffine [14], p. 364.

  100. 100.

    Naffine [14], p. 364.

  101. 101.

    Naffine [14], p. 365.

  102. 102.

    Naffine [14], p. 351.

  103. 103.

    Solum [11], p. 1239.

  104. 104.

    Solum [11], p. 1239.

  105. 105.

    Safi [40].

  106. 106.

    Solum [11], p. 1239.

  107. 107.

    Solum [11], p. 1239.

  108. 108.

    Solum [11], p. 1239.

  109. 109.

    Solum [11], p. 1239.

  110. 110.

    Lovejoy [41].

  111. 111.

    Solum [11], p. 1260.

  112. 112.

    Solum [11], p. 1261.

  113. 113.

    Naffine [14], p. 362.

  114. 114.

See, e.g., the robot Sophia of Hanson Robotics (compare “Ava”: Bush, E. (Producer), & Garland, A. (Director). (2014). Ex Machina [Motion Picture]. United States).

  115. 115.

    Solum [11], p. 1269.

  116. 116.

    Solum [11], p. 1264.

  117. 117.

    This project aims to build a biohybrid architecture, where natural and artificial neurons are linked and work together to replace damaged parts of the brain (https://ec.europa.eu/digital-single-market/en/news/artificial-neurons-replace-and-assist-damaged-parts-human-brain).

  118. 118.

    Solum [11], p. 1265.

  119. 119.

    Solum [11], p. 1266.

  120. 120.

    Solum [11], p. 1266.

  121. 121.

    Solum [11], p. 1269.

  122. 122.

    Naffine [14], p. 364.

  123. 123.

    Naffine [14], p. 364.

  124. 124.

    2018 US overview state legislation: http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx. Also: EU Common Approach on the liability rules and insurance related to the Connected and Autonomous Vehicle EP study (http://www.europarl.europa.eu/RegData/etudes/STUD/2018/615635/EPRS_STU(2018)615635_EN.pdf).

  125. 125.

    Pagallo [19], p. 3 (referring to Wiener [42]).

  126. 126.

    Voulon [43].

  127. 127.

    Raskin [44], p. 10 (citing Segrave [45]).

  128. 128.

Ibidem, pp. 10–11 (referring to Carlile [46]).

  129. 129.

    Ibidem.

  130. 130.

Chopra and White [47], p. 130, correctly remark, “to apply the respondeat superior doctrine to a particular situation would require the artificial agent in question to be one that has been understood by virtue of its responsibilities and its interactions with third parties as acting as a legal agent for its principal.” Pagallo [19], p. 132.

  131. 131.

    An obligatory insurance scheme, which could be based on the obligation of the producer to take out insurance for the autonomous robots it produces, should be established. The insurance system should be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists. RR\1115573EN.docx, p. 20.

  132. 132.

    Ibidem.

  133. 133.

    Future of Life Institute [48] An Open Letter To The United Nations Convention On Certain Conventional Weapons. Available at: https://futureoflife.org/autonomous-weapons-open-letter-2017/. Accessed 21 August 2017.

  134. 134.

    Voulon [43], concluding his dissertation.

  135. 135.

    Section 102 (a) (27) Uniform Computer Information Transaction Act (UCITA).

  136. 136.

    Going back to Teubner’s analysis in the Rights of Nonhumans?, the entry of new actors on the legal scene concerns all the nuances of legal agenthood, such as “distinctions between different graduations of legal subjectivity, between mere interests, partial rights and full-fledged rights, between limited and full capacity for action, between agency, representation and trust, between individual, group, corporate and other forms of collective responsibility.” Pagallo [19], p. 153 (referring to Teubner [49]).

  137. 137.

    Hildebrandt and Gaakeer [50], p. 60.

  138. 138.

    Randal Koene (http://rak.minduploading.org/ and https://read.bi/2lKAMqS).

  139. 139.

    David C. Parkes and Michael P. Wellman, Economic reasoning and artificial intelligence, Science 17 July 2015: Vol. 349, Issue 6245, pp. 267–272. DOI: https://doi.org/10.1126/science.aaa8403.

  140. 140.

See, e.g., the “PREPARE” project (https://www.darpa.mil/news-events/2018-05-25).

  141. 141.

See, e.g., ISO 13482:2014, which specifies requirements and guidelines for the inherently safe design, protective measures, and information for use of personal care robots, in particular the following three types of personal care robots: mobile servant robot, physical assistant robot, and person carrier robot.

  142. 142.

    Human and robot system interaction in industrial settings is now possible thanks to ISO/TS 15066, a new ISO technical specification for collaborative robot system safety.

  143. 143.

    Hildebrandt and Gaakeer [50], p. 7.

  144. 144.

    Regulation (EU) 2016/679.

  145. 145.

The concluding recommendation of the Science and Technology Committee is of interest: “73. We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression. It will need to be closely coordinated with the work of the Council of Data Ethics which the Government is currently setting up following the recommendation made in our Big Data Dilemma report.

    74. Membership of the Commission should be broad and include those with expertise in law, social science and philosophy, as well as computer scientists, natural scientists, mathematicians and engineers. Members drawn from industry, NGOs and the public, should also be included and a programme of wide ranging public dialogue instituted.” Available at: https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14506.htm#_idTextAnchor014. Accessed 25 October 2017.

  146. 146.

    “Because AI can fundamentally impact a person’s life, moves should be undertaken to ensure that the transparency of AI programs is standard, particularly when AI is used to make a decision affecting people or impact how people live their lives. The public must always be fully aware of when they are subject to, or affected or impacted by a decision made by AI. Increased transparency and accountability of public-facing AI, including the methods behind the system, and the reasons for decisions, will not only benefit society as a whole in terms of open source information but will increase public trust and confidence and subsequently, public engagement with AI systems.” (https://www.parliament.uk/documents/lords-committees/Artificial-Intelligence/AI-Written-Evidence-Volume.pdf), p. 140.

  147. 147.

    https://www.law.ox.ac.uk/business-law-blog/blog/2017/04/rise-robots-and-law-humans

  148. 148.

    The proposed code of ethical conduct in the field of robotics will lay the groundwork for the identification, oversight, and compliance with fundamental ethical principles from the design and development phase. EP motion, PE582.443v03-00, p. 21.

  149. 149.

    https://bit.ly/2xfMToe

  150. 150.

    Harari [51], last sentences.

References

  1. Vinge V. The coming technological singularity: how to survive in the post-human era. NASA, Lewis Research Center, Vision 21. 1993. p. 11–22. Available at: https://edoras.sdsu.edu/~vinge/misc/singularity.html. Accessed 25 Oct 2017.

  2. Clarke AC. Profiles of the future: an inquiry into the limits of the possible. New York: Harper & Row; 1973.


  3. Miller CA, Bennett I. Thinking longer term about technology: is there value in science fiction-inspired approaches to constructing futures? Sci Public Policy. 2008;35(8):597–606.


  4. Burkhart L. Symposium – governance of emerging technologies: law, policy, and ethics. Jurimetrics. 2016;56:219–22. Available at: https://www.americanbar.org/content/dam/aba/administrative/science_technology/2016/governance_in_emerging_technologies.authcheckdam.pdf. Accessed 12 Sept 2017.

Tjong Tjin Tai TFE. Private law for homo digitalis: use and maintenance. Preliminary advice for the NJV. 2016. p. 248.


  6. Geldart WM. Legal personality. Law Q Rev. 1911;27:90–108.


  7. Richards NM, King JH. Big data ethics. Wake Forest Law Rev. 2014;49:393–432.


  8. Berriat Saint-Prix J. Rapport et Recherches sur les Procès et Jugemens Relatifs aux Animaux. Paris: Imprimerie de Selligue; 1829.


  9. Delvaux M. Report PE582.443v01-00 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). 2017. Available at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0005+0+DOC+PDF+V0//EN. Accessed 8 Dec 2017.

  10. Bertolini A. Robots as products: the case for a realistic analysis of robotic applications and liability rules. Law Innov Technol. 2013;5(2):214–27.


Solum LB. Legal personhood for artificial intelligences. North Carol Law Rev. 1992;70(4):1231–87.


  12. Ohlin JD. Is the concept of person necessary for human rights? Columbia Law Rev. 2005;105:209–49.


  13. Fagundes D. What we talk about when we talk about persons: the language of a legal fiction. Harv Law Rev. 2001;114(6):1745–68.


  14. Naffine N. Who are law’s persons? From Cheshire Cats to responsible subjects. Mod Law Rev. 2003;66(3):346–67.


  15. Hutchinson A. The Whanganui River as a legal person. Altern Law J. 2014;39(3):179–82.


  16. Brownlie I. Principles of public international law. London: Clarendon Press; 1990.


  17. Crawford JR. Brownlie’s principles of public international law. 8th ed. Oxford: Oxford University Press; 2012.


  18. Hobbes T. Chapter xvi: of persons, authors, and things personated. In: Hobbes T, editor. Leviathan. London: Andrew Crooke; 1651.


  19. Pagallo U. The laws of robots: crimes, contracts, and torts. Dordrecht: Springer; 2013.


  20. Bodin J. Les Six Livres de la Republique (Translation by MJ Tooley). Oxford: Blackwell; 1955.


Dewey J. The historic background of corporate legal personality. Yale Law J. 1926;35(6):655–73.


  22. Minors D. Can you read my mind? 2017. Available at: https://www.wits.ac.za/news/latest-news/research-news/2017/2017-09/can-you-read-my-mind. Accessed 11 Oct 2017.

  23. The Global Slavery Index. 2016. Available at: https://www.globalslaveryindex.org/findings/. Accessed 12 Oct 2017.

  24. Descartes R. Principia philosophiae. Paris: Vrin; 1973.


  25. Gardner H. The theory of multiple intelligences. New York: Basic Books; 1993.


  26. Wechsler D. The range of human capacities. Baltimore: Williams & Wilkins; 1955.


  27. Turing AM. Computing machinery and intelligence. Mind, New Series. 1950;59(236):433–60.


  28. Darling K. Electronic love, trust, & abuse: social aspects of robotics. Workshop “We Robot” at the University of Miami. 2016.


  29. Mayer CJ. Personalizing the impersonal: corporations and the bill of rights. Hastings Law J. 1990;41(3):577–667.


  30. Shoyama. Intelligent agents: authors, makers, and owners of computer-generated works in Canadian copyright law. Can J Law Technol. 2005;4(2):129.


  31. Russell S, Norvig P. Artificial intelligence: a modern approach. 3rd ed. Upper Saddle River, NJ: Pearson Education; 2010.


  32. Kurzweil R. The age of intelligent machines. Cambridge: The MIT Press; 1990.


  33. Bostrom N. Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press; 2014.


  34. Good IJ. Speculations concerning the first ultraintelligent machine. In: Alt FL, Rubinoff M, editors. Advances in computers, vol. 6. New York: Academic Press; 1965. p. 31–88.


  35. Giliker P. Vicarious liability or liability for the acts of others in tort: a comparative perspective. J Eur Tort Law. 2011;2(1):31–56.


Von Bar C, Clive E, editors. Principles, definitions and model rules of European private law: Draft Common Frame of Reference (DCFR). Munich: Sellier European Law Publishers; 2009.


Schaerer E, Kelley R, Nicolescu M. Robots as animals: a framework for liability and responsibility in human-robot interaction. In: RO-MAN 2009 – The 18th IEEE International Symposium on Robot and Human Interactive Communication; 2009. p. 72–7.


  38. Erven D onder de Linden en zoon. Boekzaal der geleerde wereld: en tijdschrift voor de Protestantsche kerken in het koningrijk der Nederlanden. 1831. p. 201–3.


  39. Mori M. The uncanny valley: the original essay by Masahiro Mori. 2012. Available at: https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley. Accessed 15 Oct 2017.

  40. Safi M. Ganges and Yamuna rivers granted same legal rights as human beings. 2017. Available at: https://www.theguardian.com/world/2017/mar/21/ganges-and-yamuna-rivers-granted-same-legal-rights-as-human-beings. Accessed 13 May 2017.

  41. Lovejoy AO. The great chain of being: a study of the history of an idea. Cambridge: Harvard University Press; 1936.


  42. Wiener N. The human use of human beings. London: Eyre & Spottiswoode; 1950.


  43. Voulon MB. Automatisch contracteren. Dissertation, Leiden University. 2010.


  44. Raskin M. The law and legality of smart contracts. Georgetown Law Technol Rev. 2017;304(1). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2959166. Accessed 20 Oct 2017.

  45. Segrave K. Vending machines: an American social history. Jefferson, NC: McFarland; 2002.


  46. Carlile R. To the republicans of the Island of Great Britain. Republican. 1822;16(V), (see also chapter 10, digital version [https://bit.ly/2CduCY1]).

  47. Chopra S, White LF. A legal theory for autonomous artificial agents. Ann Arbor, MI: The University of Michigan Press; 2011.


  48. Future of Life Institute. An open letter to the United Nations convention on certain conventional weapons. 2017. Available at: https://futureoflife.org/autonomous-weapons-open-letter-2017/. Accessed 21 Aug 2017.

  49. Teubner G. Rights of non-humans? Electronic agents and animals as new actors in politics and law. Florence: European University Institute; 2007.


  50. Hildebrandt M, Gaakeer J, editors. Human law and computer law: comparative perspectives. Dordrecht: Springer; 2013.


  51. Harari YN. Homo Deus: a brief history of tomorrow. London: Random House; 2017.


  52. Asimov I. The bicentennial man and other stories. London: Victor Gollancz; 1976.


  53. Asimov I, Silverberg R. The positronic man. New York: Doubleday; 1993.


  54. Bryson JJ, Diamantis ME, Grant TD. Of, or, and by the people: the legal lacuna of synthetic persons. Artif Intell Law. 2017;25:273–91.


  55. Science and Technology Committee. Robotics and artificial intelligence. 2016. Available at: https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14506.htm#_idTextAnchor014. Accessed 25 Oct 2017.


Author information


Correspondence to Robert van den Hoven van Genderen.



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

van den Hoven van Genderen, R. (2019). Does Future Society Need Legal Personhood for Robots and AI?. In: Ranschaert, E., Morozov, S., Algra, P. (eds) Artificial Intelligence in Medical Imaging. Springer, Cham. https://doi.org/10.1007/978-3-319-94878-2_18


  • DOI: https://doi.org/10.1007/978-3-319-94878-2_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-94877-5

  • Online ISBN: 978-3-319-94878-2

  • eBook Packages: Medicine; Medicine (R0)
