Encyclopedia of the Philosophy of Law and Social Philosophy

Living Edition
Editors: Mortimer Sellers, Stephan Kirste

Artificial Intelligence

  • Leonardo Parentoni
Living reference work entry
DOI: https://doi.org/10.1007/978-94-007-6730-0_745-1

Introduction: A Bit of History

The first scholarly inquiries into artificial intelligence (“AI”) date back to 1943 and are linked to the work of Warren McCulloch and Walter Pitts in the United States (Russell and Norvig 2010, p. 16). However, the expression “artificial intelligence” was first used in 1956, when John McCarthy adopted it during a lecture at Dartmouth College in Hanover, New Hampshire (Kaplan 2016, p. 13). Many other papers on the subject were published in the following years, as described by Nilsson (2009, pp. 71–85).

It is worth discussing the “Turing test” – also known as the “imitation game” (Turing 1950, pp. 433–460). In 1950, Alan Turing designed a test to determine whether a machine had evolved to the point of producing results indistinguishable from those produced by humans. A machine that passes this test could, according to Turing, be called “intelligent.” Despite its historical importance, the Turing test is controversial. On one hand, some scholars argue that it is useless because of various methodological flaws (Hayes and Ford 1995). On the other hand, tests adapting Turing’s idea to the current level of technological development are still in use. In one of them, a software chatbot called “Eugene Goostman” was claimed to have passed the test in 2014 (Sample and Hern 2014). The chatbot posed as a 13-year-old boy and allegedly persuaded 10 out of 30 judges at an event held at the Royal Society in London that it was human.

Although AI has existed since the 1940s, progress in the following decades was so modest that periods of stagnation became known as “AI winters” (Russell and Norvig 2010, p. 24). Systems from that era did not even remotely resemble the kind of applications we now recognize as AI. Interestingly, however, AI systems tend to be socially recognized as AI only while they are new, when they begin performing tasks that had previously been restricted to humans. Once software performing those tasks becomes routine, people no longer call it AI. Chess-playing software, for example, was one of the first AI gaming applications; today, such programs are no longer associated with AI.

It is no coincidence that the most remarkable developments in the field are happening in the twenty-first century. Indeed, the transition from an analog to a digital society, accelerated in this century, was the key factor in the fast evolution of AI systems (Calo 2017, p. 4). Dramatic increases in hardware performance and in the production of digital data, coupled with falling data storage costs and real-time network connections, helped create an environment in which AI flourishes.

Definition of Artificial Intelligence

There is no consensus about the definition of AI. In broad terms, Ryan Calo conceptualizes it as “a set of techniques aimed at approximating some aspect of human or animal cognition using machines” (2017, p. 4). In colloquial terms, AI applications aim to automate human tasks, performing them faster, more accurately, and more safely than humans do. Some AI applications even go as far as performing tasks that humans cannot do at all, due to our biological limitations.

AI systems range from ordinary applications (e.g., cellphone voice assistants) to very sophisticated systems capable of driving cars, performing medical diagnostics, profiling people, or even controlling entire sectors of a given industry. In fact, the term AI encompasses a wide range of techniques across many scientific areas, especially computer science. There are various subfields, such as robotics, machine learning, neural networks, computer vision, and facial and speech recognition. Each of these subfields, due to its complexity, could be the subject of a separate entry in this encyclopedia.

There are also different ways to deliver an AI product or service to the market. It may be software that runs “in the cloud” with no physical component (usually called a “bot”), or it may have a physical structure (an embodied robot). Among physical AI applications, some are machines like those used in manufacturing, while others resemble animals or human beings (androids or humanoids). The point here is simply that AI is not a unitary concept. Instead, it is an umbrella term that gathers very distinct technologies and ways of delivering them to the market.

Although far from the current state of technological development, there is discussion about AI systems advancing toward self-development. Currently, AI systems are designed to carry out specific tasks (“weak” or “narrow” AI), rather than learning to perform any activity the way humans do (“strong” or “general” AI). A weak AI system can vastly surpass human capabilities at the specific task it was designed to perform, yet it is extremely difficult for that system to perform completely different activities. For example, software designed to play a game can beat the human world champion many times in a row, but the same software could not do tasks as simple as washing a glass or answering a call. The more task-oriented an application is, the more accurate it tends to be, at least at the current stage of technological development. Nevertheless, some scholars argue that the next evolutionary step is AI systems fully conscious of themselves and capable of learning totally different skills from scratch, without any human supervision (Kurzweil 2005; Bostrom 2014). If ever achieved, this evolutionary stage would be called the singularity (Kaplan 2016, p. 138). Some authors note that “in utopian versions of digital consciousness, we humans don’t fight with machines; we join with them, uploading our brains into the cloud and otherwise becoming part of a ‘technological singularity’. (…) Once this happens, things become highly unpredictable. Machines could become self-aware, humans and computers could merge seamlessly, or other fundamental transitions could occur” (Brynjolfsson and McAfee 2016, p. 255).

Legal Questions Related to Artificial Intelligence

AI is undoubtedly revolutionizing the legal field. Many legal activities are being – and will continue to be – deeply affected by it. These impacts can be briefly clustered into two major groups: (1) those affecting how we perform legal tasks and (2) those related to who performs these tasks.

The first group comprises AI systems designed to help legal professionals perform their jobs. Although scholars disagree about which activities will be most affected and how each will be impacted by a given technology, it is undisputed that these effects tend to spread and intensify in the coming years (Susskind 2013, p. 3). The changes range from the automation of document drafting and legal counseling to new software designed to help judges decide cases. The first empirical studies of this transition have already been undertaken (Remus and Levy 2016). More daring proposals advocate a deep change in the legal system itself, altering the very way law is understood, produced, and applied (Casey and Niblett 2017).

The second group consists of AI systems allegedly able to replace legal professionals at certain tasks, raising the issue of unemployment. This group includes online platforms with software designed to provide various legal services. In the most extreme cases, some scholars have discussed the possibility of attributing legal personhood to certain AI systems (Solum 1992), since they are theoretically able to act as independent agents and to make unpredictable decisions – so-called “emergent behavior” (Zimmerman 2016).

Regulating AI

Various instruments are available to regulate AI, ranging from market self-regulation to strict statutes. There is currently no worldwide consensus on the optimal approach. The USA, the UK, the European Union, and China, for example, have adopted distinct strategies, though these overlap in some respects (Cath et al. 2017), such as the concern about unemployment and the idea that regulation of any kind should foster technological development.

Regulating AI is not as simple as coding the so-called “Asimov’s laws” into AI software (Balkin 2017, p. 3). Human values and differing expectations play an important role and vary across countries, cultures, and time.

Although there is no “one size fits all” solution, some steps could help clarify what constitutes desirable regulation. First, it is important to decide who should regulate (the government, an autonomous agency, the market through self-regulation, some mix of these, etc.). Second, the object of regulation must be precisely defined (software only, all embodied robots, humanoids only, etc.). Third, the scope of regulation must be determined (local, national, or international). Fourth, an ex ante or ex post strategy must be chosen. Fifth, it must be decided whether the regulation will be general (an “omnibus law” encompassing all kinds of AI systems across the most varied markets) or sector-specific. Sixth, it must be determined whether there will be registration requirements and who is liable for a failure to register (Scherer 2016).

These steps are not a full blueprint for regulating AI, nor are they intended to be. They are merely a roadmap for initiating the discussion.

Conclusion

Due to its potential – and its risks – AI is a matter of global concern. It is crucial for governments, companies, third-sector organizations, and political communities to decide what kind of society we want to deliver to future generations, which values should guide the evolution of AI systems, and what limits and constraints we should impose on them (while we still can). There is no single straightforward answer to these questions. And that is excellent, because it compels us to think, talk, and collaborate, reminding everyone of what makes us human.

References

  1. Balkin JM (2017) The three laws of robotics in the age of big data. Yale Law School research paper no 592, New Haven
  2. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
  3. Brynjolfsson E, McAfee A (2016) The second machine age: work, progress, and prosperity in a time of brilliant technologies. Norton & Company, New York
  4. Calo R (2017) Artificial intelligence policy: a primer and roadmap. University of Washington School of Law research paper, Seattle
  5. Casey AJ, Niblett A (2017) The death of rules and standards. Indiana Law J 92(4):1401–1447
  6. Cath C et al (2017) Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Sci Eng Ethics 23(2):1–24
  7. Hayes P, Ford K (1995) Turing test considered harmful. In: Proceedings of the 14th international joint conference on artificial intelligence – IJCAI-95, vol 1. Morgan Kaufmann, Montreal, pp 972–977
  8. Kaplan J (2016) Artificial intelligence: what everyone needs to know. Oxford University Press, Oxford
  9. Kurzweil R (2005) The singularity is near: when humans transcend biology. Penguin Group, New York
  10. Nilsson NJ (2009) The quest for artificial intelligence: a history of ideas and achievements. Cambridge University Press, Cambridge
  11. Remus D, Levy F (2016) Can robots be lawyers? Computers, lawyers, and the practice of law. Massachusetts Institute of Technology research paper, Cambridge, MA
  12. Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice-Hall, Upper Saddle River
  13. Sample I, Hern A (2014) Scientists dispute whether computer ‘Eugene Goostman’ passed Turing test. The Guardian, 9 June
  14. Scherer MU (2016) Regulating artificial intelligence systems: risks, challenges, competencies and strategies. Harv J Law Technol 29(2):353–400
  15. Solum LB (1992) Legal personhood for artificial intelligences. N C Law Rev 70(4):1231–1288
  16. Susskind R (2013) Tomorrow’s lawyers: an introduction to your future. Oxford University Press, Oxford
  17. Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460
  18. Zimmerman EJ (2016) Machine minds: frontiers in legal personhood. University of Chicago Law School working paper, Chicago

Copyright information

© Springer Nature B.V. 2020

Authors and Affiliations

  1. Federal University of Minas Gerais – UFMG, Belo Horizonte, Brazil

Section editors and affiliations

  • Marcelo Galuppo, College of Law, Federal University of Minas Gerais, Belo Horizonte, Brazil
  • Vitor Medrado, Pontifical Catholic University of Minas Gerais, Belo Horizonte, Brazil