Abstract
What are the prospects for AI lawyers in the true sense: autonomous, decision-making agents that can legally advise or represent us? This chapter examines the problems and possibilities of creating such systems. The idea faces a multitude of challenges, among which translating law into an algorithm is the most fundamental prerequisite for building an AI lawyer. The chapter examines the linguistic aspects of such a translation before turning to the ethics of creating such lawyers and of codifying their conduct, followed by a brief deliberation on whether Asimov’s Three Laws of Robotics would be helpful in this regard. The ethical discussion results in a proposal for a concept of Fairness by Design, conceived as the minimum standard of ethical behavior instilled in all AI agents. The chapter also gives a general overview of current state-of-the-art AI technologies employed in the legal domain and imagines the future of AI in law. Subsequently, the chapter imagines an AI agent dealing with and resolving the “Solomon test” of splitting the baby. Finally, it concludes that the advantage of having AI lawyers lies in the possibility of redefining the legal profession in its entirety and of making legal advice and justice more accessible to all.
Notes
- 1. Asimov (1950), p. 189.
- 2. Weller (2016).
- 3. Kennedy (2017), p. 170.
- 4.
- 5. Kennedy (2017), p. 172.
- 6. Anderson (2011), p. 294.
- 7.
- 8. Cavoukian (2010).
- 9.
- 10. See also Mommers et al. (2009).
- 11. Franklin and Graesser (1997), quoted in Brożek and Jakubiec (2017), p. 293.
- 12. Franklin and Graesser (1997), quoted in Brożek and Jakubiec (2017), p. 294.
- 13. Lame (2004), p. 382.
- 14. Bibel (2004), p. 163.
- 15. See Mommers et al. (2009), p. 52.
- 16. McGinnis and Wasick (2015), p. 993.
- 17. McGinnis and Wasick (2015), p. 997.
- 18. Bibel (2004), p. 164.
- 19. Mommers et al. (2009), p. 53.
- 20. Susskind (2017), p. 191.
- 21. See also Susskind (2017).
- 22. This principle is applicable in legal search as in any other. See also McGinnis and Wasick (2015), p. 1017.
- 23. Bouaziz et al. (2018), p. 2.
- 24. IBM Watson. Available at: https://www.ibm.com/watson/. Accessed 27 April 2018.
- 25. IBM Systems & Technology Group (2011), p. 5.
- 26. Mommers et al. (2009), p. 55.
- 27. See Wettig and Zehendner (2004).
- 28. See Footnote 13.
- 29. Lame (2004), p. 395.
- 30. Baude and Sachs (2017), p. 1085.
- 31. Baude and Sachs (2017), p. 1083.
- 32. Husa (2016), p. 263.
- 33. See Footnote 30.
- 34. Baude and Sachs (2017), p. 1123.
- 35. See Footnote 14.
- 36. See Footnote 14.
- 37. See Footnote 14.
- 38. See Footnote 10.
- 39. See Footnote 26.
- 40. Baude and Sachs (2017), p. 1088.
- 41. Husa (2016), p. 270.
- 42. See Footnote 19.
- 43. Moore (2010), p. 4.
- 44. Moore (2010), p. 7.
- 45. Kac and Ulam (1968), p. 4.
- 46. Hodel (1995), p. 1.
- 47. Henket (2003), p. 1.
- 48. Leith (1988), p. 32.
- 49. Bibel (2004), p. 176.
- 50. See Footnote 49.
- 51. For the sake of the research questions, it is vital to note that this chapter uses a generalized view of the discipline of law.
- 52. Leith (1991), p. 201.
- 53. Trevor et al. (2006), p. 1.
- 54. Leith (1988), p. 34.
- 55. See Footnote 54.
- 56. Leith (1988), p. 35.
- 57. Stolpe (2010), p. 247.
- 58. Lame (2004), p. 380.
- 59. See Footnote 58.
- 60. Mommers et al. (2009), p. 72.
- 61. Mommers et al. (2009), p. 75.
- 62. Wallach and Allen (2008), p. 23.
- 63. Wallach and Allen (2008), p. 17.
- 64. Guarini (2013), p. 213.
- 65. Wallach and Allen (2008), p. 13.
- 66. Nyholm and Smids (2016), p. 1288.
- 67. Nyholm and Smids (2016), p. 1285.
- 68. Nyholm and Smids (2016), p. 1280.
- 69. Shulman et al. (2009), p. 1.
- 70. Anderson and Anderson (2007), p. 1.
- 71. See also Anderson and Anderson (2007), p. 4.
- 72. See also Anderson and Anderson (2007), p. 2.
- 73. Anderson (2011), p. 287.
- 74. Anderson and Anderson (2007), p. 4.
- 75. Anderson and Anderson (2007), p. 1. The example used here is one of care-robots being developed for elderly homes in the United States, and emphasis is placed on the need to instill ethical principles in those robots to ensure proper care and safety.
- 76. Asimov (1950).
- 77. Clarke (2011), p. 256.
- 78. Clarke (2011), p. 259.
- 79.
- 80.
- 81. Clarke (2011), p. 260.
- 82. Clarke (2011), p. 272.
- 83. Nunez (2017), p. 204.
- 84. This is not the case today, as no AI is actually intelligent. See Storrs Hall (2011), p. 512.
- 85. See Footnote 8.
- 86. Mackworth (2011), p. 347.
- 87. Storrs Hall (2011), p. 522.
- 88. Storrs Hall (2011), p. 523.
- 89. Asimov (1950), p. 195.
- 90.
- 91. See Anderson (2011).
- 92. See Bibel (2004).
- 93.
- 94. Ambrogi (2017).
- 95. See Oskamp and Lauritsen (2002).
- 96. See also Leith (1988), p. 34.
- 97. Remus and Levy (2016), p. 9.
- 98. Henket (2003), p. 131.
- 99. Oskamp and Lauritsen (2002), p. 232.
- 100. See Footnote 99.
- 101.
- 102. ROSS. Available at: https://rossintelligence.com. Accessed 27 April 2018.
- 103. Nunez (2017), p. 193.
- 104. Nunez (2017), p. 194.
- 105. Oskamp and Lauritsen (2002).
- 106. See also Henket (2003).
- 107. Henket (2003), p. 128; Henket beautifully describes and exemplifies ways in which humans will be essential to the development and upkeep of these systems, at least for the foreseeable future.
- 108. Henket (2003), p. 128.
- 109. Henket (2003), p. 129.
- 110. Oskamp et al. (1995).
- 111. Neural networks are described as “an artificial imitation of their biological model, the brains and nervous systems of humans and animals. The ability to learn and the use of parallelism during data processing are important characteristics.” Wettig and Zehendner (2004).
- 112. See also Bibel (2004).
- 113. Henket (2003), p. 136.
- 114. Leeson (2017), p. 41.
- 115. See Footnote 114.
- 116. For a good analysis of the role of evidence with reference to Solomon’s judgment, see LaRue (2004).
- 117. LaRue (2004).
- 118. See Footnote 117.
- 119. See also Clarke (2011), p. 280.
- 120. See Footnote 119.
References
Ambrogi, R. (2017). Fear not, lawyers, AI is not your enemy. https://abovethelaw.com/2017/10/fear-not-lawyers-ai-is-not-your-enemy/?rf=1. Accessed December 1, 2017.
Anderson, S. L. (2008). Asimov’s “Three laws of robotics” and machine metaethics. AI & Society, 22(4), 477–493.
Anderson, S. L., & Anderson, M. (2007). The consequences for human beings of creating ethical robots. Sine loco.
Anderson, S. L. (2011). The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
Asimov, I. (1950). I, Robot. London: Harper Voyager.
Asimov, I. (1964). The rest of the robots. New York: Collins.
Baude, W., & Sachs, S. E. (2017). The law of interpretation. Harvard Law Review, 130(4), 1082–1147.
Bibel, L. W. (2004). AI and the conquest of complexity in law. Artificial Intelligence Law, 12, 159–180.
Bouaziz, J., et al. (2018). How artificial intelligence can improve our understanding of the genes associated with endometriosis: Natural language processing of the PubMed database. Hindawi BioMed Research International, 2018. Article ID 6217812.
Brożek, B., & Jakubiec, M. (2017). On the Legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293–304.
Cavoukian, A. (2010). Privacy by design: The definitive workshop. Identity in the Information Society, 3, 247–251.
Cellan-Jones, R. (2017). The robot-lawyers are here—And they’re winning, BBC news technology. http://www.bbc.com/news/technology-41829534. Accessed January 1, 2017.
Clarke, R. (2011). Asimov’s laws of robotics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
Farivar, C. (2016). Lawyers: New court software is so awful it’s getting people wrongly arrested. Ars Technica. https://arstechnica.com/tech-policy/2016/12/court-software-glitches-result-in-erroneous-arrests-defense-lawyers-say/. Accessed January 1, 2018.
Fingas, J. (2017). Parking ticket chat bot now helps refugees claim asylum. Engadget https://www.engadget.com/2017/03/06/parking-ticket-chat-bot-now-helps-refugees-claim-asylum/. Accessed January 1, 2018.
Guarini, M. (2013). Introduction: Machine ethics and the ethics of building intelligent machines. Topoi, 32, 213.
Henket, M. (2003). Great expectations: AI and law as an issue for legal semiotics. International Journal for the Semiotics of Law, 16(2), 123–138.
Hodel, R. E. (1995). An introduction to mathematical logic. Mineola, New York: Dover Publications Inc.
Husa, J. (2016). Translating legal language and comparative law. International Journal for the Semiotics of Law, 30(2), 261–272.
IBM Systems & Technology Group. (2011). White paper: Watson—A system designed for answers, The future of workload optimized systems design. http://www-03.ibm.com/innovation/us/engines/assets/9442_Watson_A_System_White_Paper_POW03061-USEN-00_Final_Feb10_11.pdf. Accessed 1 January 2018.
Kac, M., & Ulam, S. M. (1968). Mathematics and logic. Mineola, New York: Dover Publications Inc.
Kasperkevic, J. (2015). Google says sorry for racist auto-tag in photo app. The Guardian https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app. Accessed January 1, 2018.
Kennedy, R. (2017). Algorithms and the rule of law. Legal Information Management, 17(3), 170–172.
Lame, G. (2004). Using NLP techniques to identify legal ontology components: Concepts and relations. Artificial Intelligence and Law, 12(4), 379–396.
LaRue, L. H. (2004). Solomon’s judgment: A short essay on proof. Law, Probability and Risk, 3(1), 13–31.
Leeson, P. T. (2017). Split the baby, drink the poison, carry the hot iron, swear on the Bible, adapted from WTF?! An economic tour of the weird. Stanford: Stanford University Press.
Leith, P. (1988). The application of AI to law. AI & Society, 2(1), 31–46.
Leith, P. (1991). The computerized lawyer: A guide to the use of computers in the legal profession. London: Springer.
Mackworth, A. K. (2011). Architectures and ethics for robots, constraint satisfaction as a unitary design framework. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
McGinnis, J. O., & Wasick, S. (2015). Law’s algorithm. Florida Law Review, 66, 991–1050.
Mommers, L., et al. (2009). Understanding the law: Improving legal knowledge dissemination by translating the formal sources of law. Artificial Intelligence Law., 17(1), 51–78.
Moore, M. E. (Ed.). (2010). Philosophy of mathematics: Selected writings of Charles S. Peirce. Bloomington and Indianapolis: Indiana University Press.
Nunez, C. (2017). Artificial intelligence and legal ethics: Whether AI Lawyers can make ethical decisions. Tulane Journal of Technology & Intellectual Property, 20, 189–204.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Journal of Ethic Theory and Moral Practice, 19(5), 1275–1289.
Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10(4), 227–236.
Oskamp, A., Tragter, M., & Groendijk, C. (1995). AI and law: What about the future? Artificial Intelligence and Law, 3(3), 209–215.
Remus, D., & Levy, F. (2016). Can robots be lawyers? Computers, lawyers and the practice of law. https://ssrn.com/abstract=2701092. Accessed January 1, 2018.
Shulman, C., Jonsson H., & Tarleton, N. (2009). Machine ethics and superintelligence. In C. Reynolds, & A. Cassinelli (Eds.), AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, Oct 1–2, University of Tokyo, Japan, Proceedings, pp. 95–97.
Stolpe, A. (2010). Norm system revision: Theory and application. Artificial Intelligence and Law, 18(3), 247–283.
Storrs Hall, J. (2011). Ethics for self-improving machines. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
Susskind, R. (2017). Tomorrow’s lawyers. Oxford: Oxford University Press.
Trevor, J. M., Bench-Capon, T. J. M., & Dunne, P. E. (2006). Argumentation in AI and law: Editor’s introduction. Artificial Intelligence and Law, 13(1), 1–8.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Weller, C. (2016). The world’s first artificially intelligent lawyer was just hired at a law firm. http://www.businessinsider.com/the-worlds-first-artificially-intelligent-lawyer-gets-hired-2016-5?r=US&IR=T&IR=T. Accessed December 1, 2017.
Weng, Y. H., Chen, C. H., & Sun, C. T. (2009). Toward the human-robot co-existence society: On safety intelligence for next generation robots. International Journal of Social Robotics, 1, 267–282.
Wettig, S., & Zehendner, E. (2004). A legal analysis of human and electronic agents. Artificial Intelligence and Law, 12, 111–135.
Winick, E. (2017). Lawyer-bots are shaking up jobs. MIT Technology Review. https://www.technologyreview.com/s/609556/lawyer-bots-are-shaking-up-jobs/. Accessed January 1, 2017.
Acknowledgements
My gratitude goes out to Fredrik Persson and Andrej Bracanović for their input and immense patience during my many brainstorming sessions about robot lawyers.
© 2018 Springer Nature Singapore Pte Ltd.
Dervanović, D. (2018). I, Inhuman Lawyer: Developing Artificial Intelligence in the Legal Profession. In: Corrales, M., Fenwick, M., Forgó, N. (eds) Robotics, AI and the Future of Law. Perspectives in Law, Business and Innovation. Springer, Singapore. https://doi.org/10.1007/978-981-13-2874-9_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-2873-2
Online ISBN: 978-981-13-2874-9
eBook Packages: Law and Criminology