
I, Inhuman Lawyer: Developing Artificial Intelligence in the Legal Profession

Robotics, AI and the Future of Law

Part of the book series: Perspectives in Law, Business and Innovation ((PLBI))

Abstract

What are the possibilities of having AI lawyers in the true sense, as autonomous, decision-making agents that can legally advise or represent us? This chapter delves into the problems and possibilities of creating such systems. The idea faces a multitude of challenges, among which the challenge of translating law into an algorithm is the most fundamental to the creation of an AI lawyer. The chapter examines the linguistic aspects of such a translation and then turns to the ethical aspects of creating such lawyers and codifying their conduct, followed by a brief deliberation on whether Asimov's Three Laws of Robotics would be helpful in this regard. The ethical discussion results in a proposal for a concept of Fairness by Design, conceived as the minimum standard for ethical behavior instilled in all AI agents. The chapter also gives a general overview of the current state-of-the-art AI technologies employed in the legal domain and imagines the future of AI in law. Subsequently, the chapter imagines an AI agent dealing with and resolving the "Solomon test" of splitting the baby. Finally, it concludes that the advantage of having AI lawyers lies in the possibility of redefining the legal profession in its entirety and of making legal advice and justice more accessible to all.


Notes

  1. Asimov (1950), p. 189.
  2. Weller (2016).
  3. Kennedy (2017), p. 170.
  4. Kasperkevic (2015), quoted in Kennedy (2017), p. 172.
  5. Kennedy (2017), p. 172.
  6. Anderson (2011), p. 294.
  7. Farivar (2016), quoted in Kennedy (2017), p. 172.
  8. Cavoukian (2010).
  9. Fingas (2017), quoted in Kennedy (2017), p. 171.
  10. See also Mommers et al. (2009).
  11. Franklin and Graesser (1997), quoted in Brożek and Jakubiec (2017), p. 293.
  12. Franklin and Graesser (1997), quoted in Brożek and Jakubiec (2017), p. 294.
  13. Lame (2004), p. 382.
  14. Bibel (2004), p. 163.
  15. See Mommers et al. (2009), p. 52.
  16. McGinnis and Wasick (2015), p. 993.
  17. McGinnis and Wasick (2015), p. 997.
  18. Bibel (2004), p. 164.
  19. Mommers et al. (2009), p. 53.
  20. Susskind (2017), p. 191.
  21. See also Susskind (2017).
  22. This principle is applicable in legal search as in any other. See also McGinnis and Wasick (2015), p. 1017.
  23. Bouaziz et al. (2018), p. 2.
  24. IBM Watson. Available at: https://www.ibm.com/watson/. Accessed 27 April 2018.
  25. IBM Systems & Technology Group (2011), p. 5.
  26. Mommers et al. (2009), p. 55.
  27. See Wettig and Zehendner (2004).
  28. See Footnote 13.
  29. Lame (2004), p. 395.
  30. Baude and Sachs (2017), p. 1085.
  31. Baude and Sachs (2017), p. 1083.
  32. Husa (2016), p. 263.
  33. See Footnote 30.
  34. Baude and Sachs (2017), p. 1123.
  35. See Footnote 14.
  36. See Footnote 14.
  37. See Footnote 14.
  38. See Footnote 10.
  39. See Footnote 26.
  40. Baude and Sachs (2017), p. 1088.
  41. Husa (2016), p. 270.
  42. See Footnote 19.
  43. Moore (2010), p. 4.
  44. Moore (2010), p. 7.
  45. Kac and Ulam (1968), p. 4.
  46. Hodel (1995), p. 1.
  47. Henket (2003), p. 1.
  48. Leith (1988), p. 32.
  49. Bibel (2004), p. 176.
  50. See Footnote 49.
  51. For the sake of the research questions, it is vital to note that this chapter uses a generalized view of the discipline of law.
  52. Leith (1991), p. 201.
  53. Trevor et al. (2006), p. 1.
  54. Leith (1988), p. 34.
  55. See Footnote 54.
  56. Leith (1988), p. 35.
  57. Stolpe (2010), p. 247.
  58. Lame (2004), p. 380.
  59. See Footnote 58.
  60. Mommers et al. (2009), p. 72.
  61. Mommers et al. (2009), p. 75.
  62. Wallach and Allen (2008), p. 23.
  63. Wallach and Allen (2008), p. 17.
  64. Guarini (2013), p. 213.
  65. Wallach and Allen (2008), p. 13.
  66. Nyholm and Smids (2016), p. 1288.
  67. Nyholm and Smids (2016), p. 1285.
  68. Nyholm and Smids (2016), p. 1280.
  69. Shulman et al. (2009), p. 1.
  70. Anderson and Anderson (2007), p. 1.
  71. See also Anderson and Anderson (2007), p. 4.
  72. See also Anderson and Anderson (2007), p. 2.
  73. Anderson (2011), p. 287.
  74. Anderson and Anderson (2007), p. 4.
  75. Anderson and Anderson (2007), p. 1. The example used here is one of care robots being developed for elderly homes in the United States; emphasis is placed on the need to instill ethical principles in those robots to ensure proper care and safety.
  76. Asimov (1950).
  77. Clarke (2011), p. 256.
  78. Clarke (2011), p. 259.
  79. Asimov (1964), introduction; see also Weng et al. (2009).
  80. See, e.g., Weng et al. (2009); Anderson (2008), (2011).
  81. Clarke (2011), p. 260.
  82. Clarke (2011), p. 272.
  83. Nunez (2017), p. 204.
  84. This is not the case today, when no AI is actually intelligent. See Storrs Hall (2011), p. 512.
  85. See Footnote 8.
  86. Mackworth (2011), p. 347.
  87. Storrs Hall (2011), p. 522.
  88. Storrs Hall (2011), p. 523.
  89. Asimov (1950), p. 195.
  90. Anderson (2011), p. 295; Kant thought that human-to-human behavior can deteriorate if humans are taught to mistreat other species; therefore, it is in the interest of humanity that each human treat other species in a fair way. See Anderson (2011), p. 294.
  91. See Anderson (2011).
  92. See Bibel (2004).
  93. See Bibel (2004); Susskind (2017).
  94. Ambrogi (2017).
  95. See Oskamp and Lauritsen (2002).
  96. See also Leith (1988), p. 34.
  97. Remus and Levy (2016), p. 9.
  98. Henket (2003), p. 131.
  99. Oskamp and Lauritsen (2002), p. 232.
  100. See Footnote 99.
  101. Winick (2017); see also Cellan-Jones (2017).
  102. ROSS. Available at: https://rossintelligence.com. Accessed 27 April 2018.
  103. Nunez (2017), p. 193.
  104. Nunez (2017), p. 194.
  105. Oskamp and Lauritsen (2002).
  106. See also Henket (2003).
  107. Henket (2003), p. 128; Henket beautifully describes and exemplifies ways in which humans will be essential to the development and upkeep of these systems, at least for the foreseeable future.
  108. Henket (2003), p. 128.
  109. Henket (2003), p. 129.
  110. Oskamp et al. (1995).
  111. Neural networks are described as “an artificial imitation of their biological model, the brains and nervous systems of humans and animals. The ability to learn and the use of parallelism during data processing are important characteristics.” Wettig and Zehendner (2004).
  112. See also Bibel (2004).
  113. Henket (2003), p. 136.
  114. Leeson (2017), p. 41.
  115. See Footnote 114.
  116. For a good analysis of the role of evidence with reference to Solomon’s judgment, see LaRue (2004).
  117. LaRue (2004).
  118. See Footnote 117.
  119. See also Clarke (2011), p. 280.
  120. See Footnote 119.

References

  • Ambrogi, R. (2017). Fear not, lawyers, AI is not your enemy. https://abovethelaw.com/2017/10/fear-not-lawyers-ai-is-not-your-enemy/?rf=1. Accessed December 1, 2017.
  • Anderson, S. L. (2008). Asimov’s “Three laws of robotics” and machine metaethics. AI & Society, 22(4), 477–493.
  • Anderson, S. L., & Anderson, M. (2007). The consequences for human beings of creating ethical robots. Sine loco.
  • Anderson, S. L. (2011). The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
  • Asimov, I. (1950). I, Robot. London: Harper Voyager.
  • Asimov, I. (1964). The rest of the robots. New York: Collins.
  • Baude, W., & Sachs, S. E. (2017). The law of interpretation. Harvard Law Review, 130(4), 1082–1147.
  • Bibel, L. W. (2004). AI and the conquest of complexity in law. Artificial Intelligence and Law, 12, 159–180.
  • Bouaziz, J., et al. (2018). How artificial intelligence can improve our understanding of the genes associated with endometriosis: Natural language processing of the PubMed database. BioMed Research International, 2018, Article ID 6217812.
  • Brożek, B., & Jakubiec, M. (2017). On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293–304.
  • Cavoukian, A. (2010). Privacy by design: The definitive workshop. Identity in the Information Society, 3, 247–251.
  • Cellan-Jones, R. (2017). The robot-lawyers are here—And they’re winning. BBC News Technology. http://www.bbc.com/news/technology-41829534. Accessed January 1, 2017.
  • Clarke, R. (2011). Asimov’s laws of robotics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
  • Farivar, C. (2016). Lawyers: New court software is so awful it’s getting people wrongly arrested. Ars Technica. https://arstechnica.com/tech-policy/2016/12/court-software-glitches-result-in-erroneous-arrests-defense-lawyers-say/. Accessed January 1, 2018.
  • Fingas, J. (2017). Parking ticket chat bot now helps refugees claim asylum. Engadget. https://www.engadget.com/2017/03/06/parking-ticket-chat-bot-now-helps-refugees-claim-asylum/. Accessed January 1, 2018.
  • Guarini, M. (2013). Introduction: Machine ethics and the ethics of building intelligent machines. Topoi, 32, 213.
  • Henket, M. (2003). Great expectations: AI and law as an issue for legal semiotics. International Journal for the Semiotics of Law, 16(2), 123–138.
  • Hodel, R. E. (1995). An introduction to mathematical logic. Mineola, New York: Dover Publications Inc.
  • Husa, J. (2016). Translating legal language and comparative law. International Journal for the Semiotics of Law, 30(2), 261–272.
  • IBM Systems & Technology Group. (2011). White paper: Watson—A system designed for answers. The future of workload optimized systems design. http://www-03.ibm.com/innovation/us/engines/assets/9442_Watson_A_System_White_Paper_POW03061-USEN-00_Final_Feb10_11.pdf. Accessed January 1, 2018.
  • Kac, M., & Ulam, S. M. (1968). Mathematics and logic. Mineola, New York: Dover Publications Inc.
  • Kasperkevic, J. (2015). Google says sorry for racist auto-tag in photo app. The Guardian. https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app. Accessed January 1, 2018.
  • Kennedy, R. (2017). Algorithms and the rule of law. Legal Information Management, 17(3), 170–172.
  • Lame, G. (2004). Using NLP techniques to identify legal ontology components: Concepts and relations. Artificial Intelligence and Law, 12(4), 379–396.
  • LaRue, L. H. (2004). Solomon’s judgment: A short essay on proof. Law, Probability and Risk, 3(1), 13–31.
  • Leeson, P. T. (2017). Split the baby, drink the poison, carry the hot iron, swear on the Bible. Adapted from WTF?! An economic tour of the weird. Stanford: Stanford University Press.
  • Leith, P. (1988). The application of AI to law. AI & Society, 2(1), 31–46.
  • Leith, P. (1991). The computerized lawyer: A guide to the use of computers in the legal profession. London: Springer.
  • Mackworth, A. K. (2011). Architectures and ethics for robots: Constraint satisfaction as a unitary design framework. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
  • McGinnis, J. O., & Wasick, S. (2015). Law’s algorithm. Florida Law Review, 66, 991–1050.
  • Mommers, L., et al. (2009). Understanding the law: Improving legal knowledge dissemination by translating the formal sources of law. Artificial Intelligence and Law, 17(1), 51–78.
  • Moore, M. E. (Ed.). (2010). Philosophy of mathematics: Selected writings of Charles S. Peirce. Bloomington and Indianapolis: Indiana University Press.
  • Nunez, C. (2017). Artificial intelligence and legal ethics: Whether AI lawyers can make ethical decisions. Tulane Journal of Technology & Intellectual Property, 20, 189–204.
  • Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.
  • Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10(4), 227–236.
  • Oskamp, A., Tragter, M., & Groendijk, C. (1995). AI and law: What about the future? Artificial Intelligence and Law, 3(3), 209–215.
  • Remus, D., & Levy, F. (2016). Can robots be lawyers? Computers, lawyers and the practice of law. https://ssrn.com/abstract=2701092. Accessed January 1, 2018.
  • Shulman, C., Jonsson, H., & Tarleton, N. (2009). Machine ethics and superintelligence. In C. Reynolds & A. Cassinelli (Eds.), AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, October 1–2, University of Tokyo, Japan, Proceedings, pp. 95–97.
  • Stolpe, A. (2010). Norm system revision: Theory and application. Artificial Intelligence and Law, 18(3), 247–283.
  • Storrs Hall, J. (2011). Ethics for self-improving machines. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
  • Susskind, R. (2017). Tomorrow’s lawyers. Oxford: Oxford University Press.
  • Trevor, J. M., Bench-Capon, T. J. M., & Dunne, P. E. (2006). Argumentation in AI and law: Editor’s introduction. Artificial Intelligence and Law, 13(1), 1–8.
  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
  • Weller, C. (2016). The world’s first artificially intelligent lawyer was just hired at a law firm. http://www.businessinsider.com/the-worlds-first-artificially-intelligent-lawyer-gets-hired-2016-5?r=US&IR=T&IR=T. Accessed December 1, 2017.
  • Weng, Y. H., Chen, C. H., & Sun, C. T. (2009). Toward the human-robot co-existence society: On safety intelligence for next generation robots. International Journal of Social Robotics, 1, 267–282.
  • Wettig, S., & Zehendner, E. (2004). A legal analysis of human and electronic agents. Artificial Intelligence and Law, 12, 111–135.
  • Winick, E. (2017). Lawyer-bots are shaking up jobs. MIT Technology Review. https://www.technologyreview.com/s/609556/lawyer-bots-are-shaking-up-jobs/. Accessed January 1, 2017.


Acknowledgements

My gratitude goes out to Fredrik Persson and Andrej Bracanović for their input and immense patience during my many brainstorming sessions about robot lawyers.

Author information

Corresponding author

Correspondence to Dena Dervanović.



Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Dervanović, D. (2018). I, Inhuman Lawyer: Developing Artificial Intelligence in the Legal Profession. In: Corrales, M., Fenwick, M., Forgó, N. (eds) Robotics, AI and the Future of Law. Perspectives in Law, Business and Innovation. Springer, Singapore. https://doi.org/10.1007/978-981-13-2874-9_9

Download citation

  • DOI: https://doi.org/10.1007/978-981-13-2874-9_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-2873-2

  • Online ISBN: 978-981-13-2874-9

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
