
From Galatea 2.2 to Watson – And Back?

Part of the book series: Ius Gentium: Comparative Perspectives on Law and Justice ((IUSGENT,volume 25))

Abstract

When Ken Jennings, 74-time winner of the Jeopardy TV quiz, lost against a room-size IBM computer, he wrote on his video screen: ‘I, for one, welcome our new computer overlords’ (citing a popular ‘Simpsons’ phrase). The New York Times wrote that ‘for IBM’ this was ‘proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them’ (Markoff 2011). Richard Powers anticipated this event in his 1995 novel Galatea 2.2, about Helen, ‘a box’ that ‘had learned how to read, powered by nothing more than a hidden, firing profusion. Neural cascade, trimmed by self-correction, (…)’ (at 31). Powers describes an experiment in which a neural net is trained to take the Master’s Comprehensive Exam in English literature. The novel traces the relationship that develops between the main character and the computer he is teaching, all the while raising and rephrasing the questions that have haunted AI research. This chapter addresses the potential implications of engaging computing systems as smart competitors or smart companions, raising the question of what it would take to acknowledge their agency by granting them legal personhood.

Interestingly, it is the science part of the narrative, the tale of a machine that learned to live, that proves to be the more moving, the more human one.

Cohen (1995)


Notes

  1.

    Naming it Helen does remind one of Trojan horses, a nice overlap between world literature and computer science.

  2.

    See Dreyfus (1979, 1992) for a sustained critique from the perspective of phenomenology. His work had a major influence in the field.

  3.

    A remarkable attempt to link the fundamental uncertainties uncovered by the natural sciences with the humanities was made by Prigogine and Stengers in their well-known discussion of chaos theory. The original French title of their book was La nouvelle alliance. Métamorphose de la science (1979).

  4.

    Turing’s (1950) article is a very sophisticated and unorthodox exploration of what he calls ‘the imitation game’. Many of the objections that have been raised since are already foreseen and countered by Turing in this article; see e.g. Russell and Norvig (2009: 1020–1030). The point is not whether one agrees, but to detect to what extent his predictions have come true. See Floridi and Taddeo (2009) for an evaluation of the 2008 Loebner Contest, a yearly event that imitates the Turing Test and nominates ‘the most human machine’ as well as ‘the most human human’. See Christian (2011), who played as a human confederate in the 2009 Loebner Contest and came out as ‘most human human’.

  5.

    See Christian (2011), chapter 7, ‘Barging in’, on the silliness as well as the rigidity of many chatbots’ conversation.

  6.

    See Christian (2011), chapter 5, ‘Getting Out of Book’, on the reliance on recorded games.

  7.

    Futurist Kurzweil (2005) coined the term ‘singularity’ for the moment in time when all problems that are now intractable will be resolved. This will be the moment when ‘humans transcend biology’. Only those who believe that all problems that matter are computable will be relieved to hear this. My point is that even if all problems are computable, they are usually computable in different ways, with different outcomes. Back to square one?

  8.

    This kind of robust knowledge, however, requires transparency about the translations involved, i.e. access to the whole process of knowledge construction. This is not possible as long as this kind of knowledge production is protected by trade secrets and/or intellectual property rights.

  9.

    Black (2002). As we know, you can use a knife to slice beef or to kill your fellow human; though a technology in itself is neither good nor bad, it is never neutral (Kranzberg 1986).

  10.

    See the IBM White Paper (2011): To achieve the most right answers (in the case of Jeopardy: the most right questions) at a competitive speed, IBM deploys: (1) massive parallelism to consider multiple interpretations and hypotheses; (2) many different experts to integrate, apply and contextually evaluate loosely coupled probabilistic question and content analyses; (3) confidence estimation on the basis of a range of combined scores; and finally (4) the integration of deep and shallow knowledge, leveraging many loosely formed ontologies.
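    Step (3) above can be sketched in code. The following is a purely illustrative toy, not IBM’s implementation: it assumes each candidate answer receives a score from several ‘experts’, combines them with assumed per-expert weights into one confidence value, and ranks the candidates. The candidate names and weights are invented for the example.

    ```python
    # Hypothetical sketch of confidence estimation from combined scores
    # (not IBM's code; candidates and weights are illustrative assumptions).
    from typing import Dict, List, Tuple

    def rank_candidates(
        candidates: Dict[str, List[float]],  # candidate answer -> scores from different "experts"
        weights: List[float],                # assumed weight per expert
    ) -> List[Tuple[str, float]]:
        """Return candidates sorted by a weighted-sum confidence score, highest first."""
        scored = {
            answer: sum(w * s for w, s in zip(weights, scores))
            for answer, scores in candidates.items()
        }
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

    ranking = rank_candidates(
        {"Toronto": [0.2, 0.4, 0.1], "Chicago": [0.7, 0.6, 0.9]},
        weights=[0.5, 0.3, 0.2],
    )
    print(ranking[0][0])  # prints "Chicago", the highest-confidence candidate
    ```

    The real system learns such weights from training data and only buzzes in when the top confidence clears a threshold; the weighted sum here merely illustrates the idea of merging a range of scores into one confidence estimate.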

  11.

    Data science is ‘the new kid on the block’. It provides a set of tools to infer knowledge from Big Data and is now used in all the sciences, from the natural sciences to the life sciences, medicine and healthcare, the humanities and the social sciences, as well as in marketing and customer relationship management, forensic science and police intelligence. See notably Mitchell (2006), Fayyad et al. (1996), Custers (2004), and Hildebrandt and Gutwirth (2008).

  12.

    In fact, in the case of the game of Jeopardy, Watson has to find precise questions to specific answers.

  13.

    Moore (1965), Intel co-founder, predicted that the computing power of chips would increase exponentially (doubling every 2 years). The prediction became a goal for the industry which has so far been met.
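    The prediction amounts to exponential growth: a count that doubles every two years grows by a factor of 2^(t/2) after t years. A minimal sketch (the starting figure of 2,300 transistors, roughly the Intel 4004 of 1971, is used only for illustration, and the integer division is a simplification):

    ```python
    # Illustrative sketch of Moore's prediction: counts double every two years.
    def projected_transistors(initial: int, years: int, doubling_period: int = 2) -> int:
        """Project a transistor count forward, doubling once per period."""
        return initial * 2 ** (years // doubling_period)

    # Ten years of doubling every two years is five doublings: a 32-fold increase.
    print(projected_transistors(2300, 10))  # prints 73600
    ```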

  14.

    Meanwhile, artificial neural networks have been trained to recognize faces from unlabeled images (using large-scale unsupervised learning); see Le et al. (2012), and also Markoff (2012).

  15.

    The addition of 2.2 to Galatea seems to refer to version 2.2 of the program that constitutes Helen.

  16.

    For a more extensive discussion see Cole (2009).

  17.

    Shaw’s 1913 Pygmalion (Shaw 1994) was the inspiration for the romantic musical My Fair Lady (world premiere 1956 on Broadway). Note that Galatea translates as ‘she who is white as milk’, which seems a ‘fair’ translation of Shaw’s Eliza Doolittle, and remember that Weizenbaum’s therapeutic machine was called Eliza.

  18.

    This is – evidently – not to discredit Shakespeare or The Merchant of Venice. It is to say that we cannot take for granted what is relevant and should not too easily think in terms of a canon.

  19.

    Huff argues that the lack of the legal institution of the corporation ‘caused’ the stagnation of the sciences in the Islamic and Chinese traditions.

  20.

    E.g. Allen and Widdison (1996), Chopra and White (2011), Teubner (2006), Sartor (2002), and Wettig and Zehender (2004).

  21.

    On this issue e.g. Dewey (1926), Dahiyat (2010), Dan-Cohen (1986), De Doelder and Tiedemann (1995), Eser et al. (1999), Fisse and Braithwaite (1993), French (1979), and Wells (2001).

  22.

    See e.g. Wells (2001) at 70: ‘Davis proposes a variation based on social contract theory such that punishment for a strict liability offence is related to the unfair advantage gained by the offender. The principle of just punishment requires us, Davis asserts, to measure punishment in accordance with the seriousness of the harm, but how is seriousness to be measured? One suggested measure could be the unfair advantage the offender gains by doing what the law forbids’. She refers to Davis (1985).

  23.

    GOSS and MASS are my acronyms. Note that GOSS refers to quantitative social science, not to theoretical social science that builds on e.g. Weber or Durkheim. Most simulations of multi-agent systems still depend on methodological individualism, because this simplifies the calculation of emergent behaviours. See e.g. Helbing and Balietti (2011) who suggest that regarding the social sciences ‘investments into experimental research and data mining must be increased to reach the standards in the natural and engineering sciences’; they term this a strategy ‘to quickly increase the objective knowledge about social and economic systems’.

  24.

    Wells refers to Dan-Cohen (1986).

  25.

    I am using the machine-metaphor here to draw attention to non-human systems that consist of interacting human and/or non-human agents, though some would claim that individual human beings are also ‘intelligent machines’.

  26.

    Note that for an entity to act as an agent on behalf of a principal, the agent must be a legal subject. Only then can the ‘intelligent machine’ bind the principal to a contract with a third party.

References

  • Allen, R., and R. Widdison. 1996. Can computers make contracts? Harvard Journal of Law and Technology 9(1): 25–52.


  • Black, E. 2002. IBM and the Holocaust. The strategic alliance between Nazi Germany and America’s most powerful corporation. New York: Crown.


  • Bourgine, P., and F.J. Varela. 1992. Towards a practice of autonomous systems. In Towards a practice of autonomous systems. Proceedings of the first European conference on artificial life, ed. F.J. Varela and P. Bourgine. Cambridge, MA: MIT Press, xi–xviii.


  • Brooks, R. 1991. Intelligence without reason. In: Proceedings of the twelfth international joint conference on artificial intelligence, 569–595, Sydney.


  • Chopra, S., and L.F. White. 2011. A legal theory for autonomous artificial agents. Ann Arbor: University of Michigan Press.


  • Christian, B. 2011. The most human human. What talking with computers teaches us about what it means to be alive. New York: Doubleday.


  • Citron, D.K. 2007. Technological due process. Washington University Law Review 85: 1249–1313.


  • Cohen, R. 1995. Pygmalion in the computer lab. New York Times, July 23. Available at: http://www.nytimes.com/books/98/06/21/specials/powers-galatea.html. Last accessed 10 Oct 2012.

  • Cole, D. 2009. The Chinese room argument. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta, Winter 2009 ed. Available at: http://plato.stanford.edu/archives/win2009/entries/chinese-room/. Last accessed 14 Aug 2012.


  • Cover, R., M. Minow, M. Ryan, and A. Sarat. 1995. Narrative, violence, and the law. The essays of Robert Cover. Ann Arbor: University of Michigan Press.


  • Custers, B. 2004. The power of knowledge. Ethical, legal, and technological aspects of data mining and group profiling in epidemiology. Nijmegen: Wolf Legal Publishers.


  • Dahiyat, E.A.R. 2010. Intelligent agents and liability: Is it a doctrinal problem or merely a problem of explanation? Artificial Intelligence and Law 18(1): 103–121.


  • Damasio, A.R. 2000. The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Inc.


  • Dan-Cohen, M. 1986. Rights, persons, and organizations: A legal theory for bureaucratic society. London: University of California Press.


  • Davis, M. 1985. How to make the punishment fit the crime? In Criminal justice, ed. J.R. Pennock and J.M. Chapman. New York/London: New York University Press.


  • De Doelder, H., and K. Tiedemann (eds.). 1995. Criminal liability of corporations. Dordrecht: Kluwer Law International.


  • De Mul, J., M. Coolen, and H. Ernste (eds.). (forthcoming). Artificial by nature. Plessner’s philosophical anthropology. Perspectives and prospects. Amsterdam: Amsterdam University Press.


  • Dewey, J. 1926. The historic background of corporate legal personality. Yale Law Journal 35(6): 655–673.


  • Dreyfus, H.L. 1979. What computers can’t do: The limits of artificial intelligence. New York: Harper & Row.


  • Dreyfus, H.L. 1992. What computers still can’t do: A critique of artificial reason. Cambridge, MA/London: MIT Press.


  • Duff, R.A. 2001. Punishment, Communication, and Community. Oxford: Oxford University Press.


  • Eser, A., G. Heine, and B. Huber (eds.). 1999. Criminal responsibility of legal and collective entities. International colloquium Berlin 1998. Freiburg: Edition Iuscrim.


  • Fayyad, U.M., G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy. 1996. Advances in knowledge discovery and data mining. Menlo Park: AAAI Press/MIT Press.


  • Fisse, B., and J. Braithwaite. 1993. Corporations, crime and accountability. Cambridge: Cambridge University Press.


  • Floridi, L., and M. Taddeo. 2009. Turing’s imitation game: Still an impossible challenge for all machines and some judges – An evaluation of the 2008 Loebner contest. Minds and Machines 19(1): 145–150.


  • French, P.A. 1979. The corporation as a moral person. American Philosophical Quarterly 16(3): 207–215.


  • Gaakeer, J. 1998. Hope springs eternal: An introduction to the work of James Boyd White. Amsterdam: Amsterdam University Press.


  • Helbing, D., and S. Balietti. 2011. From Social Data Mining to Forecasting Socio-economic Crises. The European Physical Journal Special Topics 195(1): 3–68.


  • Hildebrandt, M. 2011a. Criminal liability and ‘smart’ environments. In Philosophical foundations of criminal law, ed. A. Duff and S. Green. Oxford: Oxford University Press.


  • Hildebrandt, M. 2011b. Autonomic and autonomous “thinking”: Preconditions for criminal accountability. In Law, human agency and autonomic computing, ed. M. Hildebrandt and A. Rouvoy, 141–160. Abingdon: Routledge.


  • Hildebrandt, M. (forthcoming). Eccentric positionality as a precondition of the criminal liability of artificial life forms. In Artificial by nature. Plessner’s philosophical anthropology. Perspectives and prospects, ed. J. de Mul, M. Coolen, and H. Ernste. Amsterdam: Amsterdam University Press.


  • Hildebrandt, M., and S. Gutwirth. 2008. Profiling the European citizen. Cross-disciplinary perspectives. Dordrecht: Springer.


  • Hildebrandt, M., and A. Rouvroy. 2011. Law, human agency and autonomic computing. The philosophy of law meets the philosophy of technology. Abingdon: Routledge.


  • Huff, T.E. 2003. The rise of early modern science. Islam, China, and the West. Cambridge: Cambridge University Press.


  • IBM White Paper. 2011. Watson – A system designed for answers. The future of workload optimization. Available at: http://www.itworld.com/business/242693/watson-system-designed-answers-future-workload-optimized-systems-design. Last accessed 10 Oct 2012.

  • Ihde, D. 1991. Instrumental realism: The interface between philosophy of science and philosophy of technology. Bloomington: Indiana University Press.


  • Ihde, D. 2008. Ironic technics. Copenhagen: Automatic Press.


  • Karnow, C.E.A. 1997. Future codes: Essays in advanced computer technology and the law. Boston: Artech House.


  • Koops, B.-J., M. Hildebrandt, and D.-O. Jacquet-Chiffelle. 2010. Bridging the accountability gap: Rights for new entities in the information society? Minnesota Journal of Law Science & Technology 11(2): 497–561.


  • Kranzberg, M. 1986. Technology and history: Kranzberg’s laws. Technology and Culture 27(3): 544–560.


  • Kurzweil, R. 2005. The singularity is near: When humans transcend biology. New York: Viking.


  • Le, Q.V., M. Ranzato, R. Monga, M. Devin, K. Chen, G.S. Corrado, J. Dean, and A.Y. Ng. 2012. Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning (ICML) 2012. Madison: Omnipress. Available at: http://arxiv.org/abs/1112.6209. Last accessed 10 Oct 2012.

  • Leenes, R.E. 1998. Hercules of Karneades. Hard cases in recht en rechtsinformatica. Enschede: Twente University Press.


  • Markoff, J. 2011. On ‘Jeopardy!’ Watson win is all but trivial. The New York Times. Available at: http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html. Last accessed 19 Oct 2012.


  • Markoff, J. 2012. How many computers to identify a cat? 16,000. New York Times, June 25.


  • Minsky, M. 1988. The society of mind. New York: Simon & Schuster.


  • Minsky, M. 2006. The emotion machine. New York: Simon & Schuster.


  • Mitchell, T.M. 2006. The discipline of machine learning (Technical Report CMU-ML-06-108). Pittsburgh: Carnegie Mellon University, School of Computer Science. Available at: http://www-cgi.cs.cmu.edu/~tom/pubs/MachineLearningTR.pdf. Last accessed 10 Oct 2012.

  • Moore, G.E. 1965. Cramming more components onto integrated circuits. Electronics Magazine 38(8): 114–117.


  • Pfeifer, R., and J. Bongard. 2007. How the body shapes the way we think. A new view of intelligence. Cambridge, MA: MIT Press.


  • Picard, R. 1995. Affective computing. Cambridge, MA: MIT Press.


  • Plessner, H. 1975. Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie. Berlin: De Gruyter.


  • Powers, R. 1995. Galatea 2.2. New York: Picador.


  • Powers, R. 2011. What is artificial intelligence? The New York Times, February 5.


  • Rosu, A. 2002. Parody as cultural memory in Richard Powers’s Galatea 2.2. Connotations 12(2/3): 139–154.


  • Russell, S., and P. Norvig. 2009. Artificial intelligence: A modern approach. Upper Saddle River: Prentice Hall.


  • Sartor, G. 2002. Agents in cyberlaw. In The law of electronic agents: Selected revised papers from the workshop on the Law of Electronic Agents (LEA 2002), ed. G. Sartor, 3–12. Bologna: CIRSFID, Università di Bologna.


  • Searle, J. 1980. Minds, brains, and programs. The Behavioral and Brain Sciences 3(3): 417–457.


  • Shannon, C.E. 1948a. A mathematical theory of communication. Bell System Technical Journal 27(3): 379–423.


  • Shannon, C.E. 1948b. A mathematical theory of communication. Bell System Technical Journal 27(4): 623–656.


  • Shaw, G.B. 1994. Pygmalion. Mineola, NY: Dover Publications.


  • Simon, H.A. 1996. The sciences of the artificial. Cambridge, MA: MIT Press.


  • Solum, L.B. 1992. Legal personhood for artificial intelligences. North Carolina Law Review 70(2): 1231–1287.


  • Steels, L. 1995. When are robots intelligent autonomous agents? Robotics and Autonomous Systems 15: 3–9.


  • Teubner, G. 2006. Rights of non-humans? Electronic agents and animals as new actors in politics and law. Journal of Law and Society 33: 497–521.


  • Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433–460.


  • Van der Linden-Smith, T. 2001. Een duidelijk geval: geautomatiseerde afhandeling, NWO/ITeR-serie 41. Den Haag: SDU Uitgevers.


  • Velasquez, J.D. 1998. Modeling emotion-based decision making. In Proceedings of the 1998 fall symposium emotional and intelligent: The tangled knot of cognition, Technical Report FS-98-03, ed. D. Canamero, 164–169. Menlo Park: AAAI Press. Available at: http://www.global-media.org/neome/docs/PDF's/01%20-%20the%20best%20ones/emotional%20agents.pdf. Last accessed 10 Oct 2012.

  • Weizenbaum, J. 1976. Computer power and human reason: From judgment to calculation. San Francisco: W.H. Freeman & Co.


  • Wells, C. 2001. Corporations and criminal responsibility. Oxford: Oxford University Press.


  • Wettig, S., and E. Zehender. 2004. A legal analysis of human and electronic agents. Artificial Intelligence and Law 12(1–2): 111–135.


  • White, J.B. 1990. Justice as translation: An essay in cultural and legal criticism. Chicago: University of Chicago Press.


  • Wiener, N. 1948. Cybernetics: Or control and communication in the animal and the machine. Cambridge, MA: MIT Press.



Author information


Correspondence to Mireille Hildebrandt.



Copyright information

© 2013 Springer Science+Business Media Dordrecht.

About this chapter

Cite this chapter

Hildebrandt, M. (2013). From Galatea 2.2 to Watson – And Back? In: Hildebrandt, M., Gaakeer, J. (eds) Human Law and Computer Law: Comparative Perspectives. Ius Gentium: Comparative Perspectives on Law and Justice, vol 25. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-6314-2_2
