
The Status and Future of the Turing Test

Chapter

Part of the book series: Studies in Cognitive Systems ((COGS,volume 30))

Abstract

The standard interpretation of the imitation game is defended over the rival gender interpretation, though it is noted that Turing himself proposed several variations of his imitation game. The Turing test is then justified as an inductive test, not as an operational definition as commonly suggested. Turing’s famous prediction about his test being passed at the 70% level is disconfirmed by the results of the Loebner 2000 contest and the absence of any serious Turing test competitors from AI on the horizon. But reports of the death of the Turing test and AI are premature. AI continues to flourish, and the test continues to play an important philosophical role in AI. Intelligence attribution, methodological, and visionary arguments are given in defense of a continuing role for the Turing test. With regard to Turing’s predictions, one is disconfirmed, one is confirmed, and another is still outstanding.



References

  • Block, N. (1981), ‘Psychologism and behaviorism’, Philosophical Review 90, pp. 5–43.
  • Block, N. (1990), ‘The Computer Model of the Mind’, in D.N. Osherson, E.E. Smith, eds., Thinking: An Invitation to Cognitive Science, Cambridge, Massachusetts: MIT Press, pp. 247–289.
  • Bringsjord, S., Bello, P. and Ferrucci, D. (2001), ‘Creativity, the Turing test and the (better) Lovelace test’, Minds and Machines 11, pp. 3–27.
  • Colby, K.M. (1981), ‘Modeling a paranoid mind’, Behavioral and Brain Sciences 4, pp. 515–560.
  • Colby, K.M., Hilf, F.D., Weber, S. and Kraemer, H.C. (1972), ‘Turing-like indistinguishability tests for the validation of a computer simulation of paranoid processes’, Artificial Intelligence 3, pp. 199–221.
  • Copeland, B.J. (1999), ‘A Lecture and Two Radio Broadcasts on Machine Intelligence by Alan Turing’, in K. Furukawa, D. Michie, S. Muggleton, eds., Machine Intelligence, Oxford: Oxford University Press, pp. 445–476.
  • Copeland, B.J. (2000), ‘The Turing test’, Minds and Machines 10, pp. 519–539.
  • Erion, G.J. (2001), ‘The Cartesian test for automatism’, Minds and Machines 11, pp. 29–39.
  • Ford, K.M. and Hayes, P.J. (1998), ‘On Computational Wings: Rethinking the Goals of Artificial Intelligence’, Scientific American Presents 9, pp. 78–83.
  • French, R.M. (1990), ‘Subcognition and the limits of the Turing test’, Mind 99, pp. 53–65.
  • Genova, J. (1994), ‘Turing’s Sexual Guessing Game’, Social Epistemology 8, pp. 313–326.
  • Guha, R.V. and Lenat, D.B. (1994), ‘Enabling agents to work together’, Communications of the ACM 37, pp. 127–142.
  • Harnad, S. (1991), ‘Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem’, Minds and Machines 1, pp. 43–54.
  • Hauser, L. (2001), ‘Look who’s moving the goal posts now’, Minds and Machines 11, pp. 41–51.
  • Hayes, P.J. and Ford, K.M. (1995), ‘Turing Test Considered Harmful’, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pp. 972–977.
  • Ince, D.C., ed. (1992), Collected Works of A.M. Turing: Mechanical Intelligence, Amsterdam: North Holland.
  • Lenat, D.B. (1990), ‘CYC: Toward Programs with Common Sense’, Communications of the ACM 33, pp. 30–49.
  • Lenat, D.B. (1995), ‘Artificial Intelligence’, Scientific American, pp. 80–82.
  • Lenat, D.B. (1995), ‘CYC: A large-scale investment in knowledge infrastructure’, Communications of the ACM 38, pp. 33–38.
  • Lenat, D.B. (1995), ‘Steps to Sharing Knowledge’, in N.J.I. Mars, ed., Towards Very Large Knowledge Bases, IOS Press, pp. 3–6.
  • Meltzer, B. and Michie, D., eds. (1969), Machine Intelligence, Edinburgh: Edinburgh University Press.
  • Michie, D. (1996), ‘Turing’s Test and Conscious Thought’, in P. Millican, A. Clark, eds., Machines and Thought, Oxford: Clarendon Press, pp. 27–51.
  • Millar, P.H. (1973), ‘On the Point of the Imitation Game’, Mind 82, pp. 595–597.
  • Moor, J.H. (1976), ‘An Analysis of the Turing Test’, Philosophical Studies 30, pp. 249–257.
  • Moor, J.H. (1978), ‘Explaining Computer Behavior’, Philosophical Studies 34, pp. 325–327.
  • Moor, J.H. (1987), ‘Turing Test’, in S.C. Shapiro, ed., Encyclopedia of Artificial Intelligence, New York: John Wiley and Sons, pp. 1126–1130.
  • Moor, J.H. (1988), ‘The Pseudorealization Fallacy and the Chinese Room Argument’, in J.H. Fetzer, ed., Aspects of Artificial Intelligence, Dordrecht: Springer Science+Business Media Dordrecht, pp. 35–53.
  • Moor, J.H. (1998), ‘Assessing Artificial Intelligence and its Critics’, in T.W. Bynum, J.H. Moor, eds., The Digital Phoenix: How Computers Are Changing Philosophy, Oxford: Basil Blackwell Publishers, pp. 213–230.
  • Moor, J.H. (2000a), ‘Turing Test’, in A. Ralston, E.D. Reilly, D. Hemmendinger, eds., Encyclopedia of Computer Science, 4th edition, London: Nature Publishing Group, pp. 1801–1802.
  • Moor, J.H. (2000b), ‘Thinking Must be Computation of the Right Kind’, Proceedings of the Twentieth World Congress of Philosophy 9, Bowling Green, OH: Philosophy Documentation Center, Bowling Green State University, pp. 115–122.
  • Narayanan, A. (1996), ‘The intentional stance and the imitation game’, in P. Millican, A. Clark, eds., Machines and Thought, Oxford: Clarendon Press.
  • Piccinini, G. (2000), ‘Turing’s rules for the imitation game’, Minds and Machines 10, pp. 573–582.
  • Searle, J.R. (1980), ‘Minds, brains and programs’, Behavioral and Brain Sciences 3, pp. 417–457.
  • Stalker, D.F. (1978), ‘Why Machines Can’t Think: A Reply to James Moor’, Philosophical Studies 34, pp. 317–320.
  • Sterrett, S.G. (2000), ‘Turing’s two tests for intelligence’, Minds and Machines 10, pp. 541–559.
  • Traiger, S. (2000), ‘Making the right identification in the Turing test’, Minds and Machines 10, pp. 561–572.
  • Turing, A.M. (1945), ‘Proposal for Development in the Mathematics Division of an Automatic Computing Engine (ACE)’, in D.C. Ince, ed., Collected Works of A.M. Turing: Mechanical Intelligence, Amsterdam: North Holland (1992), pp. 1–86.
  • Turing, A.M. (1947), ‘Lecture to the London Mathematical Society on 20 February 1947’, in D.C. Ince, ed., Collected Works of A.M. Turing: Mechanical Intelligence, Amsterdam: North Holland (1992), pp. 87–105.
  • Turing, A.M. (1948), ‘Intelligent Machinery’, National Physical Laboratory Report, in Meltzer and Michie (1969).
  • Turing, A.M. (1950), ‘Computing Machinery and Intelligence’, Mind 59, pp. 433–460.
  • Turing, A.M. (1951a), ‘Can Digital Computers Think?’, BBC Third Programme, in Copeland (1999).
  • Turing, A.M. (1951b), ‘Intelligent Machinery, A Heretical Theory’, Manchester University Lecture, in Copeland (1999).
  • Turing, A.M. (1952), ‘Can Automatic Calculating Machines Be Said to Think?’, BBC Third Programme, in Copeland (1999).
  • Whitby, B. (1996), ‘The Turing Test: AI’s Biggest Blind Alley?’, in P. Millican, A. Clark, eds., Machines and Thought, Oxford: Clarendon Press, pp. 53–62.
  • Zdenek, S. (2001), ‘Passing Loebner’s Turing Test: A Case of Conflicting Discourse Functions’, Minds and Machines 11, pp. 53–76.




Copyright information

© 2003 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Moor, J.H. (2003). The Status and Future of the Turing Test. In: Moor, J.H. (eds) The Turing Test. Studies in Cognitive Systems, vol 30. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0105-2_11


  • DOI: https://doi.org/10.1007/978-94-010-0105-2_11

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-1205-1

  • Online ISBN: 978-94-010-0105-2

  • eBook Packages: Springer Book Archive
