
Imagination machines, Dartmouth-based Turing tests, & a potted history of responses

  • Curmudgeon Corner

Abstract

Mahadevan (2018, AAAI Conference. https://people.cs.umass.edu/~mahadeva/papers/aaai2018-imagination.pdf) proposes that we are on the cusp of imagination science, one of whose primary concerns will be the design of imagination machines. Programs have been written that are capable of generating jokes (Kim Binsted’s JAPE), producing line drawings that have been exhibited at such galleries as the Tate (Harold Cohen’s AARON), composing music in several styles reminiscent of such greats as Vivaldi and Mozart (David Cope’s Emmy), proving geometry theorems (Herb Gelernter’s IBM program), and inducing quantitative laws from empirical data (Pat Langley, Gary Bradshaw, Jan Zytkow, and Herbert Simon’s BACON). In recent years, Dartmouth has been hosting Turing Tests in creativity in three categories: short stories, sonnets, and dance music DJ sets. In this article, I will provide a brief and non-exhaustive survey of some plausible responses to these imagination machines and the related prospects for our understanding of the imagination.


Notes

  1. According to Mahadevan, this imagination science will also be concerned with extending data science beyond its current focus on learning probability distributions from samples.

  2. This should come as no surprise: Dartmouth College could well lay claim to being the spiritual home of AI. The term ‘AI’ was originally coined to distinguish the subject matter of a summer 1956 research workshop held at Dartmouth from automata studies and cybernetic research. This ‘Summer Research Project on Artificial Intelligence’ would in turn set the agenda for AI research and usher in the classical AI era.

  3. The Dartmouth-based Turing Tests for creativity stipulate that machines must generate category-specific creative output: Shakespearean or Petrarchan sonnets for Poetix, limericks for LimeriX, children’s stories for DigiKidLit, music in the style of a human composer for Style or Free Composition, and improvised music with a human performer for Improvisation. The creative output of these machines will be mixed with human creative output in the same category, and human judges will be asked to label each piece as generated by a human or by a machine. Any machine-produced creative output that is indistinguishable from the human-produced output in the same category would have passed the Turing Test for creativity; a minimal sketch of how this indistinguishability criterion might be operationalized follows these notes.

  4. Harnad (1992) makes a similar case in his defense of the Turing Test as an empirical criterion that addresses the attributional rather than the definitional problem of (conversational) intelligence. I have merely extended his argument to the Dartmouth-based Turing Tests for creativity. In short, the governing idea is this: however creativity might (ultimately) be defined, if the creative output of machines and the creative output of humans are indistinguishable by human experts, then we cannot attribute the mental states of creativity and creative uses of the imagination to humans while refraining from attributing these same mental states to machines.

  5. Given these (and other) intellectual inclinations, George Dyson (1997, p. 7) has seen fit to characterize Hobbes as the ‘patriarch of artificial intelligence’. One should note that this metaphor of the mechanical body (with automatons and clocks serving as the exemplary machines for Hobbes) is introduced in Leviathan with the intention of shedding light on the notion of the body politic.

  6. As a limiting case, one might think of consciousness, intelligence, thinking, and creative uses of the imagination as emergent properties that spring forth from an appropriately sophisticated level of hardware and software organization. See LaChat (1986).

  7. The concept of multiple realizability, first introduced by Putnam (1967) and later receiving its current expression from Lewis (1972), has vexed many philosophers of mind. Attempts have been made, vis-à-vis the multiple realizability thesis, to demonstrate that some version of a mind–brain identity theory is still viable. For a relatively recent discussion, see Polger (2002).

  8. For anecdotal evidence of how problem-solving algorithms have creatively subverted the expectations and intentions of their human AI designers and researchers and produced unexpected results, see Lehman et al. (2018).

  9. Besides (and as things stand), there remains a worrying lack of consensus about assessment criteria for creativity. Having addressed this elsewhere (cf. Chen 2018), I do not intend to revisit my arguments here.

  10. Defenders of the Easy Dupe Objection are effectively denying that the Turing Tests for creativity provide an adequate empirical criterion for the capacity to generate human-scale creative performance (or creative output). What is further implied is that only human beings remain capable of possessing mental states, whether we have in mind the mental state of creativity or the mental state of being easily duped.

  11. Weizenbaum (1966) provides a similar objection to the Turing Test. More specifically, Weizenbaum’s ELIZA program deceived people into mistaking it for a human being, even though ELIZA engages a person in conversation without understanding anything of what is being said.
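The contest rules do not themselves prescribe a statistical procedure, but one minimal (and entirely hypothetical) way of operationalizing ‘indistinguishable’ is to treat each judge’s label as a Bernoulli trial and to ask whether the panel’s accuracy exceeds chance. The sketch below assumes this framing; the function names, the 5% significance threshold, and the toy data are illustrative assumptions rather than any part of the Dartmouth tests.

    # Hypothetical operationalization of the indistinguishability criterion;
    # not part of the official contest rules.
    import math
    import random

    def binomial_p_value(successes, trials, p=0.5):
        """One-sided P(X >= successes) for X ~ Binomial(trials, p)."""
        return sum(
            math.comb(trials, k) * p ** k * (1 - p) ** (trials - k)
            for k in range(successes, trials + 1)
        )

    def passes_creativity_test(judge_labels, true_sources, alpha=0.05):
        """The machine 'passes' if judges cannot label the mixed pool
        correctly at a rate significantly better than chance."""
        correct = sum(lbl == src for lbl, src in zip(judge_labels, true_sources))
        return binomial_p_value(correct, len(true_sources)) >= alpha

    # Toy run: 40 pieces (half machine, half human); judges guess at roughly 55% accuracy.
    random.seed(0)
    sources = ["machine", "human"] * 20
    random.shuffle(sources)
    labels = [s if random.random() < 0.55 else ("human" if s == "machine" else "machine")
              for s in sources]
    print(passes_creativity_test(labels, sources))

On this reading, a machine passes not by fooling every judge but by leaving the panel unable to do reliably better than a coin flip.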

References

  • Brooks RA (1990) Elephants don’t play chess. Robot Auton Syst 6:3–15

  • Chen M (2018) Criterial problems in creative cognition research. Philos Psychol 31(3):368–382

  • Dyson G (1997) Darwin among the machines: the evolution of global intelligence. Helix Books, New York

  • Harnad S (1992) The Turing test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bull 3(4):9–10

  • Hobbes T (1651) Leviathan, ed & intro Michael Oakeshott. Basil Blackwell

  • Kim J (1996) Philosophy of mind. Westview Press, Boulder

  • LaChat M (1986) Artificial intelligence and ethics: an exercise in the moral imagination. AI Mag 7(2):70–79

  • Lehman J, Clune J, Misevic D, Adami C, Beaulieu J, Bentley PJ, Bernard S, Beslon G, Bryson DM, Chrabaszcz P, Cheney N, Cully A, Doncieux S, Dyer FC, Ellefsen KO, Feldt R, Fischer S, Forrest S, Frénoy A, Gagné C, Goff LL, Grabowski L, Hodjat B, Hutter F, Keller L, Knibbe C, Krcah P, Lenski RE, Lipson H, MacCurdy R, Maestre C, Miikkulainen R, Mitri S, Moriarty DE, Mouret JB, Nguyen A, Ofria C, Parizeau M, Parsons D, Pennock RT, Punch WF, Ray TS, Schoenauer M, Schulte E, Sims K, Stanley KO, Taddei F, Tarapore D, Thibault S, Weimer W, Watson R, Yosinski J (2018) The surprising creativity of digital evolution: a collection of anecdotes from the evolutionary computation and artificial life research communities. arXiv:1803.03453

  • Lewis D (1972) Psychophysical and theoretical identifications. Aust J Philos 50:249–58

  • Lovelace A (1953) Notes on Menabrea’s ‘Sketch of the Analytical Engine Invented by Charles Babbage’. In: Bowden BV (ed) Faster than thought. Sir Isaac Pitman & Sons, London

  • Mahadevan S (2018) Imagination machines: a new challenge for artificial intelligence. AAAI Conference, link available at https://people.cs.umass.edu/~mahadeva/papers/aaai2018-imagination.pdf. Accessed 17 May 2018

  • Minsky M (2006) The emotion machine: commonsense thinking, artificial intelligence, & the future of the human mind. Simon & Schuster, New York

  • Nichols S, Stich S (2003) Mindreading: an integrated account of pretence, self-awareness & understanding other minds. Oxford University Press, Oxford

  • Nilsson N (1995) Eye on the prize. AI Mag 16(2):9–17

  • Nilsson N (2010) The quest for artificial intelligence. Cambridge University Press, Cambridge

  • Penrose R (1989) The emperor’s new mind. Oxford University Press, Oxford

  • Polger T (2002) Putnam’s intuition. Philos Stud 109(2):143–70

  • Putnam H (1967) Psychological predicates. In: Capitan WH, Merrill DD (eds) Art, mind, & religion. University of Pittsburgh Press, Pittsburgh, pp 37–48

  • Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–60

  • Weinberg J, Meskin A (2006) Puzzling over the imagination: philosophical problems, architectural solutions. In: Nichols S (ed) The architecture of the imagination: new essays on pretence, possibility, & fiction. Oxford University Press, Oxford, pp 175–202

  • Weizenbaum J (1966) ELIZA—a computer program for the study of natural language communication between man and machine. Commun ACM 9(1):36–45

Curmudgeon Corner

Curmudgeon Corner is a short opinionated column on trends in technology, arts, science and society, commenting on issues of concern to the research community and wider society. Whilst the drive for super-human intelligence promotes potential benefits to wider society, it also raises deep concerns of existential risk, thereby highlighting the need for an ongoing conversation between technology and society. At the core of Curmudgeon concern is the question: What is it to be human in the age of the AI machine? -Editor.

Author information

Corresponding author

Correspondence to Melvin Chen.

About this article

Cite this article

Chen, M. Imagination machines, Dartmouth-based Turing tests, & a potted history of responses. AI & Soc 35, 283–287 (2020). https://doi.org/10.1007/s00146-018-0855-3
