Abstract
Can computers think? Debates over this confused question fill the pages of the popular press, especially those of the pseudo-scientific kind. It seems that with every technological advance achieved by researchers in artificial intelligence we are once again assured that we are just around the corner from witnessing computers that think and act like human beings, and, of course, surpass them in many respects. Prophecies of doom and of glory about the frightening or inspiring day when computers take over the world sell newspapers and movie tickets, and, somewhat surprisingly, also sustain the careers of some dramatic academics and researchers.
Notes
1. The functionalist fallacy underlies many different reductionist tendencies that we will not address here, including, most prominently, the psychological school of behaviorism. Behaviorists advise us to abandon the study of intentions and their various inner representations and focus instead on the study of predictable behavioral regularities. Of course, without any understanding of intentions, behaviorists find themselves in a predicament similar to that of the actor who imitates someone’s behavior accurately in one context but does not know how to extend the imitation into another context. For the only thing that leads from regularities in a given context to their possible and unknown elaborations in other contexts is intention. As far as I am aware, Polanyi (1952) was the first to express this insight as a devastating criticism of the argument that computers can think. This was a particular application of his broader argument against the reduction of machines in general to their purely physical descriptions (Polanyi 1951, p. 21). Incidentally, Polanyi was a friend and colleague of Alan Turing, whose work is discussed below. Polanyi’s essay on cybernetics was written explicitly as a critique of the idea that a Turing machine succeeds in capturing something of the nature of human thought.
2. See Popper (1950a, b), and especially the final paragraphs of part II, for a fascinating attempt to explain this point. The thought experiments proposed there, which concern the limits on the ability of predicting machines to predict their own future states, are vital background for Agassi’s outstanding examples of possible elaborations of Turing’s test (Turing machines teaching other Turing machines to identify Turing machines that impersonate humans, etc.). A computer, Popper argues there, given enough time, can of course prove almost countless arithmetic truths, but it cannot tell the interesting ones from the uninteresting ones. Given, for example, that a computer has shown that 2 + 2 = 4, it will also prove that 2 + 2 is different from 4 + 1, different from 4 + 2, different from 4 + 3, and so on. Of course, we can try to program a computer to distinguish between interesting and uninteresting truths using an algorithm that expresses an assumption about how this might be done, but this does not solve the problem—it merely pushes the problem back to another meta-level, since our assumptions about what is deserving of interest change according to context and require context in order to have any sort of grounding. Suppose, Popper adds, that we have absent-mindedly entered inconsistent axioms into a computing machine. It will proceed to derive from these axioms almost countless valid theorems without realizing the utter futility of the task. But how, asks Popper, would this computing machine handle the context-sensitive question of how to fix the faulty system of axioms? And how would it distinguish between interesting and uninteresting solutions to this problem?
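Popper’s arithmetic example can be rendered as a minimal sketch. The enumeration scheme below is my own illustration, not Popper’s or the author’s: a procedure that, starting from the single informative truth 2 + 2 = 4, mechanically generates an unbounded supply of valid but uninteresting consequences, with nothing in the procedure itself marking the informative truth off from the trivial ones.

```python
def trivial_truths(limit):
    """Enumerate valid but uninteresting consequences of 2 + 2 = 4.

    Each generated statement is checked and each check succeeds;
    the procedure supplies no criterion of interest.
    """
    assert 2 + 2 == 4  # the one interesting starting point
    truths = []
    for n in range(1, limit + 1):
        assert 2 + 2 != 4 + n  # valid, and utterly uninformative
        truths.append(f"2 + 2 != {4 + n}")
    return truths

print(trivial_truths(3))
```

Raising `limit` only lengthens the list of trivialities; selecting the interesting truths would require a further criterion, which, as the note observes, merely pushes the problem back a meta-level.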
Bibliography
Agassi, J. (1988). Analogies hard and soft. In D. H. Helman (Ed.), Analogical reasoning, Synthese Library (Vol. 197, pp. 401–419). Dordrecht: Springer.
Bar-Am, N. (2012). Extensionalism in context. Philosophy of the Social Sciences, 42(4), 543–560.
Bar-Hillel, Y. (1964). Language and information: Selected essays on their theory and application. Reading (MA): Addison-Wesley.
Bar-Hillel, Y. (1970). Aspects of language: Essays in philosophy of language, linguistic philosophy, and methodology of linguistics. Jerusalem: Magnes.
Bunge, M. (2003). Emergence and convergence: Qualitative novelty and the unity of knowledge. Toronto: University of Toronto Press.
Polanyi, M. (1951). The logic of liberty. Chicago: The University of Chicago Press.
Polanyi, M. (1952). The hypothesis of cybernetics. The British Journal for the Philosophy of Science, 2(8), 312–315.
Popper, K. (1950a). Indeterminism in quantum physics and in classical physics: Part I. British Journal for the Philosophy of Science, 1(2), 117–133.
Popper, K. (1950b). Indeterminism in quantum physics and in classical physics: Part II. British Journal for the Philosophy of Science, 1(3), 173–195.
Quine, W. V. O. (2008). Confessions of a confirmed extensionalist. Cambridge (MA): Harvard University Press.
© 2016 Springer International Publishing Switzerland
Bar-Am, N. (2016). A Note on the Intelligence of Computers. In: In Search of a Simple Introduction to Communication. Springer, Cham. https://doi.org/10.1007/978-3-319-25625-2_11
Print ISBN: 978-3-319-25623-8
Online ISBN: 978-3-319-25625-2