On the epistemological foundations of cognitive science

Chapter in Rethinking Cognitive Theory

Abstract

For those who believe that the best work in artificial-intelligence research promises to deliver a coherent and decidable ‘theory of mind’, the following remarks from a prominent practitioner may have a salutary effect:

[T]he problem is that a unique abstract characterization of man’s cognitive functioning does not exist. … The fact that it is not possible to uniquely determine cognitive structures and processes poses a clear limitation on our ability to understand the nature of human intelligence. I once thought it could mean unique identification of the structures and processes underlying cognitive behavior. Since that is not possible, I propose that we take ‘understanding the nature of human intelligence’ to mean possession of a theory that will enable us to improve human intelligence.1


Notes

  1. John R. Anderson, Language, Memory and Thought (New York: LEA/John Wiley, 1976) pp. 15–16.

  2. Jerry A. Fodor, The Language of Thought (New York: T. Crowell, 1975) p. 33. (Hereafter L of T).

  3. Aaron V. Cicourel, Cognitive Sociology (Harmondsworth: Penguin, 1973).

  4. Hilary Putnam, ‘Minds and Machines’ in S. Hook (ed.), Dimensions of Mind (London: Collier-Macmillan, 1960);

  5. ‘Brains and Behavior’ in R.J. Butler (ed.), Analytical Philosophy, vol. 2 (Oxford: Blackwell, 1965), and

  6. ‘The Mental Life of Some Machines’ in H.-N. Castaneda (ed.), Intentionality, Minds and Perception (Detroit: Wayne State University Press, 1966). For an excellent Wittgensteinian counterpoint to these arguments, although not addressed to Putnam’s work in particular, see J.F.M. Hunter, ‘Wittgenstein and Materialism’, Mind, vol. 86, no. 344, October 1977.

  7. J.A. Fodor, Psychological Explanation (New York: Random House, 1968) p. 45. It is worth pausing to consider, in this connection, Fodor’s following argument: ‘This is to say, in effect, that whether actions whose definition requires reference to the motives, reasons, or intentions of the agent can be causally explained depends upon whether physiologically sufficient conditions for having motives, reasons, and intentions can be specified.’ (Ibid., italics added). The set of states of affairs properly characterisable in terms of someone’s having any of an indefinite set of particular motives, reasons and intentions must itself be indefinite, and must contain reference to circumstantial matters quite distinct from the physiological: e.g., the tone of voice which contextually gives away an intention to do something, the diary entries detailing the preparations to poison someone which betray the writer’s motive, etc. The prospect of success for a regimentation of the antecedent physiological conditions of a human nervous system in respect of any of these occasions of ascription and/or avowal of motives, reasons or intentions is slim indeed. And why should such conditions have any explanatory force whatever, unless they are supposed to correlate with ‘having a reason for an action’, falsely construed as a mental state? I may be in any number of ‘states’ quite independently of fulfilling the ascription criteria-in-context for ‘having a reason’ to do something (e.g., when your action gives me a reason to get angry with you).

  8. Bernard Harrison, Meaning and Structure: An Essay in the Philosophy of Language (New York: Harper & Row, 1972), p. 124.

  9. P.F. Strawson, Individuals: An Essay in Descriptive Metaphysics (London: Methuen, 1959). Endorsement of this central thesis does not commit me to endorsing every step in Strawson’s defence of it.

  10. J.F.M. Hunter, ‘On How We Talk’ in his Essays After Wittgenstein (Toronto: University of Toronto Press, 1973), p. 168.

  11. Ibid.

  12. Ibid.

  13. Fodor often stresses his ‘literalness’; he is, after all, by his own description propounding a series of statements contributing to a scientific theory. His text features various stipulations about scientificity, many deriving from a mechanistic epistemology (in his favouring of deterministic theories of human action) and a stubbornly behaviouristic reading of Ryle and even Wittgenstein. (Perhaps the first of Fodor’s fellow cognitivists to point out the inadequacies in this treatment of Ryle, although little is said of the later Wittgenstein, was D.C. Dennett in his (partly critical) review of Fodor’s The Language of Thought. See Dennett, ‘A Cure for the Common Code?’ in his Brainstorms (Vermont: Bradford Books, 1978): ‘Ryle does not attempt, as Skinner does, to explicate mentalistic predicates “[just] in terms of stimulus and response variables” (L of T., p. 8). On the contrary, his explications are typically replete with intentionalist idioms.’ (Dennett, p. 95).) On Fodor’s ‘literalness’, see L of T., p. 76.

  14. Ibid., p. 29. Fodor, like Richard Gregory in his famous Eye and Brain (London: Weidenfeld & Nicolson, 1977 edn), esp. pp. 13–14, favours a neo-Helmholtzian view of perception in terms of ‘unconscious inferences’. It is against this sort of view of perception that Ryle developed, in various places, his attack on the ‘intellectualist legend’.

  15. Ibid., p. 28. For a useful discussion of this issue, see Richard Rorty, ‘Wittgensteinian Philosophy and Empirical Psychology’, Philosophical Studies, vol. 31, no. 3, 1977. Rorty’s critique of Fodor’s neo-Helmholtzian version of recognising the ‘same’ in the ‘different’ is cogent, even though he proceeds in this article to endorse Dodwell’s version of a computer-analogical psychofunctionalism. In his Philosophy and the Mirror of Nature, Rorty appears to endorse the general theoretical framework for cognitive studies which Fodor introduces in his Language of Thought, although the discussion is brief and raises no contrary views against these latest formulations by Fodor, which, I think, constitute elaborations and developments of the basic themes of his earlier Psychological Explanation, especially its support for neo-Helmholtzian and Chomskian themes in perception and language-use studies.

  16. Ibid., p. 74, n. 15 (Italics in original). Here, Fodor attempts to circumvent the objection that analytically specified rules are arrived at by independent methods of purposeful codification, bearing an utterly unknown resemblance to whatever rules a speaker might genuinely ‘know’, and thus could only be ‘in accord’ with what they say and do. (All of this derives from Fodor’s uncritical acceptance of Chomsky’s notion of ‘unconscious mental representations of rules of grammar’. For an excellent critique of this and related views of Chomsky, see David E. Cooper, Knowledge of Language (New York: Humanities Press, 1975).)

  17. Ibid., p. 73.

  18. Norman Malcolm, ‘Thinking’ in E. Leinfellner et al. (eds), Wittgenstein and his Impact on Contemporary Thought, Proceedings of the Second International Wittgenstein Symposium (Vienna: Hölder-Pichler-Tempsky, 1978), pp. 415–16.

  19. For a cognate, though slightly differing, version of the computational theory of action and cognition, see Zenon Pylyshyn’s essays, ‘Mind, Machines and Phenomenology’, Cognition, vol. 3, no. 1, 1974–5 and

  20. ‘Computation and Cognition: Issues in the Foundation of Cognitive Science’, The Behavioral and Brain Sciences, vol. 3, no. 1, March 1980. (A Special Issue on Foundations of Cognitive Science, with two other major papers from Chomsky and Fodor respectively).

  21. Michael Polanyi, Personal Knowledge (Phoenix: University of Chicago Press, 1958) ch. 4.

  22. G.P. Baker and P.M.S. Hacker, Wittgenstein: Understanding and Meaning, vol. 1 (Oxford: Basil Blackwell/Chicago: University of Chicago Press, 1980) p. 276. (See the entire discussion of ‘understanding new sentences’, pp. 274–9).

  23. See, inter alia, Margaret Boden’s Artificial Intelligence and Natural Man (New York: Basic Books, 1977), and

  24. Joseph Weizenbaum’s Computer Power and Human Reason (San Francisco: W.H. Freeman & Co., 1976).

  25. On his SHRDLU program, see Terry Winograd, Understanding Natural Language (New York: Academic Press, 1972).

  26. On his ACT program, see J.R. Anderson, Language, Memory and Thought (New York: LEA/Wiley, 1976). On his ELIZA program, see Weizenbaum. On his STUDENT program, see D.G. Bobrow, ‘Natural Language Input for a Computer Problem-Solving System’ in M. Minsky (ed.), Semantic Information Processing (Cambridge, Mass.: M.I.T. Press, 1968).

  27. Daniel C. Dennett, Brainstorms: Philosophical Essays on Mind and Psychology (Vermont: Bradford Books, 1978), p. 107.

  28. Ibid.

  29. Ibid., p. 105.

  30. Ibid., p. xix.

  31. Ibid., p. xvii.

  32. Ibid., p. xx. Dennett writes: ‘the attribute, being-in-pain, is not a well-behaved theoretical attribute’. But whoever thought that it was?

  33. Ibid.

  34. C.E. Shannon and W. Weaver, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1949).

  35. L. Brillouin, Science and Information Theory (New York: Academic Press, 1956). Cited in Churchland, ‘Mind-Brain Research’.

  36. L. Weiskrantz (from Proc. R. Soc. B., 171, 1968, p. 336) as cited in Ragnar Granit, The Purposive Brain (Cambridge, Mass: M.I.T. Press, 1977), p. 206.

  37. J.Y. Lettvin, H. Maturana, W.S. McCulloch and W.H. Pitts, ‘What the Frog’s Eye Tells the Frog’s Brain’, Proceedings, IRE, vol. 47, 1959.

  38. Michael A. Arbib, The Metaphorical Brain (New York: John Wiley, 1972), p. 45.

  39. J. Coulter, ‘Theoretical Problems of Cognitive Science’, Inquiry, vol. 25, 1982. Dennett argues for the logicality of conceiving of mental images as information-bearing neural structures. He claims that when people have what to them are ‘mental images’, they are merely to be described as believing that they have such things. Calling people’s avowals and subsequent other conduct following the having of a mental image ‘B-manifolds’, Dennett argues that if these belief-manifolds ‘turn out to be caused by things in the brain lacking the peculiar features of images, then the scientific iconophobe will turn out to be right, and we will have to say that that person’s B-manifolds are composed of (largely) false beliefs, what one might call systematically illusory beliefs.’ (Dennett, Brainstorms, p. 187). I have argued (‘Theoretical Problems of Cognitive Science’) that mental-image avowals are not belief-avowals in logical grammar, and that Dennett’s eliminative-materialist gambit of so reconceiving them is ab initio ungrounded and arbitrary. Moreover, it subserves an incoherent account in which brain events/functions, which may well enable us to have mental images, are identified with such images. This is as mistaken as proposing that, because it is our vocal cords which enable us to produce utterances when we use them, our vocal cords are therefore identical to our utterances. On what grounds could Dennett justify a conceptual move from characterising something as a ‘cause’ of X to characterising that thing (thereby) as identical to X?

  40. John R. Searle, ‘Minds, Brains and Programs’, The Behavioral and Brain Sciences, vol. 3, no. 3, September 1980, pp. 417–57 (including 27 brief commentaries, and the author’s response, ‘Intrinsic Intentionality’). Searle’s argument appears to me to rest upon the claim that experiential talk about persons (by persons) is based upon an ‘emergent’ property of their physico-chemical make-up, viz., their capacity for having experiences. Their experiential life is not explicable even if we assume their conduct and ‘cognition’ to instantiate a program, since a distinction may be drawn between manipulating uninterpreted formal symbols according to a set of directions converted into a set of electronic (or mechanical) determinants (as in artificial-intelligence computer simulations of human speech, and in machine-translation computers), on the one hand, and attaching a semantic content to the symbols (e.g., being able to visualise for oneself the referents of some of them, being able to use them in new contexts and utterances quite spontaneously, etc.), on the other. Simulations of experiencers do not themselves experience. I should add that I think some elements of Searle’s argument are very convincing, but I have reservations about his assertion that it makes perfect sense to say ‘my brain understands English’ (ibid., p. 451); for some arguments about brains’ ‘recognising’ and ‘thinking’ or ‘having thoughts’, see my ‘The Brain as Agent’, Human Studies, vol. 2, no. 4, October 1979.

  41. Lynne Rudder Baker, ‘Why Computers Can’t Act’, American Philosophical Quarterly, vol. 18, no. 2, April 1981. She writes: ‘Thus, a crucial difference between machines and self-conscious beings is this: for self-conscious beings there is an irreducible distinction between genuine self-consciousness and consciousness of someone-who-is-in-fact-myself; for machines, on the other hand, there is no corresponding distinction between, say, genuine self-scanning and scanning a unit-which-is-in-fact-itself — just as in the case of self-defrosting refrigerators, there is no distinction between genuine self-defrosting and defrosting a refrigerator-which-is-in-fact-itself.’ (p. 162). I find the arguments in this paper utterly ingenious and compelling.

  42. For some details, see Margaret Boden, Artificial Intelligence and Natural Man (New York: Basic Books, 1977), pp. 434–44.

  43. J.F.M. Hunter, ‘“Forms of Life” in Wittgenstein’s Philosophical Investigations’ in E.D. Klemke (ed.), Essays on Wittgenstein (Urbana: University of Illinois Press, 1971), p. 285.

  44. For example, Ulric Neisser, in his Cognitive Psychology (New York: Appleton-Century-Crofts, 1967).

  45. Ludwig Wittgenstein, Philosophical Investigations, trans. G.E.M. Anscombe (Oxford: Basil Blackwell, 1968), para. 377.

Copyright information

© 1983 Jeff Coulter

Coulter, J. (1983). On the epistemological foundations of cognitive science. In: Rethinking Cognitive Theory. Palgrave Macmillan, London. https://doi.org/10.1007/978-1-349-06706-0_2
