
Representation, Cognition and Self: What Hope for an Integration of Psychology and Sociology?

  • Chapter

Part of the book series: Sociology of the Sciences A Yearbook (SOSC, volume 13)

Abstract

What are the prospects for the integration of the psychology and the sociology of science? Leaving aside (for now) the question of whether or not such integration is desirable, this paper asks whether or not these disciplines are compatible. I suggest the answer depends critically on what we take to be the strategic value of the sociology of science and, in particular, on our attitude to the concepts of representation, cognition and self.



Notes

  1. An earlier version of this paper was presented at the University of Colorado at Boulder, 24th November 1987. My thanks for participants’ comments then and since.


  2. Elsewhere I have examined the effects of the observation that the ‘hardness’ of Collins’ (1981) hardest possible case is itself a local accomplishment/construction (Woolgar, 1988d).


  3. This notion of representation is “the original sin of language, that separation of speech and world we know as the disjunction of words and things” (Tyler, 1987).


  4. cf Woolgar (1988a: Chap. 2).


  5. Notably, including those representations which sustain its own discipline. See the recent work on reflexivity by Ashmore (1985), Mulkay (1985) and Woolgar (1988c).


  6. For other authoritative statements on the matter, some of which include their own self-deconstruction, see Woolgar and Ashmore (1985), Mulkay (1985) and Woolgar (1988c).


  7. Indeed, there is a good argument that as soon as you recognise that everything is social, the term becomes redundant as a discriminator. To say something is social may have a useful antagonistic function, but it also implies that it is possible to have something which is non-social. This leads to the suggestion that the use of the terms ‘social’ and ‘cognitive’ should simply be banned (Latour and Woolgar, 1986).


  8. The Maaf is a device for injecting ethnographic distance, a stranger’s perspective designed to highlight what we otherwise take for granted about the world we both live in and try to investigate. It also permits one to put one’s own ignorance to good use by beginning at the beginning: my argument is very much that of a distanced newcomer to the phenomenon of cognition.


  9. Maaf’s (initial) concentration on Western cosmology prevented him (as it did most Westerners) from discovering major divergences from this orthodoxy on the part of certain obscure and ‘primitive’ non-Western tribes. For example, some Western anthropologists knew that the Orikuya construe mind as located in their left elbows, that the Preenash think of their essential selves (samos as) as residing in the nearest akhran tree, and so on.


  10. Feuerbach has been credited with noting that man creates technology in his own image. However, the point here is not simply that the nature of technology is driven by conceptions of the human state; rather, that in the development of (and discussion about) technology, prevailing conceptions of the human state and technological capacity are being renegotiated. This approach has some affinities with Hughes (1987) when he refers to “institutional structures that nurture and mirror the characteristics of the technical core of the (technological) system”. If we allow that these “institutional structures” embody competing notions about man’s uniqueness, his intellectual capacities and so on, it is perhaps not surprising that AI is so controversial. The exact technical characteristics of this particular technology are much disputed precisely because institutionalised notions about the character of man are at stake. The present approach also has some resonances with Callon’s (1987) argument that technology is a “tool for sociological analysis”. A close examination of discussions of the character of technology — what it is, what it can and can not do — shows that discussants themselves perform “societal analyses”. In the present case, “analyses” performed by discussants of AI constitute claims about man’s basic character, his abilities and capacities. By following protagonists in the AI controversy, we follow the construction and reconstruction of different models of man’s basic attributes.


  11. The strategy of invoking a “reserve” category of attributes has in general been relatively unsuccessful in averting fears about the erosion of man’s uniqueness by the creation of machines. Attempts to establish alternative criteria of uniqueness have not easily overcome deeply held preconceptions about the special character of man. Certainly, this kind of argument did not prevent considerable alarm over the ever-increasing capacity of devices to mimic man’s mechanical abilities — consider the climate in which Butler published his marvellous parody — nor did it prevent speculation and considerable controversy about the fundamentally mechanical character of all human action — for example, the discussions of La Mettrie’s (1748) work by Needham (1928), Rignano (1926), Rosenfeld (1941) and Vartanian (1960).


  12. A main aim of AI is the design and construction of machines to perform tasks assumed to be associated with some cognitive (sometimes “intellectual”) ability. This is not, however, a uniform position within AI. Some AI researchers flatly declare their lack of interest in “cognitive abilities”. For them, the machine’s performance of tasks is the sole technical goal of their research, independent of whether or not such task performance would require intelligence in a human. Nonetheless, others, both practitioners within AI and spokesmen on its behalf (philosophers, marketing entrepreneurs), explicitly see the raison d’être of AI as the attempt to design machine activity which mimics what they construe to be “cognitive” behaviour. This latter position is referred to as the “strong” version of AI (Searle, 1980).


  13. This is, of course, to reformulate Butler’s language in terms appropriate to our current (sociological) interests.


  14. The fact that “awkwardnesses fail to materialise” suggests an interesting reflexive tie between categories of “machine” and “non-machine predicate”. For example, in saying that a space shuttle is behaving perfectly, we provide for a hearing of the something-more-than-a-mere-machine quality of the space shuttle; in this usage, its “behaviour” can be heard as connoting its sophisticatedly technological (intelligent?) qualities: much more than a lump of metal.


  15. More generally, this suggests the intriguing idea that we might try to turn any philosophical assumptions into a technology as a way of revealing their limits. If we need to assess the value of a philosophical argument, the acid test will be whether or not a technology founded on its basic assumptions can be demonstrably successful (cf Dreyfus, 1979). Unfortunately, the flaw in this suggestion is that “acid” or “critical” tests are rarely so straightforward: the sociology of scientific knowledge shows us the sense in which the outcomes of tests and demonstrations are always contingent and temporary (cf Woolgar, 1986).


References

  • M. Ashmore (1985) A Question of Reflexivity: Wrighting the Sociology of Scientific Knowledge. D.Phil. dissertation, University of York. To be published by University of Chicago Press.


  • G. Bachelard (1953) Le matérialisme rationnel (Paris: P.U.F.).


  • W. Bijker, T. Pinch and T. Hughes (eds.), The Social Construction of Technological Systems: new directions in the sociology and history of technology (Cambridge, Mass.: M.I.T. Press, 1987)


  • D. Bloor (1976) Knowledge and Social Imagery (London: Routledge and Kegan Paul)


  • S. Butler (1970, originally 1872) Erewhon (Harmondsworth: Penguin).


  • M. Callon (1987), ‘Technology as a Tool for Sociological Analysis’ in Bijker et al. (1987), pp. 83-103.


  • H. Collins (1981) ‘Introduction’ to Knowledge and Controversy: studies in modern natural science. Special issue of Social Studies of Science 11 (1).


  • J. Coulter (1979) The Social Construction of Mind (London: Macmillan).


  • J. Coulter (1983) Rethinking Cognitive Theory (London: Macmillan).


  • H. Dreyfus, Alchemy and Artificial Intelligence (Santa Monica, California: Rand Corporation, 1965).


  • H. Dreyfus (1979), What Computers Can’t Do: the limits of artificial intelligence (New York: Basic Books, 2nd Edn.).


  • H. Dreyfus and S. E. Dreyfus, Mind Over Machine: The power of human intuition and expertise in the era of the computer (Oxford: Blackwell, 1986).


  • E. A. Feigenbaum and P. McCorduck, The Fifth Generation (London: Addison-Wesley, 1983).


  • S. Fuller, M. de Mey, T. Shinn and S. Woolgar (eds.) The Cognitive Turn: Sociological and Psychological Perspectives on Science (Dordrecht: D. Reidel, 1989).


  • T. Hughes (1987), ‘The Evolution of Large Scale Technical Systems’ in Bijker et al. (1987), pp. 51-82.


  • K. Knorr-Cetina (1981) The Manufacture of Knowledge: an essay on the constructivist and contextual nature of science (Oxford: Pergamon).


  • B. Latour and S. Woolgar (1986) Laboratory Life: the construction of scientific facts (2nd edition. Princeton: Princeton University Press).


  • M. Lynch and S. Woolgar (eds.) (1988) Representation in Scientific Practice. Special issue of Human Studies vol. 11 no. 2/3.


  • La Mettrie (1748) L’homme Machine (Leiden).


  • M. J. Mulkay (1985) The Word and The World: Explorations in the Form of Sociological Analysis (London: George Allen and Unwin).


  • P. McCorduck (1979) Machines Who Think (San Francisco: Freeman)


  • D. Michie and R. Johnson, The Creative Computer: machine intelligence and human knowledge (Harmondsworth: Penguin, 1985).


  • J. Needham (1928) Man a Machine, in answer to a romantical and unscientific treatise written by Sig. Eugenio Rignano and entitled “Man Not A Machine” (New York).


  • J. Potter (1988) ‘What is Reflexive about Discourse Analysis? The case of reading readings’ in Woolgar (ed) (1988), p. 37-54.


  • E. Rignano (1926) Man Not a Machine: a study of the finalistic aspects of life (London).


  • L. C. Rosenfeld (1941) From Beast-Machine To Man-Machine: Animal Soul in French Letters from Descartes to La Mettrie (New York).


  • J. Searle (1980) ‘Minds, Brains and Programs’ The Behavioral and Brain Sciences vol. 3 p. 417–457.


  • L. A. Suchman (1987) Plans and Situated Actions: the problem of human-machine communication (Cambridge: Cambridge University Press).


  • S. Turkle (1984) The Second Self: Computers and the Human Spirit (New York: Simon and Schuster).


  • S. Turner (this volume) ‘Tacit Knowledge and the Project of Computer Modelling Cognitive Processes in Science’ in S. Fuller et al (eds.) p.


  • S. A. Tyler (1987) The Unspeakable: Discourse, Dialogue and Rhetoric in the Postmodern World (London: University of Wisconsin Press).


  • A. Vartanian (1960) La Mettrie’s L’Homme Machine: A study in the origins of an idea (Princeton, New Jersey: Princeton University Press).


  • J. Weizenbaum, Computer Power and Human Reason: From Judgement To Calculation (W. H. Freeman, San Francisco, 1976)


  • S. Woolgar (1985), ‘Why Not a Sociology of Machines? The case of sociology and artificial intelligence’ Sociology vol. 19 (November 1985).


  • S. Woolgar (1986), ‘The Chips Are Now Down?’ Nature vol. 324, pp. 182–3.


  • S. Woolgar (1987), ‘Reconstructing Man and Machine: a note on sociological critiques of cognitivism’ in Bijker et al (eds) (1987), pp. 311-328.


  • S. Woolgar (1988a) Science: The Very Idea (London: Tavistock/Horwood, 1988).


  • S. Woolgar (1988b) ‘Time and Documents in Researcher Interaction: some ways of making out what is happening in experimental science’ in Lynch and Woolgar (eds) (1988), pp. 171-200.


  • S. Woolgar (ed) (1988c) Knowledge and Reflexivity: new frontiers in the sociology of knowledge (London: Sage).


  • S. Woolgar (1988d) ‘The turn to technology in the social studies of science’ Working paper, presented to Edinburgh University Research Centre for the Social Sciences, 18 Jan 1988.


  • S. Woolgar (1989), ‘Representation, cognition, self: what hope for an integration of psychology and sociology’ in S. Fuller et al (eds.) (1989) p.


  • S. Woolgar and M. Ashmore (1988) ‘The Next Step: an introduction to the reflexive project’ in Woolgar (ed) (1988c), pp. 1-13.



Copyright information

© 1989 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Woolgar, S. (1989). Representation, Cognition and Self: What Hope for an Integration of Psychology and Sociology? In: Fuller, S., de Mey, M., Shinn, T., Woolgar, S. (eds) The Cognitive Turn. Sociology of the Sciences A Yearbook, vol 13. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-7825-7_11


  • DOI: https://doi.org/10.1007/978-94-015-7825-7_11

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-4049-7

  • Online ISBN: 978-94-015-7825-7
