Abstract
What are the prospects for the integration of the psychology and the sociology of science? Leaving aside (for now) the question of whether or not such integration is desirable, this paper asks whether or not these disciplines are compatible. I suggest the answer depends critically on what we take to be the strategic value of the sociology of science and, in particular, on our attitude to the concepts of representation, cognition and self.
Notes
An earlier version of this paper was presented at the University of Colorado at Boulder, 24 November 1987. My thanks to participants for their comments then and since.
Elsewhere I have examined the effects of the observation that the ‘hardness’ of Collins’ (1981) hardest possible case is itself a local accomplishment/construction (Woolgar, 1988d).
This notion of representation is “the original sin of language, that separation of speech and world we know as the disjunction of words and things” (Tyler, 1987).
cf Woolgar (1988a: Chap. 2).
Notably, this includes those representations which sustain its own discipline. See the recent work on reflexivity by Ashmore (1985), Mulkay (1985) and Woolgar (1988c).
For other authoritative statements on the matter, some of which include their own self-deconstruction, see Woolgar and Ashmore (1985), Mulkay (1985) and Woolgar (1988c).
Indeed, there is a good argument that as soon as you recognise that everything is social, the term becomes redundant as a discriminator. To say something is social may have a useful antagonistic function, but it also implies that it is possible to have something which is non-social. This leads to the suggestion that the use of the terms ‘social’ and ‘cognitive’ should simply be banned (Latour and Woolgar, 1986).
The Maaf is a device for injecting ethnographic distance, a stranger’s perspective designed to highlight what we otherwise take for granted about the world we both live in and try to investigate. It also permits one to put one’s own ignorance to good use by beginning at the beginning: my argument is very much that of a distanced newcomer to the phenomenon of cognition.
Maaf’s (initial) concentration on Western cosmology prevented him (as it did most Westerners) from discovering major divergences from this orthodoxy on the part of certain obscure and ‘primitive’ non-Western tribes. For example, some Western anthropologists knew that the Orikuya construe mind as located in their left elbows, that the Preenash think of their essential selves (samosas) as residing in the nearest akhran tree, and so on.
Feuerbach has been credited with noting that man creates technology in his own image. However, the point here is not simply that the nature of technology is driven by conceptions of the human state; rather, that in the development of (and discussion about) technology, prevailing conceptions of the human state and technological capacity are being renegotiated. This approach has some affinities with Hughes (1987) when he refers to “institutional structures that nurture and mirror the characteristics of the technical core of the (technological) system”. If we allow that these “institutional structures” embody competing notions about man’s uniqueness, his intellectual capacities and so on, it is perhaps not surprising that AI is so controversial. The exact technical characteristics of this particular technology are much disputed precisely because institutionalised notions about the character of man are at stake. The present approach also has some resonances with Callon’s (1987) argument that technology is a “tool for sociological analysis”. A close examination of discussions of the character of technology — what it is, what it can and can not do — shows that discussants themselves perform “societal analyses”. In the present case, “analyses” performed by discussants of AI constitute claims about man’s basic character, his abilities and capacities. By following protagonists in the AI controversy, we follow the construction and reconstruction of different models of man’s basic attributes.
The strategy of invoking a “reserve” category of attributes has in general been relatively unsuccessful in averting fears about the erosion of man’s uniqueness by the creation of machines. Attempts to establish alternative criteria of uniqueness have not easily overcome deeply held preconceptions about the special character of man. Certainly, this kind of argument did not prevent considerable alarm over the ever-increasing capacity of devices to mimic man’s mechanical abilities — consider the climate in which Butler published his marvellous parody — nor did it prevent speculation and considerable controversy about the fundamentally mechanical character of all human action — for example, the discussions of La Mettrie’s (1748) work by Needham (1928), Rignano (1926), Rosenfeld (1941) and Vartanian (1960).
A main aim of AI is the design and construction of machines to perform tasks assumed to be associated with some cognitive (sometimes “intellectual”) ability. This is not, however, a uniform position within AI. Some AI researchers flatly declare their lack of interest in “cognitive abilities”. For them, the machine’s performance of tasks is the sole technical goal of their research, independent of whether or not such task performance would require intelligence in a human. Nonetheless, others, both practitioners within AI and spokesmen on its behalf (philosophers, marketing entrepreneurs), explicitly see the raison d’être of AI as the attempt to design machine activity which mimics what they construe to be “cognitive” behaviour. This latter position is referred to as the “strong” version of AI (Searle, 1980).
This is, of course, to reformulate Butler’s language in terms appropriate to our current (sociological) interests.
The fact that “awkwardnesses fail to materialise” suggests an interesting reflexive tie between categories of “machine” and “non-machine predicate”. For example, in saying that a space shuttle is behaving perfectly, we provide for a hearing of the something-more-than-a-mere-machine quality of the space shuttle; in this usage, its “behaviour” can be heard as connoting its sophisticatedly technological (intelligent?) qualities: much more than a lump of metal.
More generally, this suggests the intriguing idea that we might try to turn any philosophical assumptions into a technology as a way of revealing their limits. If we need to assess the value of a philosophical argument, the acid test will be whether or not a technology founded on its basic assumptions can be demonstrably successful (cf Dreyfus, 1979). Unfortunately, the flaw in this suggestion is that “acid” or “critical” tests are rarely so straightforward: the sociology of scientific knowledge shows us the sense in which the outcomes of tests and demonstrations are always contingent (cf Woolgar, 1986).
References
M. Ashmore (1985) A Question of Reflexivity: Wrighting the Sociology of Scientific Knowledge, D.Phil. dissertation, University of York. To be published by University of Chicago Press.
G. Bachelard (1953) Le matérialisme rationnel (Paris: P.U.F.).
W. Bijker, T. Pinch and T. Hughes (eds.) (1987) The Social Construction of Technological Systems: new directions in the sociology and history of technology (Cambridge, Mass.: MIT Press).
D. Bloor (1976) Knowledge and Social Imagery (London: Routledge and Kegan Paul).
S. Butler (1970, originally 1872) Erewhon (Harmondsworth: Penguin).
M. Callon (1987) ‘Technology as a Tool for Sociological Analysis’ in Bijker et al. (1987), pp. 83–103.
H. Collins (1981) ‘Introduction’ to Knowledge and Controversy: studies in modern natural science. Special issue of Social Studies of Science 11 (1).
J. Coulter (1979) The Social Construction of Mind (London: Macmillan).
J. Coulter (1983) Rethinking Cognitive Theory (London: Macmillan).
H. Dreyfus (1965) Alchemy and Artificial Intelligence (Santa Monica, California: Rand Corporation).
H. Dreyfus (1979), What Computers Can’t Do: the limits of artificial intelligence (New York: Basic Books, 2nd Edn.).
H. Dreyfus and S. E. Dreyfus (1986) Mind Over Machine: the power of human intuition and expertise in the era of the computer (Oxford: Blackwell).
E. A. Feigenbaum and P. McCorduck (1983) The Fifth Generation (London: Addison-Wesley).
S. Fuller, M. de Mey, T. Shinn and S. Woolgar (eds.) The Cognitive Turn: Sociological and Psychological Perspectives on Science (Dordrecht: D. Reidel, 1989).
T. Hughes (1987) ‘The Evolution of Large Scale Technical Systems’ in Bijker et al. (1987), pp. 51–82.
K. Knorr-Cetina (1981) The Manufacture of Knowledge: an essay on the constructivist and contextual nature of science (Oxford: Pergamon).
B. Latour and S. Woolgar (1986) Laboratory Life: the construction of scientific facts (2nd edition. Princeton: Princeton University Press).
M. Lynch and S. Woolgar (eds.) (1988) Representation in Scientific Practice. Special issue of Human Studies vol. 11 no. 2/3.
J. O. de La Mettrie (1748) L’Homme Machine (Leiden).
M. J. Mulkay (1985) The Word and The World: Explorations in the Form of Sociological Analysis (London: George Allen and Unwin).
P. McCorduck (1979) Machines Who Think (San Francisco: Freeman).
D. Michie and R. Johnson (1985) The Creative Computer: machine intelligence and human knowledge (Harmondsworth: Penguin).
J. Needham (1928) Man a Machine, in answer to a romantical and unscientific treatise written by Sig. Eugenio Rignano and entitled “Man Not A Machine” (New York).
J. Potter (1988) ‘What is Reflexive about Discourse Analysis? The case of reading readings’ in Woolgar (ed.) (1988c), pp. 37–54.
E. Rignano (1926) Man Not a Machine: a study of the finalistic aspects of life (London).
L. C. Rosenfeld (1941) From Beast-Machine To Man-Machine: Animal Soul in French Letters from Descartes to La Mettrie (New York).
J. Searle (1980) ‘Minds, Brains and Programs’ The Behavioral and Brain Sciences vol. 3, pp. 417–457.
L. A. Suchman (1987) Plans and Situated Actions: the problem of human-machine communication (Cambridge: Cambridge University Press).
S. Turkle (1984) The Second Self: Computers and the Human Spirit (New York: Simon and Schuster).
S. Turner (this volume) ‘Tacit Knowledge and the Project of Computer Modelling Cognitive Processes in Science’ in S. Fuller et al. (eds.).
S. A. Tyler (1987) The Unspeakable: Discourse, Dialogue and Rhetoric in the Postmodern World (London: University of Wisconsin Press).
A. Vartanian (1960) La Mettrie’s L’Homme Machine: a study in the origins of an idea (Princeton, New Jersey: Princeton University Press).
J. Weizenbaum (1976) Computer Power and Human Reason: from judgement to calculation (San Francisco: W. H. Freeman).
S. Woolgar (1985) ‘Why Not a Sociology of Machines? The case of sociology and artificial intelligence’ Sociology vol. 19, November.
S. Woolgar (1986) ‘The Chips Are Now Down?’ Nature vol. 324, pp. 182–3.
S. Woolgar (1987) ‘Reconstructing Man and Machine: a note on sociological critiques of cognitivism’ in Bijker et al. (eds.) (1987), pp. 311–328.
S. Woolgar (1988a) Science: The Very Idea (London: Tavistock/Horwood).
S. Woolgar (1988b) ‘Time and Documents in Researcher Interaction: some ways of making out what is happening in experimental science’ in Lynch and Woolgar (eds.) (1988), pp. 171–200.
S. Woolgar (ed) (1988c) Knowledge and Reflexivity: new frontiers in the sociology of knowledge (London: Sage).
S. Woolgar (1988d) ‘The turn to technology in the social studies of science’ Working paper, presented to Edinburgh University Research Centre for the Social Sciences, 18 Jan 1988.
S. Woolgar (1989) ‘Representation, Cognition and Self: what hope for an integration of psychology and sociology?’ in S. Fuller et al. (eds.) (1989).
S. Woolgar and M. Ashmore (1988) ‘The Next Step: an introduction to the reflexive project’ in Woolgar (ed.) (1988c), pp. 1–13.
© 1989 Springer Science+Business Media Dordrecht
Woolgar, S. (1989). Representation, Cognition and Self: What Hope for an Integration of Psychology and Sociology?. In: Fuller, S., de Mey, M., Shinn, T., Woolgar, S. (eds) The Cognitive Turn. Sociology of the Sciences a Yearbook, vol 13. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-7825-7_11
DOI: https://doi.org/10.1007/978-94-015-7825-7_11
Publisher Name: Springer, Dordrecht
Print ISBN: 978-90-481-4049-7
Online ISBN: 978-94-015-7825-7