Toward “lay” participation and co-operative learning in TA, technology policy, and construction of technologies

  • Imre Hronszky
Part of the Wissenschaftsethik und Technikfolgenbeurteilung book series (ETHICSSCI, volume 11)


The paper first outlines some features of the historical development of TA relevant to the problem of citizen, “lay”, participation. It then makes some remarks on the changing role of technology in society, followed by remarks on models of political democracy and the ways of organising cognition they favour. This section also introduces the claim for “technological citizenship” as a normative political consideration for appropriate participation in processes of the political regulation of technological development. The idea of “technological citizenship” is discussed in relation to the requirements that experts and participating citizens, as “lay” persons, must meet in order to make informed contributions to a participative Technology Assessment. The main part of the paper is a critical assessment of the cognitive potentials and limits of the “lay” public. The conclusion emphasises the need for, and the possibility of, developing a mutual learning process.

An introductory note on the “moral” intended by the paper is also in order. Some recent actors in the TA arena still continue to play down the role of either the experts or the “lay” public as essential cognitive actors in making TA. In doing so they support a move in a “downward spiral”, that is, a trajectory along which mutual “denigration” of the expert and the “lay” ruins any possibility of rational co-operation. The paper tries to show that both experts and the “lay” public are necessary actors if an “upward spiral”, a reinforcing co-operation and, with it, a higher level of rationality in TA issues, is to be realised.

It is increasingly acknowledged that “lay” persons, the citizens, have a role in assessing technologies. Arguing for the essential participation of “lay” persons, however, already divides the parties. Recently the debate has shifted to whether they should be involved only as political actors or also, in some respects, as essential actors in the cognitive process. My understanding argues for an essential cognitive role of the citizens in constructing TA advice, rather than only for their participation in policy decisions based upon TA results. This cognitive role should in fact be a double role. First, the descriptive, local knowledge the citizens have should be regularly utilised in making TA; this can, among other things, help the TA investigation proceed more adequately and effectively. Second, citizens may have a role in informing the essential value choices of the cognitive process, in framing it: what might be called taking part in “cognition policy” in some general sense. (This requirement for participation in framing expert cognition is distinct from the citizens’ democratic political right to evaluate expert advice and make the final decisions.) It is important to notice that neither role implies that expert and citizen knowledge should simply be ranked symmetrically, even when both are seen as essential. Together with their role in political choice concerning technological alternatives, these two basic types of activity, framing and complementing expert cognition, are the most important modes by which citizens inform, or should inform, technology policy.

Let me finish summarising the “moral” with two remarks. First, one has to differentiate between roles in principle and roles in the practical political arena, where meaningful simplifications are preconditions not only for effective working but for working at all. Hence, making TA effectively workable through delegation or simplification of tasks is as important as theoretical modelling of the TA process. Secondly, in any sort of political democracy, making TA, just like constructing technologies, should be seen as a kind of co-operative learning process. Two additional remarks are in order. This emphasis on co-operative learning does not claim an ideal of moving toward ever stronger consensus. Rather, it recognises a process whose dynamics sometimes lead to consensus and sometimes only to a clearly delimited dissensus. A permissive democracy, as a basic frame for unifying “Homo prospectus technologicus” with “Zoon politikon”, should allow the policy process periodically to reopen any consensus reached, in the interest of arriving at new ones through deliberative debate.



Copyright information

© Springer-Verlag Berlin Heidelberg 2001
