
Reputation and Competence in Publicly Funded Science: Estimating the Effects on Research Group Productivity

  • Ashish Arora
  • Paul A. David
  • Alfonso Gambardella
Chapter

Abstract

This paper estimates the “production function” for scientific research publications in the field of biotechnology. It utilises an exceptionally rich and comprehensive data set pertaining to the universe of research groups that applied to a 1989–1993 research programme in biotechnology and bio-instrumentation, sponsored by the Italian National Research Council, CNR. A structural model of the resource allocation process in scientific research guides the selection of instruments in the econometric analysis, and controls for selectivity bias effects on estimates based on the performance of funded research units. The average elasticity of research output with respect to the research budget is estimated to be 0.6; but for a small fraction of groups led by highly prestigious PIs, this elasticity approaches 1. These estimates imply, conditional on the distribution of observed productivity, that a more unequal distribution of research funds would increase research output in the short run. Past research publication performance is found to have an important effect on expected levels of grant funding, and hence on the unit’s current productivity in terms of (quality-adjusted) publications. The results show that the productivity of aggregate resource expenditures supporting scientific research is critically dependent on the institutional mechanisms and criteria employed in the allocation of such resources.
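A minimal numerical sketch, assuming a stylised Cobb-Douglas form Y = A·B^β (an expository assumption, not the paper's actual specification or estimates), illustrates the reallocation logic: with elasticities below one for most groups but close to one for the most productive groups, shifting funds toward the latter raises total short-run output.

```python
# Illustrative sketch only: a stylised Cobb-Douglas publication "production
# function" with budget elasticity beta. The functional form, productivity
# constants, and budget figures are assumptions for exposition, not the
# authors' estimated model.

def expected_publications(budget, productivity_a, beta):
    """Expected (quality-adjusted) publications: A * budget**beta."""
    return productivity_a * budget ** beta

# Two hypothetical research groups sharing a fixed total budget of 100.
# Group 1: ordinary PI, elasticity ~0.6; Group 2: highly prestigious PI,
# elasticity close to 1 (as the abstract reports for a small fraction of groups).
total = 100.0
for share_to_group2 in (0.5, 0.7, 0.9):
    b2 = total * share_to_group2
    b1 = total - b2
    y = (expected_publications(b1, 1.0, 0.6)
         + expected_publications(b2, 1.0, 0.95))
    print(f"share to high-elasticity group = {share_to_group2:.1f} -> output ~ {y:.1f}")

# Because returns are diminishing (beta < 1) for most groups but nearly constant
# for the most productive ones, concentrating the budget on the high-elasticity
# group increases total short-run output in this toy calculation.
```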

Keywords

Production function · Research unit · Research output · Marginal product · Past performance



Copyright information

© Springer Science+Business Media Dordrecht 2000

Authors and Affiliations

  • Ashish Arora (1)
  • Paul A. David (2)
  • Alfonso Gambardella (3)
  1. Heinz School, Carnegie Mellon University, USA
  2. Oxford University, UK
  3. University of Urbino, Italy
