
Descriptive Versus Evaluative Bibliometrics

Monitoring and Assessing of National R&D Systems
  • Thed van Leeuwen

Abstract

This paper covers the differences between two bibliometric approaches, labelled ‘descriptive’ versus ‘evaluative’, or top-down versus bottom-up. The most important difference between the two lies in the validity of the underlying research output. In a top-down approach, which has a descriptive character, publications are collected according to general characteristics such as country names or fields; as a consequence, findings from such studies have only limited meaning for actual research assessment. In a bottom-up approach, by contrast, publications are collected from the individual oeuvres of scientists and verified by the researchers involved. This procedure contributes significantly to the validity of the publication material, so that research assessment procedures can be based on the results of this type of bibliometric analysis. A strong focus of the paper is on the actual application of bibliometric analysis within research assessment procedures, in particular in the UK and the Netherlands.
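The contrast between the two selection modes can be made concrete with a minimal sketch. The data structure and field names below are hypothetical and are not taken from the paper; they only illustrate the difference the abstract describes: a top-down selection filters on general publication characteristics, whereas a bottom-up selection starts from a verified list of individual researchers.

```python
# Illustrative sketch (hypothetical data): top-down vs bottom-up publication selection.
publications = [
    {"title": "Paper A", "country": "NL", "field": "Chemistry", "author_ids": [101, 102]},
    {"title": "Paper B", "country": "UK", "field": "Physics",   "author_ids": [103]},
    {"title": "Paper C", "country": "NL", "field": "Physics",   "author_ids": [101]},
]

# Top-down (descriptive): select by general characteristics such as country or field.
# There is no check that these papers really belong to the evaluated unit.
top_down = [p for p in publications if p["country"] == "NL" and p["field"] == "Physics"]

# Bottom-up (evaluative): start from the oeuvres of individual researchers,
# represented here as a set of author identifiers verified by the researchers themselves.
verified_oeuvre_ids = {101, 102}  # hypothetical verified list
bottom_up = [p for p in publications if verified_oeuvre_ids & set(p["author_ids"])]

print(len(top_down), len(bottom_up))
```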

Keywords

Research Output · Citation Analysis · Bibliometric Analysis · Bibliometric Indicator · Research Assessment



Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Thed van Leeuwen
    1. Centre for Science and Technology Studies (CWTS), Leiden University, the Netherlands
