
The influence of highly cited papers on field normalised indicators

  • Mike Thelwall

Abstract

Field normalised average citation indicators are widely used to compare countries, universities and research groups. The most common variant, the Mean Normalised Citation Score (MNCS), is known to be sensitive to individual highly cited articles, but the extent to which this is true for a log-based alternative, the Mean Normalised Log Citation Score (MNLCS), is unknown. This article investigates country-level highly cited outliers for MNLCS and MNCS for all Scopus articles from 2013 and 2012. The results show that MNLCS is influenced by outliers, as measured by kurtosis, but at a much lower level than MNCS. The largest outliers were affected by the journal classifications, with the Science-Metrix scheme producing much weaker outliers than the internal Scopus scheme. The high Scopus outliers were mainly due to uncitable articles reducing the average in some humanities categories. Although outliers have a numerically small influence on the outcome for individual countries, changing the indicator or classification scheme influences the results enough to affect policy conclusions drawn from them. Future field normalised calculations should therefore explicitly address the influence of outliers in their methods and reporting.
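For readers unfamiliar with the two indicators, the sketch below illustrates the standard calculations in minimal Python: MNCS averages each article's citation count divided by the world mean for its field and year, MNLCS does the same after a ln(1 + c) transformation of citation counts, and kurtosis is used here as a rough gauge of how heavy-tailed (outlier-dominated) a set of normalised scores is. This is an illustrative outline, not the paper's code; the function names and the use of simple population (Fisher) kurtosis are assumptions.

```python
import math
from statistics import mean

def mncs(citations, field_means):
    """Mean Normalised Citation Score: average of each article's citation
    count divided by the world mean citation count for its field and year.
    (Illustrative sketch, not the paper's implementation.)"""
    return mean(c / m for c, m in zip(citations, field_means))

def mnlcs(citations, field_log_means):
    """Mean Normalised Log Citation Score: as MNCS, but each citation count
    is transformed with ln(1 + c) before dividing by the field mean of the
    same transformed counts, which damps individual highly cited articles."""
    return mean(math.log(1 + c) / m for c, m in zip(citations, field_log_means))

def excess_kurtosis(values):
    """Population excess kurtosis (Fisher definition); large positive values
    indicate heavy tails, i.e. a strong influence of outliers on the mean."""
    n = len(values)
    mu = mean(values)
    m2 = sum((v - mu) ** 2 for v in values) / n
    m4 = sum((v - mu) ** 4 for v in values) / n
    return m4 / (m2 ** 2) - 3
```

Because the log transformation compresses the upper tail, a single very highly cited article moves an MNLCS value far less than it moves the corresponding MNCS value, which is the contrast the article quantifies at the country level.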

Keywords

Highly cited papers · Citation outliers · Field normalised indicators · MNCS · MNLCS


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. University of Wolverhampton, Wolverhampton, UK
