Scientometrics, Volume 106, Issue 1, pp 51–65

Comprehensive indicator comparisons intelligible to non-experts: the case of two SNIP versions

  • Henk F. Moed
Abstract

A framework is proposed for comparing different types of bibliometric indicators, introducing the notion of an Indicator Comparison Report. Such a report provides a comprehensive overview of the main differences and similarities between indicators. It shows both the strong points and the limitations of each indicator under comparison, rather than over-promoting one indicator while ignoring the benefits of alternative constructs. Because it focuses on base notions, assumptions, and application contexts, it is intelligible to non-experts. As an illustration, a comparison report is presented for the original and the modified source normalized impact per paper (SNIP) indicator of journal citation impact.
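The core idea behind the original SNIP indicator is that a journal's citations-per-paper figure is corrected for its field's citation potential: the average length of the reference lists in the documents citing the journal. The sketch below illustrates that construction under simplifying assumptions; the function names and inputs are illustrative, and this is not the CWTS/Scopus implementation.

```python
def raw_impact_per_paper(citations, papers):
    """Citations received in the window divided by papers published (RIP)."""
    return citations / papers

def snip(citations, papers, citing_ref_counts, db_median_dcp):
    """Sketch of the original SNIP: RIP divided by the relative database
    citation potential (RDCP).

    citing_ref_counts: number of database-covered, window-eligible
    references in each document citing the journal; their mean is the
    journal's database citation potential (DCP).
    db_median_dcp: the median DCP over all journals in the database,
    used to normalize DCP into RDCP = DCP / median(DCP).
    """
    rip = raw_impact_per_paper(citations, papers)
    dcp = sum(citing_ref_counts) / len(citing_ref_counts)
    # Journals cited from fields with long reference lists (high DCP)
    # are deflated; fields with short lists are inflated.
    return rip * db_median_dcp / dcp
```

For instance, a journal with 100 citations to 50 papers has RIP = 2.0; if its citing documents carry twice as many references as the database median, its SNIP halves to 1.0, reflecting the higher citation potential of its field.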

Keywords

Bibliometric indicators · Indicator development · Journal impact factor · SNIP · Source normalization · Relative indicators · Subject field bias · Statistical validity

Acknowledgments

The author wishes to thank two anonymous referees for their valuable comments on an earlier version of this paper, and also the participants in the Workshop on Bibliometric Indicators, held at Leiden University and organized by the Center for Science and Technology Studies (CWTS) on September 2, 2014, especially Dr. Ludo Waltman, for their feedback.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2015

Authors and Affiliations

1. Department of Computer, Control and Management Engineering A. Ruberti, Sapienza University of Rome, Rome, Italy