Scientometrics, Volume 115, Issue 2, pp 1107–1113

Multiple versions of the h-index: cautionary use for formal academic purposes

Article

Abstract

The h-index, or Hirsch index, named after Jorge E. Hirsch, is one of the few author-based metrics currently available that offers a perspective on both the productivity and the citation impact of a scientist, researcher, or academic. Four tools are most commonly used to calculate the h-index, each relying on a separate database: Scopus, Web of Science, Google Scholar, and ResearchGate. Using the h-indices of the authors of this paper derived from these four sources, it is clear that scores vary widely and that it remains unclear which of these sources is reliable or accurate for any purpose. As the use and application of author-based metrics increases, including for official academic purposes, it is becoming increasingly important to know which source of the h-index is most accurate, and thus valid. Although this is not a review of the h-index, some perspectives on the h-index-related literature are provided to place this case study within the wider context of the weaknesses and criticisms of using the h-index as a metric to evaluate scientific output.
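For readers unfamiliar with how the metric is derived, the following is a minimal sketch of Hirsch's definition: a researcher has index h if h of their papers have each been cited at least h times. The function name and the sample citation counts are illustrative assumptions, not data from this paper; they merely show how differing citation counts reported by different databases can yield different h-index values.

```python
def h_index(citation_counts):
    """Compute the h-index from a list of per-paper citation counts.

    A researcher has index h if h of their papers have at least h
    citations each (Hirsch, 2005).
    """
    # Sort citation counts from highest to lowest.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        # The h-index is the largest rank at which the paper at that
        # rank still has at least `rank` citations.
        if citations >= rank:
            h = rank
        else:
            break
    return h


# Hypothetical example: the same author profile indexed by different
# databases may report different citation counts per paper.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([12, 9, 6, 4, 2]))  # -> 4
print(h_index([6, 5, 3, 1]))      # -> 3
```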

Keywords

Accuracy · Author-based metrics · Credibility · Databases · Google Scholar · Scopus · Web of Science

Abbreviations

ABM: Author-based metric
GS: Google Scholar
h-index: Hirsch index
SRA: Scientist, researcher, or academic
WoS: Web of Science

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2018

Authors and Affiliations

  1. Miki-cho, Takamatsu, Japan
  2. Research Institute of Nyíregyháza, IAREF, University of Debrecen, Nyíregyháza, Hungary
