The h-index was proposed to assess a researcher’s performance with a single number. However, by reducing performance to this number alone, we lose significant information about the distribution of citations over the articles in an author’s publication list. In this article, we study an author’s citation curve and define two new areas related to this curve. We call these “penalty areas”, since the larger they are, the more an author’s performance is penalized. We exploit these areas to establish two new indices, the Perfectionism Index (PI) and the eXtreme Perfectionism Index (XPI), which aim to categorize researchers into two distinct classes: “influentials”, who produce articles that are (almost all) of high impact, and “mass producers”, who produce many articles with moderate or no impact at all. Using data from the Microsoft Academic Service, we evaluate the merits mainly of PI as a tool for scientometric studies. We establish its effectiveness in separating scientists into influentials and mass producers, demonstrate its robustness against self-citations, and show its lack of correlation with traditional indices. Finally, we apply PI to rank prominent scientists in the areas of databases, networks and multimedia, demonstrating that the index fulfills its design goal.
Keywords: Ranking · h-Index · Citation analysis · Bibliometrics
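The indices above are built on an author’s citation curve: the citations per article, ranked in non-increasing order. As a minimal illustration, the sketch below computes that curve and the standard h-index of Hirsch (2005) from a citation list; the penalty-area definitions of PI and XPI are given in the full text and are not reproduced here.

```python
def citation_curve(citations):
    """Citations per article, sorted in non-increasing order of citations."""
    return sorted(citations, reverse=True)

def h_index(citations):
    """Largest h such that h articles have at least h citations each (Hirsch 2005)."""
    h = 0
    for rank, c in enumerate(citation_curve(citations), start=1):
        if c >= rank:
            h = rank      # the first h articles still clear the rank threshold
        else:
            break         # curve has dropped below the diagonal; h is final
    return h

# Example: six articles with the given citation counts.
print(citation_curve([10, 8, 5, 4, 3, 0]))  # [10, 8, 5, 4, 3, 0]
print(h_index([10, 8, 5, 4, 3, 0]))         # 4
```

Intuitively, PI and XPI penalize the regions of this curve where many low-impact articles accumulate, which is what distinguishes a mass producer from an influential.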
The authors wish to thank Professor Sofia Kouidou, Vice-rector of the Aristotle University of Thessaloniki, for stating the basic question that led to the present research.
The authors also wish to thank Professor Vana Doufexi for reviewing and editing the final release of this article.
We are grateful to Microsoft for providing free access to their database API.
Finally, D. Katsaros acknowledges the support of the Research Committee of the University of Thessaly through the project “Web observatory for research activities in the University of Thessaly”.