
Scientometrics, Volume 119, Issue 2, pp 1173–1185

The Pinski–Narin influence weight and the Ramanujacharyulu power-weakness ratio indicators revisited

  • Gangan Prathap
Article

Abstract

A graph-theoretic approach from social network analysis allows size-dependent and size-independent bibliometric indicators to be derived from what is called the citation matrix. In an input–output sense, the number of references a journal gives is the size-dependent measure of its input, and the number of citations it receives from all journals in the network is the size-dependent measure of its output. In this paper, however, we compare two size-independent, dimensionless indicators: the Pinski–Narin influence weight (IW) and the Ramanujacharyulu power-weakness ratio (PWR). Both serve as proxies for the quality of a journal's performance in the network. We show that at the non-recursive level the two indicators are identical; at this stage they are simply measures of popularity. After recursion (i.e., repeated iterative improvement) they become network measures of the prestige of the journals. PWR is computed as the elementwise ratio of the weighted citations vector to the weighted references vector, obtained after the power and weakness matrices are iterated separately. The Pinski–Narin procedure instead computes a matrix of ratios first and obtains the IW by recursive iteration of this ratio matrix. In this sense, the two procedures differ in the same way as the RoA (ratio of averages) and AoR (average of ratios) approaches to computing relative citation indicators. We illustrate the concepts using datasets from subgraphs of 10 statistical journals and 14 Chinese chemistry journals, with network data collected from the Web of Science. We also show the confounding effects that arise when self-citations are taken into account.
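
To make the two recursions concrete, the following Python sketch iterates both indicators from a citation matrix. This is a minimal illustration, not the paper's implementation: the matrix orientation (rows cite, columns are cited), the iteration count, and the normalisations are all assumptions made here for readability; Pinski and Narin, in particular, use a reference-weighted normalisation rather than the simple mean used below.

    import numpy as np

    def power_weakness_ratio(C, iterations=50):
        """Ramanujacharyulu PWR: iterate the power (cited) and weakness
        (citing) directions separately, then take the elementwise ratio
        of the weighted citations and weighted references vectors.
        Convention assumed here: C[i, j] = citations from journal i to
        journal j, so column sums are citations received and row sums
        are references given."""
        n = C.shape[0]
        power = np.ones(n)       # weighted citations received
        weakness = np.ones(n)    # weighted references given
        for _ in range(iterations):
            power = C.T @ power
            weakness = C @ weakness
            power /= power.sum()         # rescale to avoid overflow;
            weakness /= weakness.sum()   # the ratio is scale-invariant
        return power / weakness

    def influence_weight(C, iterations=50):
        """Pinski-Narin IW: form the matrix of ratios first (citations
        received, each divided by the cited journal's own references)
        and recursively iterate that ratio matrix."""
        S = C.sum(axis=1)        # references given by each journal
        K = C.T / S[:, None]     # ratio matrix: K[k, j] = C[j, k] / S[k]
        w = np.ones(C.shape[0])
        for _ in range(iterations):
            w = K @ w            # citing journals' weights propagate
            w *= len(w) / w.sum()   # simple normalisation: mean IW = 1
        return w

    # Hypothetical 3-journal citation matrix; the diagonal holds the
    # self-citations whose confounding effect the paper examines.
    C = np.array([[10., 4., 1.],
                  [ 2., 8., 3.],
                  [ 5., 1., 6.]])

    # Non-recursive level: both indicators reduce to the same ratio of
    # citations received to references given (a popularity measure).
    print(C.sum(axis=0) / C.sum(axis=1))

    # Recursive level: the two prestige measures generally differ.
    print(power_weakness_ratio(C))
    print(influence_weight(C))

At the first iteration both functions return the same citations-per-reference ratio, the non-recursive popularity measure; under further recursion the values diverge, mirroring the RoA versus AoR distinction drawn above.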

Keywords

Bibliometrics · Social network analysis · Journal performance metrics · Indicators · Quality · Quantity · Power-weakness ratio · Influence weight

References

  1. Bensman, S. J. (2007). Garfield and the impact factor. Annual Review of Information Science and Technology, 41, 93–155.
  2. Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College and Research Libraries News, 68, 314.
  3. Bollen, J., Rodriguez, M. A., & Van de Sompel, H. (2006). Journal status. Scientometrics, 69(3), 669–687.
  4. Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6), e6022. https://doi.org/10.1371/journal.pone.0006022.
  5. Brin, S., & Page, L. (2001). The anatomy of a large-scale hypertextual web search engine. In Proceedings of the seventh international conference on the World Wide Web (WWW1998) (pp. 107–117). Available at http://infolab.stanford.edu/~backrub/google.html. Accessed 18 Aug 2014.
  6. Cason, H., & Lubotsky, M. (1936). The influence and dependence of psychological journals on each other. Psychological Bulletin, 33(2), 95–103.
  7. Daniel, R. S., & Louttit, C. M. (1953). Professional problems in psychology. Englewood Cliffs: Prentice-Hall. https://doi.org/10.1037/11233-000.
  8. David, H. A. (1971). Ranking the players in a round robin tournament. Review of the International Statistical Institute, 39, 137–147.
  9. De Visscher, A. (2010). An index to measure a scientist's specific impact. Journal of the American Society for Information Science and Technology, 61(2), 310–318.
  10. De Visscher, A. (2011). What does the g-index really measure? Journal of the American Society for Information Science and Technology, 62(11), 2290–2293.
  11. De Visscher, A. (2012). The thermodynamics-bibliometrics consilience and the meaning of h-type indices: Reply. Journal of the American Society for Information Science and Technology, 63(3), 630–631.
  12. Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.
  13. Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14, 195–201.
  14. Glänzel, W., & Schubert, A. (2018). Relative indicators revisited. ISSI Newsletter, 14(2), 46–50.
  15. Kendall, M. G. (1955). Further contributions to the theory of paired comparisons. Biometrics, 11(1), 43–62.
  16. Kessler, M. M. (1965). Some statistical properties of citations in the literature of physics. In M. E. Stevens (Ed.), Statistical association methods for mechanized documentation (Symposium proceedings, 1964) (pp. 193–198). Washington, DC: National Bureau of Standards.
  17. Larivière, V., & Gingras, Y. (2011). Averages of ratios vs. ratios of averages: An empirical analysis of four levels of aggregation. Journal of Informetrics, 5(3), 392–399.
  18. Leydesdorff, L. (2009). How are new citation-based journal indicators adding to the bibliometric toolbox? Journal of the American Society for Information Science and Technology, 60(7), 1327–1336.
  19. Leydesdorff, L., de Nooy, W., & Bornmann, L. (2016). The power-weakness ratios (PWR) as a journal indicator: Testing the "tournaments" metaphor in citation impact studies. Journal of Data and Information Science, 1(3), 6–26. https://doi.org/10.20309/jdis.201617.
  20. Leydesdorff, L., & Opthof, T. (2018). Revisiting relative indicators and provisional truths. ISSI Newsletter, 14(3), 62–67.
  21. Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 12(5), 297–312.
  22. Prathap, G. (2018). Eugene Garfield: From the metrics of science to the science of metrics. Scientometrics, 114(2), 637–650.
  23. Prathap, G., & Nishy, P. (2016). An alternative size-independent journal performance indicator for science on the periphery. Current Science, 111(11), 1802–1810.
  24. Prathap, G., Nishy, P., & Savithri, S. (2016). On the orthogonality of indicators of journal performance. Current Science, 111(5), 876–881.
  25. Ramanujacharyulu, C. (1964). Analysis of preferential experiments. Psychometrika, 29(3), 257–261.
  26. Todeschini, R., Grisoni, F., & Nembri, S. (2015). Weighted power–weakness ratio for multi-criteria decision making. Chemometrics and Intelligent Laboratory Systems, 146, 329–336.
  27. West, J. D., Bergstrom, T. C., & Bergstrom, C. T. (2010). The eigenfactor metrics: A network approach to assessing scholarly journals. College and Research Libraries, 71(3), 236–244.
  28. West, J. D., & Vilhena, D. A. (2014). A network approach to scholarly evaluation. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics (pp. 151–165). Cambridge: MIT Press.
  29. Xhignesse, L. V., & Osgood, C. E. (1967). Bibliographical citation characteristics of the psychological journal network in 1950 and in 1960. American Psychologist, 22, 778.
  30. Yan, E., & Ding, Y. (2010). Weighted citation: An indicator of an article's prestige. Journal of the American Society for Information Science and Technology, 61(8), 1635–1643.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. A P J Abdul Kalam Technological University, Thiruvananthapuram, India
