The Pinski–Narin influence weight and the Ramanujacharyulu power-weakness ratio indicators revisited
A graph-theoretic approach from social network analysis allows size-dependent and size-independent bibliometric indicators to be identified from what is called the citation matrix. In an input–output sense, the number of references becomes the size-dependent measure of the input, and the number of citations received by the journal from all journals in the network becomes the size-dependent measure of the output. In this paper, however, we are interested in comparing two size-independent, dimensionless indicators: the Pinski–Narin influence weight (IW) and the Ramanujacharyulu power-weakness ratio (PWR). These are proxies for the quality of a journal's performance in the network. We show that at the non-recursive level the two indicators are identical; at this stage they are simply measures of popularity. After recursion (i.e., repeated improvement or iteration) they become network measures of the prestige of the journals. PWR is computed as a ratio of terms in the weighted citations vector and the weighted references vector after the power and weakness matrices are separately iterated to convergence. The Pinski–Narin procedure computes a matrix of ratios first and evaluates the IW after recursive iteration of this matrix of ratios. In this sense, the two procedures differ just as the RoA (ratio of averages) and the AoR (average of ratios) ways of computing relative citation indicators do. We illustrate the concepts using datasets from subgraphs of 10 statistical journals and 14 Chinese chemistry journals, with network data collected from the Web of Science. We are also able to show the confounding effects that arise when self-citations are taken into account.
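The contrast between the two procedures can be sketched numerically. The code below is a minimal illustration, not the authors' implementation: the citation matrix `C` is a made-up 3-journal example, the iteration counts and normalizations are simplifying assumptions, and the IW normalization (mean weight of 1) is one common convention rather than the exact Pinski–Narin scaling. PWR iterates the power (citations-received) and weakness (references-given) matrices separately and takes a ratio at the end; IW forms the matrix of ratios first and then iterates it.

```python
import numpy as np

# Hypothetical citation matrix for a small journal network:
# C[i, j] = citations given by journal i to journal j, so row sums are
# references given (input) and column sums are citations received (output).
C = np.array([[0., 4., 1.],
              [2., 0., 3.],
              [1., 2., 0.]])

def pwr(C, iterations=20):
    """Power-weakness ratio: iterate the power (citations received) and
    weakness (references given) vectors separately, then take the
    element-wise ratio of the two converged vectors."""
    n = C.shape[0]
    power = np.ones(n)
    weakness = np.ones(n)
    for _ in range(iterations):
        power = C.T @ power        # weighted citations received
        power /= power.sum()       # normalize to keep values bounded
        weakness = C @ weakness    # weighted references given
        weakness /= weakness.sum()
    return power / weakness        # ratio taken only at the end

def iw(C, iterations=20):
    """Influence-weight style calculation: build the matrix of ratios
    first (citations divided by the cited journal's reference total),
    then iterate that single matrix."""
    n = C.shape[0]
    s = C.sum(axis=1)              # references given by each journal
    B = C / s[np.newaxis, :]       # B[i, j] = C[i, j] / s_j: ratio matrix
    w = np.ones(n)
    for _ in range(iterations):
        w = w @ B                  # pass weight along weighted citations
        w *= n / w.sum()           # rescale so the mean weight is 1
    return w

# At the non-recursive level (one step) the two indicators coincide up
# to scaling, as the abstract states; after iteration they diverge.
print(pwr(C, iterations=1))
print(iw(C, iterations=1))
print(pwr(C))
print(iw(C))
```

With one iteration, `pwr` returns the ratio of column sums to row sums of `C`, and `iw` returns a rescaled copy of the same vector; after repeated iteration the RoA-versus-AoR difference between the two procedures shows up in the values.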
Keywords: Bibliometrics · Social network analysis · Journal performance metrics · Indicators · Quality · Quantity · Power-weakness ratio · Influence weight