Core journals and elite subsets in scientometrics
Abstract
The core journals in scientometrics were determined by the frequency of journals among the papers in the elite publication subsets (i.e. the most frequently cited publications) of Price medallists. It is supposed that scientometric impact indicators derived from elite subsets may represent the impact of total publication activity more appropriately than indices referring to whole sets. It is further assumed that prominent scientists publish their papers of potentially high impact in the leading journals of the field. The size of the elite subsets was determined by h, π, πv, MCR, and HCP statistics. MCR is the mean citation rate of publications in a total set, whereas HCP here means papers with at least 100 citations. According to the MCR or HCP statistics, those papers belong to the corresponding elite subset whose citation frequency is equal to or higher than the mean of the corresponding set or 100, respectively. The combined set of papers in 11 core journals of scientometrics was analysed. The number of papers in the elite subsets and their mean citation rate were calculated. The size of the studied elite subsets ranges from 30 to 225. The mean citation rate of the journal papers in the different elite subsets was found to decrease as the size of the elite subset increased. The publications in the field of “scientometrics” were also collected from WoS by the keywords scientometric, bibliometric, informetric, and webometric. The mean citation rate of papers in the elite subsets was found to be significantly higher for those published in journals covering non-scientometric topics (e.g. Nature, Science, British Medical Journal, PLOS One, etc.). The high share of papers in the elite subsets published by Price medallists may validate the selection of these sets for evaluation purposes. In most cases, any of the studied elite subsets may be used for publication evaluation.
Keywords
Scientometric journals; Highly cited papers; h-index; π-index; Price medallists
Introduction
Assessing scientific publications by scientometric methods has two main aspects: impact and quantity. Scientometric assessments are performed through indicators. The indicators selected for evaluating individuals, teams, countries or journals can characterize impact or quantity or both (Vinkler 2013). The two sides of the assessment, i.e. impact and quantity, cannot be separated completely. The total number of publications or the specific index of publication productivity (publications per scientist) reflects not only the amount but also the impact of the published information. Likewise, the total number of citations received, or the specific index of citations per publication, depends not only on impact but also on the amount of the published information. Some combined traditional impact and quantity indicators are available in the literature (Vinkler 2010a, 2013).
For a long time, the mentioned indices were preferably calculated from data referring to whole publication sets. With the publication of the Hirsch index (Hirsch 2005), however, indicators referring to papers in the “elite” (or core) subsets within the total have become increasingly preferred. This trend has resulted in a plethora of scientometric impact indices (Schreiber 2010; Schreiber et al. 2012; Todeschini and Baccini 2016).
The indices referring to the elite subsets are derived from a relatively small, most frequently cited part (the “core” or “elite” subset) of the total publications. The central idea behind this method is the assumption that a relatively small part of the whole may represent the total publication impact. Moreover, the indices derived from this exclusive part of the publications may characterize the relevant publication impact more appropriately than the mean indices referring to the corresponding total set.
The idea of applying core journal papers for assessment is also supported by the fact that the distribution of citations over papers is skewed (Seglen 1992). It is well known that scientific progress is made primarily through information in publications acknowledged by a relatively high number of citations (Plomp 1990; Aksnes 2003; Vinkler 2010a, 2017a). Consequently, publications with high influence may be revealed by determining the relatively highly cited journal papers.
Elite subsets can be obtained by different statistics: h-statistics, g-statistics, percentage statistics, π-statistics, πv-statistics, etc. The core or elite parts of publication sets are termed the h-core, g-core, π-core, or 1%, 10%, etc. of the total, according to the method applied. Naturally, in each method the publications in the set studied are ranked by decreasing citation frequency.
Table 1 Name and calculation method for selecting elite subsets
Elite set  Name of the selection method  Calculation method 

E/1  πv-method  P(πv) = (10 log P) − 10 
E/2  (MCR)²-method  P(MCR²) = ΣPi(MCR²): number of papers with citation counts equal to or higher than the square of the mean citation rate (MCR) of the papers in the studied publication set 
E/3  π-method  Number of papers in the π-set: P(π) = √P 
E/4  MCR-method  P(10MCR) = ΣPi(10MCR): number of papers with citation counts equal to or higher than 10 times the mean citation rate (MCR) of the papers in the studied publication set 
E/5  h-method  Number of papers in the h-core of a publication set 
E/6  HCP-method  Number of papers cited at least 100 times 
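The six selection rules in Table 1 can be expressed compactly in code. The following Python function is a minimal sketch (the function name and the returned dictionary are my own, not from the source); it takes the citation counts of a publication set and returns the size of each elite subset, rounding √P and 10·log P − 10 to the nearest integer:

```python
import math

def elite_sizes(citations):
    """Sizes of the six elite subsets of Table 1 for a publication set,
    given the citation count of each paper."""
    c = sorted(citations, reverse=True)   # rank papers by decreasing citations
    P = len(c)
    mcr = sum(c) / P                      # mean citation rate of the whole set
    return {
        "E/1 pi_v": round(10 * math.log10(P) - 10),        # P(pi_v) = 10 log P - 10
        "E/2 (MCR)^2": sum(x >= mcr ** 2 for x in c),      # citations >= MCR^2
        "E/3 pi": round(math.sqrt(P)),                     # P(pi) = sqrt(P)
        "E/4 10MCR": sum(x >= 10 * mcr for x in c),        # citations >= 10 * MCR
        "E/5 h": sum(x >= i for i, x in enumerate(c, 1)),  # h-core size (h-index)
        "E/6 HCP": sum(x >= 100 for x in c),               # highly cited papers
    }
```

For a hypothetical skewed set of 100 papers (one cited 1000 times, nine cited 100 times, the rest rarely), the πv- and π-cores both contain 10 papers, while the (MCR)²- and 10MCR-thresholds admit only the single most cited paper.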
The publications in the elite subsets of a scientific field may reveal hot topics or the relevant, core information of the field, depending on the time periods applied. Science develops continuously; therefore, the dynamic study of the publications in the elite subsets is highly relevant (Egghe 2007). By comparing the topics of frequently cited papers in the elite sets over consecutive time periods, the development of research directions may be followed.
The size of the core sets depends on the calculation method selected, and this may bring about discrepancies during evaluations. The generally used h-statistics, for example, may produce controversial results.
J. Hirsch, who introduced the h-index in 2005, is a university professor of physics. He published altogether 271 journal papers up to 03. 06. 2019. Until that date his papers had received a total of 19,304 citations. The mean citation frequency of his papers is 71.23. Taking into account all journal papers of Hirsch, independent of their topic, his h-index is 58 (see WoS). By suggesting the h-index, the mentioned author initiated a revolution in the development of scientometric impact indices.
His first scientometric paper (2005) obtained 3633 citations up to the mentioned date. Accordingly, one would think he is one of the most influential scientists in the field. However, I have surveyed the publication list of the mentioned author in WoS. Among his 271 papers, only 6 articles may be classified as scientometric publications. The citation rates of these papers are as follows: 1/(published in 2005): 3633, 8/(published in 2007): 434, 31/(published in 2010): 123, 145/(published in 2014): 25, and 249/(published in 2019): 0. (The numbers before the parentheses are the rank numbers of the papers by citations, taking into account all 271 papers of the mentioned author.) Accordingly, the h-index, i.e. the scientometric impact of J. Hirsch, would be equal to 4. This is because the value of the h-index cannot exceed the number of publications in the set, and it does not regard the number of citations obtained (Vinkler 2007).
However, calculating the π-index of the scientometric papers of Hirsch, a relatively high value is obtained. The number of Hirsch’s scientometric papers is 6; accordingly, the number of π-core papers is √6 ≈ 2.45, see Table 1. In calculating the π-index, we have to sum up the number of citations to the π-core papers (here 2.45 is rounded to 2), and the sum is divided by 100. Accordingly: π-index = 0.01 (3633 + 434) = 40.67. This value is rather high compared to those of well-known scientometricians (Vinkler 2017b).
The given example makes it clear that the π-index favours scientists with a relatively high number of citations to the journal papers in the π-core, as against the total number of citations. This is because the distribution of citations is, in general, skewed (Seglen 1992), and the π-method calculates with the citations to √P papers, where P is the total number of papers, see Table 1. Therefore, it may happen that a PhD student shows a relatively high π-index although he or she has published, say, only four papers. Let us assume, for example, that two of the papers were published together with an internationally well-known professor, and let the numbers of citations to the individual papers be as follows: 1: 400, 2: 200, 3: 10, and 4: 0. Accordingly, the size of the π-core is P(π) = √4 = 2.00. This way, the young fellow may show a π-index = 0.01(400 + 200) = 6.0, which counts as a relatively high π-index for junior scientists (Vinkler 2009, 2017b). On the other hand, the π-index may not be favourable for scientists with a relatively short publication list, because the size of the π-set strongly depends on P.
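The two worked examples above can be reproduced in a few lines. This is a sketch of the π-index calculation as described in the text (the function name is mine; √P is rounded to the nearest integer, matching the rounding of 2.45 to 2 above):

```python
def pi_index(citations):
    """pi-index: one hundredth of the citations received by the
    sqrt(P) most cited papers of a P-paper set."""
    c = sorted(citations, reverse=True)
    core = round(len(c) ** 0.5)          # size of the pi-core
    return sum(c[:core]) / 100

# The hypothetical PhD student with four papers:
print(pi_index([400, 200, 10, 0]))       # core = 2, (400 + 200)/100 = 6.0

# Hirsch's six scientometric papers; the sixth citation count is not
# given in the text, but any small value leaves the pi-core unchanged:
print(pi_index([3633, 434, 123, 25, 0, 0]))  # core = 2, (3633 + 434)/100 = 40.67
```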
The above examples demonstrate that the automatic evaluation of the publications of persons or teams (or, similarly, journals) through the mentioned and similar elite-subset indices, without further scientometric considerations, may cause errors. Therefore, it is advisable to apply several indices and, provided they converge, to accept the outcomes. The mentioned discrepancies occur primarily when evaluating individuals; when assessing larger sets of papers, the impact of the errors is weaker.
The number of assessment processes using elite-set indicators is growing permanently (Todeschini and Baccini 2016), although the methodology and the validation of the indices are far from complete. Therefore, the study of the features of elite subsets calculated by different methods seems to be relevant.
In the present paper I try to compare some characteristics of different elite publication subsets in the field of scientometrics. The first step of the study was the search for the core publication channels in scientometrics.

The aims of the present study were as follows:

– revealing the leading journals that publish the majority of journal papers in scientometrics and related fields, by calculating the frequency of journals in the h-core and π-core of the publications of Price medallists;

– determining the share of the individual journals in the elite subsets of a combined set of papers in the leading scientometric journals;

– determining the share of the individual journals in the elite subsets of the combined set of papers in the field obtained by the keywords scientometr*, bibliometr*, informetr*, and webometr*;

– comparing the share of papers in the individual elite subsets published by Price medallists.
Data and methods
For obtaining the journals most frequently used for publishing scientometric, bibliometric, informetric, and webometric papers, 19 still-active Price medallists were chosen as peers: T. Braun, B. Cronin, L. Egghe, P. Ingwersen, W. Glänzel, L. Leydesdorff, K. W. McCain, B. R. Martin, H. F. Moed, F. Narin, O. Persson, R. Rousseau, A. Schubert, H. Small, M. Thelwall, A. F. J. van Raan, P. Vinkler, H. White, and M. Zitt. The persons listed may be regarded as scientists with outstanding contributions to the field of quantitative studies of science (for the history and selection of Price medallists, see Erfanmanesh and Moghiseh 2019 and Zhou et al. 2019).
Table 2 Number and share (in per cent) of all papers in the π-core and h-core within the total publications of Price medallists, by journal (the starting year of each journal is given in parentheses)
Journal  P(π)  P(h)  P(π) %  P(h) % 

Scientometrics (1978)  77  241  39.69  45.47 
JASIS(T) (1950)  57  140  29.38  26.42 
Information Proc. Manag. (1975)  11  22  5.67  4.15 
J. Informetrics (2007)  9  27  4.64  5.09 
J. Doc. (1945)  9  20  4.64  3.77 
J. Information Science (1979)  9  19  4.64  3.58 
Ann. Rev. of Inform. T. (2002)  6  8  3.09  1.51 
Research Policy (1971)  4  8  2.06  1.51 
Library Trends (2005)  2  4  1.03  0.75 
Research Evaluation (1991)  1  5  0.52  0.94 
Other journals  9  36  4.64  6.79 
Total  194  530  100.00  99.98 
The field of activity in scientometrics, bibliometrics, informetrics, and webometrics (referred to in short as SBIW, or scientometrics) may be defined as follows: “Scientometrics is a field of science dealing with the quantitative aspects of people, matters and phenomena in science, and their relationships, but which do not primarily belong within the scope of a particular scientific discipline” (Vinkler 2001; Vinkler 2010a, b). It is widely accepted that scientometrics is primarily concerned with the quantitative aspects of the generation, propagation, and utilization of scientific information (Braun et al. 1987).
In selecting journals on scientometrics, the Price medallists were regarded as reviewers or peers. To obtain the most relevant, recent publication channels, only recently active Price medallists were selected.
It is well known that the number of scientists and publications in the field increases dynamically (see e.g. Hood and Wilson 2001). Whereas the first journal devoted exclusively to scientometrics (Scientometrics) was launched by T. Braun as early as 1978, one of the most important journals in the field (Journal of Informetrics) started only in 2007. Another important periodical, Annual Review of Information Science and Technology, published its first volume in 2002, and Library Trends in 2005. The dynamic increase in the number of journal papers on scientometric topics can be demonstrated by the following data.
The numbers of papers published on the topics scientometr* OR bibliometr* OR informetr* OR webometr* in the time periods 1978–1987, 1988–1997, 1998–2007, and 2008–2017 were collected from Clarivate Analytics Web of Science All Databases on 31. 05. 2019. The total numbers of papers are 260, 873, 3268, and 12,858, respectively; accordingly, the ratios are 1.00, 3.36, 12.57, and 49.45. The data clearly show the increase in journal publications in the field. This finding may justify selecting the still-active scientometricians for obtaining the core journals of the field.
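The decade-to-decade growth quoted above is simple arithmetic over the reported counts; a quick check (all figures taken from the text):

```python
# Papers matching scientometr* OR bibliometr* OR informetr* OR webometr*
# per decade (WoS All Databases, collected 31. 05. 2019, as reported above)
counts = {"1978-1987": 260, "1988-1997": 873,
          "1998-2007": 3268, "2008-2017": 12858}
base = counts["1978-1987"]
ratios = {period: round(n / base, 2) for period, n in counts.items()}
print(ratios)
# {'1978-1987': 1.0, '1988-1997': 3.36, '1998-2007': 12.57, '2008-2017': 49.45}
```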
The publications and citations of the Price medallists were obtained from Web of Science All Databases 1975–2017 on 15. 02. 2018. Table 1 summarizes the methods applied for calculating elite subsets.
Table 3 Total number of publications and number of h-core papers in the selected journals, share of SBIW papers in the h-core, and approximate number of SBIW papers in the whole journal
Journal  Total number of papers in the journal  Total number of papers in the h-core  Number of SBIW papers in the h-core  Per cent of SBIW papers in the h-core  Approximated total number of SBIW papers in the journal 

Scientometrics  5144  95  95  100.00  5144 
JASIS(T)  5135  120  38  31.78  1632 
Res. Policy  3182  192  26  13.74  437 
J. Informetrics  805  53  53  100.00  805 
Ann. Rev. of Inform. T.  428  51  5  9.80  42 
Information Proc. Manag.  3308  79  11  13.92  460 
J. Information Science  2113  56  8  13.79  291 
J. Doc.  3082  63  7  11.11  342 
Library Trends  1944  37  5  12.50  243 
Research Evaluation  547  32  32  100.00  547 
Libri  1074  21  1  4.76  51 
Table 4 Number of papers and citation frequency limits of the elite (core) subsets in the combined set of scientometric, bibliometric, informetric and webometric papers (field) in the journals selected in Table 3
Elite set  Number of papers  Citation limit  Share in field (%)  Mean citation rate, MCR  SD  Mean publishing year  SD  Ratio of MCR data 

E/1  30  ≥ 287  0.30  471.87  267.77  1997.21  10.36  1.00 
E/2  54  ≥ 225  0.54  375.52  226.19  1997.91  9.90  0.80 
E/3  100  ≥ 164  1.00  292.90  192.53  1999.74  8.80  0.62 
E/4  117  ≥ 150  1.17  269.46  182.67  2000.27  8.53  0.57 
E/5  137  ≥ 138  1.37  251.17  174.46  2000.44  9.04  0.53 
E/6  225  ≥ 100  2.25  197.88  151.53  2000.81  8.74  0.42 
Table 5 Number and distribution (percentage share) of publications in the elite subsets of the combined set of publications from the selected journals
Journal  E/1 number %  E/2 number %  E/3 number %  E/4 number %  E/5 number %  E/6 number %  Mean number % 

Scientometrics  9 30.00  22 40.74  43 43.00  53 45.30  59 43.07  90 40.00  46.00 40.35 
JASIS(T)  10 33.33  13 24.07  20 20.00  23 19.66  31 22.63  60 26.67  26.17 24.39 
Res. Policy  8 26.67  10 18.52  15 15.00  16 13.68  17 12.41  27 12.00  15.17 16.38 
J. Informetrics  0 0.00  2 3.70  8 8.00  10 8.55  12 8.76  19 8.44  8.50 6.24 
Ann. Rev. of Inform. T.  1 3.33  2 3.70  4 4.00  4 3.42  5 3.65  9 4.00  4.16 3.68 
Information Proc. Manag.  1 3.33  2 3.70  3 3.00  3 2.56  3 2.19  7 3.11  3.17 2.98 
J. Information Science  0 0.00  0 0.00  2 2.00  2 1.71  4 2.92  5 2.22  2.17 1.48 
J. Doc.  1 3.33  3 5.56  3 3.00  2 1.71  4 2.92  5 2.22  3.00 3.12 
Library Trends  0 0.00  1 1.85  1 1.00  2 1.71  3 2.19  3 1.33  1.67 1.35 
Research Evaluation  0 0.00  0 0.00  2 2.00  2 1.71  2 1.46  3 1.33  1.50 1.08 
Libri  0 0.00  0 0.00  0 0.00  0 0.00  0 0.00  1 0.44  0.17 0.07 
Total number of papers  30  54  100  117  137  225  
Number and per cent of papers by Price medallists  15 50.00  30 55.56  41 41.00  50 42.74  58 42.34  93 41.33  47.83 45.50 
To obtain the number of scientometric papers in the journals, the title and, where necessary, also the abstract of the papers in the elite sets of the studied journals were surveyed to determine whether their content corresponds to the definition of scientometric studies given above. Accordingly, the papers were sorted into two sets; SBIW: [scientometric, bibliometric, informetric, and webometric publications] and LIMP: [other publications]. The term “other” covers non-scientometric topics, e.g. library science, research topics on information retrieval, data banks, research policy, social sciences, research management, economics, etc. The publications in Scientometrics, Journal of Informetrics and Research Evaluation were not surveyed individually; all publications in those journals were attributed to the SBIW field.
Table 6 Number of papers in the elite subsets in scientometric (i.e. scientometric, bibliometric, informetric, and webometric) and non-scientometric journals in the publication set obtained by keywords (scientometr* OR bibliometr* OR informetr* OR webometr*) (field) from WoS All Databases
E/1 P(πv)  E/3 P(π)  E/5 P(h)  E/6 P(HCP ≥ 100)  

Papers in scientometric journals  8  58  63  125 
Papers in nonscientometric journals  24  72  85  157 
Total  32  130  148  282 
Number and (per cent) of papers authored by Price medallists  6 (18.75)  35 (26.92)  36 (24.32)  57 (20.21) 
Per cent of papers in scientometric journals  25.00  44.62  42.57  44.33 
Per cent of papers in nonscientometric journals  75.00  55.38  57.43  55.67 
Mean citation rate (MCR) of papers SD  626.49 403.32  313.10 268.67  293.94 257.90  210.19 206.55 
Mean publishing year SD  2004.6 6.8  2003.8 6.9  2004.1 6.8  2004.3 6.7 
MCR in scientometric journals SD  427.50 131.51  237.98 97.85  233.72 97.23  173.99 91.00 
MCR in nonscientometric journals SD  692.83 432.82  373.61 336.52  338.57 319.57  239.01 261.33 
Table 7 Number and (per cent) of papers in the elite subsets of the field (see Table 6) by the selected journals
Journal  E/1 P(πv) = 32  E/3 P(π) = 130  E/5 P(h) = 148  E/6 P(HCP ≥ 100) = 282 

Scientometrics  4 (12.50)  26 (20.00)  28 (18.92)  55 (19.50) 
JASIS(T)  2 (6.25)  9 (6.92)  12 (8.11)  30 (10.64) 
Res. Pol.  0  6 (4.62)  6 (4.05)  11 (3.90) 
J. Informetrics  0  6 (4.62)  6 (4.05)  8 (2.84) 
Inf. Process. M.  0  1 (0.77)  1 (0.68)  6 (2.13) 
J. Inform. Sci.  0  3 (2.31)  3 (2.03)  5 (1.77) 
Ann. Rev. Inf. S. T.  0  3 (2.31)  3 (2.03)  4 (1.42) 
J. Document.  2 (6.25)  3 (2.31)  3 (2.03)  3 (1.06) 
Library Trends  0  1 (0.77)  1 (0.68)  2 (0.71) 
Libri  0  0  0  1 (0.35) 
Journals listed  8 (25.00)  58 (44.62)  63 (42.57)  125 (44.33) 
Share of the “Big four” in total (%)  (18.75)  (36.15)  (35.14)  (36.88) 
The statistical analyses were performed with the STATISTICA data analysis software system, version 13 (TIBCO Software Inc., Palo Alto, CA, USA).
Results and discussion
Elite sets of publications of Price medallists and core journals in scientometrics, bibliometrics, informetrics, and webometrics
The h-index is believed to represent the measure of the scientific impact of a set of papers (attributed to individuals, teams, countries, journals, etc.) both qualitatively and quantitatively (Iglesias and Pecharroman 2007). The index equals the highest rank number h for which the paper at that rank (with papers ranked by decreasing citation frequency) has a citation frequency equal to or higher than its rank. Consequently, the h-index can be regarded as the size of the elite subset within the whole set studied. According to the π-method, the number of papers with the highest impact in the studied set may be approximated by the square root of the total number of papers (Table 1).
The publications and citations of the 19 Price medallists in 1975–2017 were collected, and the size of the corresponding h-set and π-set of each scientist was calculated according to the mentioned methods (Table 1). The papers of the individual scientists in these elite sets were classified according to the publishing journals (Table 2). The journals publishing only two or three elite papers are given below the table.
It is supposed that the prominent scientists of a scientific field preferably publish their most relevant results in the leading journals of the field. Accordingly, the journals that published the papers in the elite subsets of the Price medallists may appropriately represent the relevant information base of the corresponding field. Therefore, the journals publishing the majority of the highly cited papers of eminent scientists may be regarded as the core journals of the field.
The total number of papers in the h-core of the Price medallists is 530, the majority of which (494) were published in the scientometric journals listed in Table 2. The total number of π-core papers is significantly smaller: 194. The listed 10 journals contain 95.36 per cent and 93.21 per cent of all papers in the π-core and h-core, respectively. Only 9 π-core papers (4.64%) and 36 h-core papers (6.79%) appeared in journals not listed in Table 2. From the data in Table 2 it follows that, in publishing Price medallists’ papers with potentially high impact, Scientometrics is preferred (39.69 and 45.47 per cent of all papers in the P(π) and P(h) subsets, respectively). JASIS(T) ranks second with 29.38 and 26.42 per cent, respectively, while Journal of Informetrics follows with 4.64 and 5.09 per cent, respectively. It should be mentioned that the latter journal started only in 2007.
From the above data it may be concluded that the journals containing the most relevant information in the field of scientometrics, bibliometrics, informetrics, and webometrics in the studied period are the following: Scientometrics, JASIS(T), Information Processing & Management, and Journal of Informetrics, at least according to the publication strategy of the Price medallists. The mentioned journals contain 81.13 and 79.38 per cent of the total papers in the corresponding h-core and π-core, respectively.
The list of core journals suggested in the present study is similar to that given as the preferred set of journals on “bibliometrics, scientometrics, and informetrics” based on DIALOG’s rankings by Hood and Wilson (2001), who found 4697 papers in the field in 1960–2000. Among the 10 most frequently used journals suggested here (Table 2), in the list of the mentioned authors Scientometrics ranks first with 1197 records, JASIS second with 319, Information Processing & Management fourth with 128, Journal of Information Science fifth with 127, and Journal of Documentation sixth with 109 records. Two further journals are mentioned in both studies: Library and Information Science Research as 10th with 59 records and Library Trends as 16th with 42 records. The difference between the two lists may be caused by the different journal selection processes of the studies. The present study takes into account only SBIW papers as defined in the introduction, whereas Hood and Wilson (2001) extended their study towards library science and information science, with topics far from scientometrics. Accordingly, their list also covers e.g. Nauchno-Tekhnicheskaya Informatsiya Series 1&2 as 3rd with 285 records, and Revista Espanola de Documentacion Cientifica as 7th with 95 records.
Moaghali et al. (2011) studied the literature of “scientometrics” in WoS in 1980–2009 and found 691 documents in this time period. The mentioned authors also listed the top ten most productive journals of the field. Six of those journals are common with the present study: Scientometrics (320 records), Information Processing & Management (20 records), Journal of Information Science (20 records), JASIS (14 records), Journal of Documentation (9 records), and Research Evaluation (8 records).
More recently, Martín-Martín et al. (2016) investigated authors, documents and journals in the field of [bibliometrics, scientometrics, informetrics, webometrics, and altmetrics] applying Google Scholar Citations in 1969–2015. The top eight most influential journals in their paper (their Table 4) are the following (the number of articles found is given in brackets): Scientometrics (284), JASIST (137), Research Policy (57), Journal of Informetrics (36), Journal of Documentation (25), Information Processing & Management (24), Journal of Information Science (20), and Research Evaluation (18). One more journal from Table 2 of my study (Library Trends) figures with 7 records in their list. In summary, out of the 10 journals in Table 2 of the present study, 9 are also involved in the corresponding list of Martín-Martín et al. (2016); only a single journal (Ann. Rev. of Inform. T.) is missing.
It should be noted that the set of journals classified under the name Library & Information Science by WoS does not contain Research Policy. Of the Price medallists studied, only a single person (Ben R. Martin) published papers in Research Policy belonging to the elite sets. Nevertheless, it is known that this journal publishes many highly cited papers dealing with patent statistics, cooperation between industry and university, relations between GDP and the scientific publications of countries, etc. Therefore, a substantial part of the publications in Research Policy should also be taken into account when studying publications in the SBIW field.
Defining the part of SBIW papers in the individual journals
The number of publications on SBIW topics in the journals containing the most frequently cited papers of the Price medallists (Table 2) was determined. The set of journals was completed with a further journal (Libri), one of whose papers was found in the combined elite set P(h) of papers in the selected periodicals.
It is well known that some of the listed journals publish papers not only on scientometrics but also on other topics. Therefore, the h-core publications in the selected journals (Table 3) were classified by whether they belong to the SBIW field [scientometric, bibliometric, informetric, and webometric publications] or the LIMP field [library science, research topics on information retrieval, data banks, research policy, social sciences, research management, economics, etc.]. The immediate aim of this selection was to obtain the total number of papers in the SBIW field, because calculating the size of the πv-set and π-set of the field requires knowledge of the total number of publications. Three of the journals (Scientometrics, Journal of Informetrics, and Research Evaluation), however, were accepted as publishing exclusively SBIW papers. The other journals were found to publish both SBIW and LIMP papers. The ratio of SBIW papers vs. LIMP papers in a whole journal was approximated by the ratio of the two types of papers in the h-core of the journal. For example, JASIS(T) published 31.78 per cent and Journal of Information Science 13.79 per cent SBIW papers within the h-core in 1975–2017. Accordingly, only the corresponding parts, 31.78% (1632) and 13.79% (291) of the total papers in those journals, respectively, were attributed to the SBIW field in the mentioned period. The total number of SBIW papers in the combined set of the selected journals in 1976–2017 was calculated as 9994 (Table 3).
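The approximation described above scales a journal's total output by the SBIW share of its h-core. A minimal sketch (the function name is mine; the example figures are taken from Table 3, using journals whose printed percentages follow directly from the raw h-core counts, e.g. Journal of Documentation with 7 SBIW papers among 63 h-core papers):

```python
def approx_sbiw(total_papers, sbiw_in_hcore, hcore_size):
    """Approximate a journal's total number of SBIW papers from the
    share of SBIW papers in its h-core (the Table 3 procedure)."""
    return round(total_papers * sbiw_in_hcore / hcore_size)

print(approx_sbiw(3082, 7, 63))   # Journal of Documentation: 342
print(approx_sbiw(1074, 1, 21))   # Libri: 51
```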
Characteristics of the elite subsets in the combined set consisting of SBIW papers in the selected journals
The total number of SBIW papers (9994) obtained from WoS Core Collection in the selected 11 journals was regarded as a unified set. All the publications were ranked by the decreasing number of citations obtained. Table 4 shows the number of journal papers in the individual elite subsets, as calculated by the methods in Table 1. The data in Table 4 reveal that the number of papers in the elite subsets increases: E/1 = 30 < E/2 = 54 < E/3 = 100 < E/4 = 117 < E/5 = 137 < E/6 = 225. (It is to be mentioned that the data for the individual journals show a similar trend, e.g. Scientometrics: 9, 22, 43, 53, 59, and 90, respectively.) Selecting 1 per cent of the total (9994) as papers belonging to the elite set would yield 99.94 (rounded to 100) papers; this value coincides with E/3 (100), and therefore the 1% method was not applied separately.
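The subset sizes quoted for E/1 and E/3 follow directly from the formulas of Table 1 applied to P = 9994:

```python
import math

P = 9994  # combined SBIW papers in the 11 selected journals (Table 3)
p_pi_v = round(10 * math.log10(P) - 10)  # E/1, pi_v-core
p_pi = round(math.sqrt(P))               # E/3, pi-core
print(p_pi_v, p_pi)                      # 30 100
```

The remaining subset sizes (E/2, E/4, E/5, E/6) cannot be derived from P alone; they depend on the full citation distribution of the set.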
Table 4 also shows the number of citations (“citation limit”) obtained by the last paper (the one with the highest rank number) in the given core set. As the number of papers in the subsets increases, the mean citation rate (MCR) of the papers decreases. In E/1, obtained by the P(πv)-method, for example, the number of papers is only 30, and MCR = 471.87. The largest subset, obtained by the HCP-method (E/6, i.e. papers with citations equal to or higher than 100), contains 225 papers; the MCR value in this subset is 197.88. Each difference between the MCR values is significant at p ≤ 0.02, except for those between E/1–E/2, E/3–E/4, E/3–E/5 and E/4–E/5.

Etzkowitz, H. & Leydesdorff, L.: The dynamics of innovation: from National Systems and “Mode 2” to a Triple Helix of university–industry–government relations. Research Policy, 29(2), 109–123 (2000). C = 1650

Katz, J. S. & Martin, B. R.: What is research collaboration? Research Policy, 26(1), 1–18 (1997). C = 921

Egghe, L.: Theory and practice of the g-index. Scientometrics, 69(1), 131–152 (2006). C = 707

Price, D. J. D.: General theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science, 27(5–6), 292–306 (1976). C = 698

White, H. D. & McCain, K. W.: Visualising a discipline: An author co-citation analysis of information science, 1972–1995. Journal of the American Society for Information Science, 49(4), 327–355 (1998). C = 521

Narin, F., Hamilton, K. S. & Olivastro, D.: The increasing linkage between US technology and public science. Research Policy, 26(3), 317–330 (1997). C = 510
Percentage share of papers in the elite subsets of the combined set of SBIW papers by journals
The data in Table 5 reveal the outstanding role of the Price medallists in publishing highly cited papers in the studied journals. On average, the share of the most influential papers written by Price medallists is 45.50 per cent. The share is highest in the smallest subsets (E/1 and E/2, containing the most cited papers): 50.00 and 55.56 per cent, respectively. The high share of publications by Price medallists in the elite subsets may validate the calculation methods applied for obtaining the elite sets and, at the same time, the selection method of the journals as leading information sources in scientometrics.
The number of publications written by Price medallists was also studied for the individual journals. In the elite subsets of Scientometrics, e.g. P(πv), P(π) and P(h), the number and share of publications by Price medallists within the total in 1978–2017 were 13 (48.15%), 36 (50.00%), and 43 (45.26%), respectively. The same data for JASIS(T) in 1975–2017 are 8 (36.36%), 17 (42.50%), and 30 (36.14%); for Journal of Informetrics in 2007–2017: 7 (36.84%), 10 (35.41%), and 19 (35.85%); and for Research Policy in 1975–2017: 2 (12.50%), 3 (14.29%), and 10 (12.20%), respectively.
For comparing the share of Price medallists among the authors in the elite subsets and in the total publications, the total number of scientific articles, reviews, short communications, letters, corrections, and notes in Scientometrics was collected from 1978 (Vol. 1) to 2006 (Vol. 69), and from 2007 (Vol. 70) to 2018 (Vol. 117). In the first period a total of 2045 and in the second 3283 publications were found. Of these, 408 and 279 papers, respectively, were signed by one or more Price medallists. Accordingly, the share of Price medallists' papers is 19.95 and 8.50 per cent, respectively, and 12.89 per cent for the total time period. In Journal of Informetrics the total number of papers in 2007–2018 is 888, of which 159 were published by Price medallists. Accordingly, the share of papers written by Price medallists is 17.91 per cent.
These data, compared with those referring to the elite sets, clearly show the higher share of papers signed by Price medallists in the elite subsets.
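The shares quoted above follow directly from the raw counts; as a minimal sketch, the percentages can be recomputed from the numbers given in the text:

```python
def medallist_share(medallist_papers: int, total_papers: int) -> float:
    """Percentage share of papers (co-)authored by Price medallists."""
    return 100.0 * medallist_papers / total_papers

# Scientometrics, 1978-2006 and 2007-2018, and the whole period:
print(round(medallist_share(408, 2045), 2))               # -> 19.95
print(round(medallist_share(279, 3283), 2))               # -> 8.5
print(round(medallist_share(408 + 279, 2045 + 3283), 2))  # -> 12.89
# Journal of Informetrics, 2007-2018:
print(round(medallist_share(159, 888), 2))                # -> 17.91
```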
Percentage share of papers in the elite subsets of the SBIW field by journals
The journal papers on SBIW topics were also collected according to the following keywords: TS = (scientometr* OR bibliometr* OR informetr* OR webometr*) in WoS All Databases for the period 1975–2017. The total number of papers was found to be 16,992. Accordingly, the number of papers in the elite subsets (see Table 1) is the following: P(πv) = 32, P(π) = 130, P(h) = 148, and P(HCP ≥ 100) = 282 (Table 6). The number of papers (16,992) in the SBIW field obtained by the mentioned keywords is naturally higher than that calculated as the sum of SBIW papers in the selected 11 periodicals (9994) (see Table 3). The difference in the number of total papers compared to the combined set of papers in the selected journals explains the difference in the size of the elite subsets.
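The sizes of the π- and πv-type elite subsets depend only on the total number of papers. The sketch below assumes the standard definitions P(π) = √P (the square root of the total paper count) and P(πv) = 10·lg P − 10 (Vinkler 2009, 2010b); the h- and HCP-type subsets, in contrast, depend on the actual citation distribution and cannot be derived from the set size alone.

```python
import math

def pi_subset_size(n_total: int) -> int:
    """Size of the P(pi) elite subset: the square root of the total
    number of papers, rounded to the nearest integer."""
    return round(math.sqrt(n_total))

def piv_subset_size(n_total: int) -> int:
    """Size of the P(pi_v) elite subset: 10 * lg(n_total) - 10, rounded."""
    return round(10 * math.log10(n_total) - 10)

# WoS SBIW set (16,992 papers) and Scopus SBIW set (18,206 papers):
print(piv_subset_size(16992), pi_subset_size(16992))  # -> 32 130
print(piv_subset_size(18206), pi_subset_size(18206))  # -> 33 135
```

The computed values match the subset sizes reported for both databases, which supports this reading of the definitions.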

The papers obtained were divided into two groups:

publications on SBIW topics published in scientometric journals (i.e. journals devoted partly or exclusively to scientometrics, bibliometrics, informetrics, and webometrics), and

publications in journals devoted primarily to life sciences, natural sciences, psychology, etc. (termed non-scientometric journals).
The data in Table 6 show that the majority of the highly cited scientometric results (i.e. the publications in the elite subsets) are published in non-scientometric journals (P(πv): 75.00%, P(π): 55.38%, P(h): 57.43%, P(HCP ≥ 100): 55.67%). The percentage share of highly acknowledged papers on SBIW topics in scientometric journals is only 25.00% in the elite set P(πv), and it does not exceed 45% in the other elite sets, either. It should be remembered that the journal publications in P(πv) represent the most highly cited share of papers.
The values of the mean citation rate (MCR) in Table 6 reveal that the scientometric papers in the elite subsets of the field published in non-scientometric journals are more frequently cited than those in journals devoted exclusively or partly to scientometrics. The difference between the MCR values of E/3: P(π), E/5: P(h), and E/6: P(HCP ≥ 100) is highly significant at p ≤ 0.02, whereas the difference between E/3, E/5, and E/6 on the one hand and E/1: P(πv) on the other is not (p > 0.10). The latter result may be attributed to the low number of papers (8) in the P(πv) set in scientometric journals. The citation rate of papers in the elite subset E/1: P(πv) in the scientometric journals of the field is 427.50. This value does not differ significantly from that calculated for the combined set of papers in the leading journals in Table 4 (471.87). The trend in the citation rate of papers in the elite sets (E/1, E/3, E/5, E/6) in Table 6 corresponds to the trend shown in Table 4 for E/1–E/6.
The reasons for the higher citedness of scientometric papers in non-scientometric journals were not analysed in detail. I assume, however, that in most cases those scientometric papers in non-scientometric journals become highly cited which attract interest not only from professional scientometricians but also from a wider public consisting of science policy makers, researchers in the natural and social sciences, and librarians and information scientists.
Table 6 also shows the share of papers in the elite subsets published by Price medallists: on average 22.55%. The share of papers written by Price medallists is significantly higher (on average 45.50%) in the elite publication subsets calculated from the papers in the 11 selected scientometric periodicals (Table 5). This finding is in accordance with the data in Table 2, which indicate that the decisive share of the highly cited papers of Price medallists is published in the selected scientometric journals: 185 out of 194 and 494 out of 530 in the P(π) and P(h) sets, respectively.
It is known that 1 per cent of the total publications is frequently regarded as the elite of a publication set. In the SBIW set consisting of 16,992 papers, 1% = 170 (rounded). This value lies between the limits P(h) = 148 and P(HCP ≥ 100) = 282. The mean citation rate (MCR) of the 170 papers is 275.22 (SD: 245.77). The number of papers authored by Price medallists is 44 (25.88%). Out of the 170 papers, 74 (43.53%) were found in scientometric and 96 (56.47%) in non-scientometric journals. These data are in accordance with those given in Table 6.
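The 1 per cent elite is obtained by ranking the papers by citation count and keeping the top rounded hundredth. A toy sketch (the citation counts below are illustrative, not the SBIW data):

```python
def top_percent_elite(citations, percent=1.0):
    """Return the citation counts of the most cited `percent` of papers,
    with the subset size rounded to the nearest integer, and their
    mean citation rate (MCR)."""
    k = round(len(citations) * percent / 100.0)
    elite = sorted(citations, reverse=True)[:k]
    mcr = sum(elite) / len(elite) if elite else 0.0
    return elite, mcr

# 200 toy papers with citation counts 1..200: the top 1% holds 2 papers.
elite, mcr = top_percent_elite(list(range(1, 201)))
print(len(elite), mcr)  # -> 2 199.5
```

For the SBIW set the same rounding gives round(16,992 · 0.01) = 170, the subset size used in the text.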
Comparison with the data from Scopus data bank
The publications in the SBIW field in 1975–2017 were also collected from the Scopus database using the same keywords as for WoS (scientometr* OR bibliometr* OR informetr* OR webometr*).
The total number of papers is 18,206 (as of 8 December 2018). Accordingly, the sizes of the elite subsets are: P(πv): 33, P(π): 135, P(h): 159, P(HCP ≥ 100): 348. The sizes of the subsets are somewhat larger than those for WoS (32, 130, 148, and 282, respectively).
The OM values for the elite subsets are the following: OM(P(πv)) = 93.85%, OM(P(π)) = 87.17%, OM(P(h)) = 82.39%. These data indicate that the overlap of the publications in the elite subsets decreases with the increasing number of publications in the corresponding set. Because of the high similarity, no detailed study of the elite subsets obtained from Scopus was made.
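As an illustration of such an overlap comparison, the sketch below computes the share of papers common to two elite subsets relative to the smaller subset. This is an assumed reading of OM; the exact definition used in the text may differ, and the paper identifiers are hypothetical.

```python
def overlap_measure(subset_a: set, subset_b: set) -> float:
    """Percentage of papers common to two elite subsets, relative to the
    smaller subset (assumed definition of OM; illustrative only)."""
    common = len(subset_a & subset_b)
    return 100.0 * common / min(len(subset_a), len(subset_b))

# hypothetical paper identifiers from two databases:
wos_elite = {"p1", "p2", "p3", "p4"}
scopus_elite = {"p1", "p2", "p3", "p5", "p6"}
print(overlap_measure(wos_elite, scopus_elite))  # -> 75.0
```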
Conclusions

The present study comprises:

a new method for determining leading journals in scientometrics,

calculating the size of elite subsets by different methods,

comparing the mean citation rate of journal papers in the elite subsets,

determining the frequency of journals in the elite subsets, and

calculating the rate of journal papers published by Price medallists.
For obtaining the leading journals in scientometrics, the publishing journals of the most cited publications of Price medallists were collected. It is assumed, namely, that the journals containing the most cited publications of eminent scientists in a field represent the most influential information channels of the field. The elite subsets of publications may be applied, among others, for:

revealing most relevant information channels, hot topics in science fields, most productive and highly cited scientists, laboratories, and countries in fields or on topics,

calculating different scientometric indicators from data of the elite subsets of different thematic units or organizations for evaluation purposes, and

calculating standards (e.g. the world average citation rate in different fields, the mean number of highly cited papers of university chairs in a given country, the mean citation rate of teams working in a field within a country, etc.) referring to elite subsets for evaluating the publication activity of individuals, teams, or countries comparatively.
The data in the present paper show that the majority of the highly influential scientometric results are published in non-scientometric journals (Table 6). Nevertheless, the high share of publications written by Price medallists in the elite subsets of the combined set of special journals in scientometrics (P(πv): 50.00%, P(π): 41.00%, and P(h): 42.34%) (Table 5) may indicate the eminent role of the journals devoted fully or partly to scientometrics. Among these periodicals, each of the “big four” (Scientometrics, Journal of Informetrics, JASIS(T), and Research Policy) publishes significantly more papers in the elite subsets than any of the other journals (Table 7). The share of the most influential papers is smaller for Journal of Information Science, Journal of Documentation, and Annual Review of Information Science and Technology than for the journals belonging to the “big four”. Some multidisciplinary journals (e.g. Nature, Science, Proceedings of the National Academy of Sciences of the United States of America) or journals in the life sciences (e.g. British Medical Journal, Journal of the American Medical Association) may also be important sources of scientometric information.
The results of the present paper may contribute to the verification of applying elite set indicators in scientometric assessments. The Price medallists are regarded as eminent scientists in the field and may be assumed to act as peers in the corresponding field. The relatively high share of papers published by Price medallists in the elite subsets of journals on scientometrics, bibliometrics, informetrics, and webometrics may prove the relative excellence of the papers in these subsets. Consequently, the two factors, namely the high share of papers of the peers in the elite subsets and the selection methods for obtaining the elite sets, may mutually validate each other.
It may be assumed that similar methods, detecting the publication strategy (PS) (Vinkler 1997) of highly influential scientists, may be applied for obtaining core publication sources and elite set standards in different fields. An appropriate selection method would be the calculation of the PS of highly decorated scientists or of persons acting as editors or editorial board members of journals in the corresponding field (Vinkler 2017b).
The present study reveals that the size of the elite subsets strongly depends on the selected method. The number of papers in the subsets increases, in general, in the rank P(πv) < P(π) < P(h). The mean citation rate of the publications in the subsets was found to decrease with the increasing number of papers.
One of the main goals of evaluative scientometrics is to provide science managers and science policy makers with methods and indicators applicable in practice. Several papers in the literature (e.g. Bornmann et al. 2011; Wildgaard et al. 2014; Leydesdorff and Wagner 2014; Vinkler 2017a, b) indicate that the assessment of the publications of individuals, teams, or countries may be built on indices obtained from elite subsets. The selection of an appropriate method for obtaining elite subsets depends strongly on the size and topic of the publication set to be analysed. Earlier experience (Vinkler 2010a, b, 2017a) indicates that such an analysis preferably requires at least 25–30 journal papers. Therefore, the application of the πv-method is questionable if the size of the set to be analysed is lower than the mentioned limit.
Both the h- and the π-index strongly depend on the field. Therefore, for comparing publication activity in different fields through elite subsets, the application of field-independent methods is recommended. Calculating the citation threshold of high citedness as, e.g., the number of citations (Ci) equal to or higher than the corresponding field average (Ci ≥ MCR) or its square (Ci ≥ MCR^{2}) may be recommended. Naturally, if the skewness of citedness is high, median values may preferably be used. As a field-independent citation limit, the most frequently cited 0.1, 1, or 10 per cent of the total papers may also be applied.
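The field-average thresholds above can be sketched as follows (the citation counts are illustrative; `mcr_power=1` selects Ci ≥ MCR, `mcr_power=2` selects Ci ≥ MCR²):

```python
def elite_by_mcr_threshold(citations, mcr_power=1):
    """Citation counts at or above the field mean (MCR), or its square,
    as a field-dependent threshold for high citedness."""
    mcr = sum(citations) / len(citations)
    return [c for c in citations if c >= mcr ** mcr_power]

cits = [0, 1, 2, 3, 4, 10, 30, 100]     # toy field; MCR = 18.75
print(elite_by_mcr_threshold(cits, 1))  # -> [30, 100]
print(elite_by_mcr_threshold(cits, 2))  # -> [] (MCR^2 = 351.5625)
```

With a strongly skewed toy distribution, the MCR² threshold already excludes every paper, which illustrates why the median may be preferable when skewness is high.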
The similarly high share of papers published by Price medallists in the elite publication subsets (Tables 5, 6) indicates that, for calculating scientometric indicators, any of the subsets presented here may be applied, except for the P(πv) method because of the relatively small size of the set. Accordingly, for selecting an appropriate elite subset for evaluation, primarily the following factors should be taken into consideration: the size of the set to be studied, the thematic coverage of the set applied as standard, and the aims of the assessment.
Acknowledgements
Open access funding provided by MTA Research Centre for Natural Sciences (MTA TTK).
References
Aksnes, D. W. (2003). Characteristics of highly cited papers. Research Evaluation, 12, 159–170.
Bornmann, L., Leydesdorff, L., & Mutz, R. (2013). The use of percentiles and percentile rank classes in the analysis of bibliometric data: Opportunities and limits. Journal of Informetrics, 7, 158–165.
Bornmann, L., & Marx, W. (2014). How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics, 98, 487–509.
Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H.-D. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5, 346–359.
Braun, T., Bujdosó, E., & Schubert, A. (1987). Literature of analytical chemistry: A scientometric evaluation. Boca Raton, FL: CRC Press.
Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69, 131–152.
Egghe, L. (2007). Dynamic h-index: The Hirsch index in function of time. Journal of the American Society for Information Science and Technology, 58, 452–454.
Erfanmanesh, M., & Moghiseh, Z. (2019). How winning an international scientific award affects publishing behaviour of laureates: The case of the Derek de Solla Price medal in scientometrics. Publishing Research Quarterly, 35, 201–212.
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569–16572.
Hood, W. W., & Wilson, C. S. (2001). The literature of bibliometrics, scientometrics, and informetrics. Scientometrics, 52, 291–314.
Iglesias, J. E., & Pecharroman, C. (2007). Scaling the h-index for different scientific ISI fields. Scientometrics, 73, 303–320.
Leydesdorff, L. (2012). Alternatives to the journal impact factor: I3 and the top-10% (or top-25%?) of the most highly cited papers. Scientometrics, 92, 355–365.
Leydesdorff, L., & Wagner, C. S. (2014). The European Union, China, and the United States in the top-1% and top-10% layers of most frequently cited publications: Competition and collaborations. Journal of Informetrics, 8, 606–614.
Martín-Martín, A., Orduna-Malea, E., Ayllón, J. M., & Delgado López-Cózar, E. (2016). The counting house: Measuring those who count. Presence of bibliometrics, scientometrics, informetrics, webometrics and altmetrics in the Google Scholar Citations, ResearcherID, ResearchGate, Mendeley & Twitter. EC3 Working Papers, 21.
Moaghali, A., Alijani, R., Karami, N., & Khasseh, A. (2011). Scientometric analysis of the scientometric literature. International Journal of Information Science and Management, 9, 19–31.
Plomp, R. (1990). The significance of the number of highly cited papers as an indicator of scientific prolificacy. Scientometrics, 19, 185–197.
Schreiber, M. (2010). Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers. Annalen der Physik (Berlin), 52, 536–554.
Schreiber, M., Malesios, C. C., & Psarakis, S. (2012). Exploratory factor analysis for the Hirsch index, 17 h-type variants, and some traditional bibliometric indicators. Journal of Informetrics, 6, 347–358.
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628–638.
Todeschini, R., & Baccini, A. (2016). Handbook of bibliometric indicators—Quantitative tools for studying and evaluating research (pp. 1–512). Weinheim: Wiley-VCH.
Vinkler, P. (1997). Relation of relative scientometric impact indicators. The Relative Publication Strategy index. Scientometrics, 40, 163–169.
Vinkler, P. (2001). An attempt for defining some basic categories of scientometrics and classifying indicators of evaluative scientometrics. Scientometrics, 50, 539–544.
Vinkler, P. (2007). Eminence of scientists in the light of the h-index and other scientometric indicators. Journal of Information Science, 33, 481–491.
Vinkler, P. (2009). The π-index. A new indicator for assessing scientific impact. Journal of Information Science, 35, 602–612.
Vinkler, P. (2010a). The evaluation of research by scientometric indicators (pp. 1–313). Oxford: Chandos Publishing.
Vinkler, P. (2010b). The πv-index: A new indicator to characterize the impact of journals. Scientometrics, 82, 461–475.
Vinkler, P. (2013). Quantity and impact through a single indicator. Journal of the American Society for Information Science and Technology, 64, 1084–1085.
Vinkler, P. (2017a). The size and impact of the elite set of publications in scientometric assessments. Scientometrics, 110, 163–177.
Vinkler, P. (2017b). Core indicators and professional recognition of scientometricians. Journal of the Association for Information Science and Technology, 68, 234–242.
Wildgaard, L., Schneider, J. W., & Larsen, B. (2014). A review of the characteristics of 108 author-level bibliometric indicators. Scientometrics, 101, 125–158.
Zhou, C., Kong, X., & Lin, Z. (2019). Research on Derek de Solla Price medal prediction based on academic credit analysis. Scientometrics, 118, 159–175.
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.