This past July, an international team of researchers and publishers released a proposal that academic journals share their citation distributions, to encourage authors, publishers and institutions to look beyond a single numerical metric for an entire journal as a proxy for the research quality of the individual articles in it [1]. We embrace this effort and include here the citation distribution that contributed to this Journal’s 2015 Thomson Reuters Impact Factor.

A journal impact factor (JIF) is a simple ratio. The numerator is the number of citations a journal receives in a particular calendar year to ‘citable items’ with a publication date from the previous two years (the ‘citation window’). The denominator is the number of citable items published in that citation window. Citable items include reviews and original research, which Thomson Reuters classify as ‘articles’. Editorials, such as this one, are classified as ‘editorial material’ and are not counted in the denominator. Although the classification protocol has been defined [2], there is still a lot of grey area [3], particularly for publications that fall somewhere between scientific journals and society membership magazines.
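As a concrete illustration, the ratio can be written out in a few lines of Python. This is a minimal sketch, not Thomson Reuters’ actual calculation: the citation count is an invented placeholder, while the 1812 citable items correspond to the 2013–2014 figures given below (1768 articles plus 44 reviews).

```python
# Minimal sketch of the JIF ratio described above.
# The citation count (4000) is a made-up placeholder, not the Journal's real figure;
# 1812 citable items = 1768 articles + 44 reviews published in 2013-2014.

def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations / citable_items

print(f"JIF = {journal_impact_factor(4000, 1812):.2f}")  # JIF = 2.21
```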

In 2013 and 2014, the Journal of Materials Science published 1818 items, comprising 1768 articles (97.2 % of content), 44 reviews (2.4 %), 5 editorials and 1 correction. The distribution of the citations that contributed to our 2015 JIF is shown in Fig. 1. Citations to reviews and other items, shown in orange in the plot, contributed about 10 % of our 2015 JIF.

Figure 1 Distribution of citations from articles published in 2015 to articles published in the Journal of Materials Science in 2013 and 2014 (volumes 48 and 49). The orange segments at the top of each column represent the contribution from document types such as reviews or editorial content

The long tail clearly highlights the problem of using an arithmetic mean to describe such a skewed distribution. About 70 % of articles published in 2013 and 2014 in the Journal of Materials Science were cited fewer times than the value of the 2015 JIF. Table 1 shows that this is consistent with three other materials science journals, regardless of JIF. It is also consistent with the observation from Larivière and co-workers in their survey of 11 other journals, where 65–75 % of citable items had fewer citations than the JIF [1].

Table 1 Percentage of citable items published in four materials science journals with fewer citations than the value of their 2015 Thomson Reuters journal impact factor
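The effect behind Table 1 is easy to reproduce. The sketch below draws a long-tailed set of citation counts (the log-normal model and its parameters are assumptions chosen only for a Fig. 1-like shape, not fitted to any journal’s data) and reports what share of items falls below the mean.

```python
# Sketch: on a long-tailed distribution, most items sit below the mean.
# The log-normal draw is an assumption chosen for its Fig. 1-like shape;
# the parameters are arbitrary, not fitted to any journal's data.
import random

random.seed(0)
citations = [int(random.lognormvariate(0.7, 1.0)) for _ in range(1812)]

mean_citations = sum(citations) / len(citations)  # the JIF is a mean of this kind
share_below = sum(c < mean_citations for c in citations) / len(citations)
print(f"mean = {mean_citations:.2f}, items below the mean = {share_below:.0%}")
```

With these parameters the share below the mean typically lands in the 65–75 % band reported above; the point is the shape of the distribution, not the particular numbers.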

Also in line with the findings from Larivière and co-workers, we see that the cumulative citation distribution functions for all these selected journals are nearly identical (Fig. 2). Each follows a pattern close to a Pareto distribution, like those used to model income distributions. This power-law probability distribution is most familiar in the specific case known as the ‘Pareto principle’ or ‘80–20 rule’: if the rule holds, 80 % of the world’s income goes to 20 % of the population. For article citations, we see something closer to a ‘70–30 rule’: 70 % of a journal’s citations come from about 30 % of the published articles.

Figure 2 The cumulative percentage of articles published in four materials science journals in 2013 and 2014, and the citations to them in 2015. Modelled after Larivière et al. [1]
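Checking a ‘70–30 rule’ against any set of per-article citation counts is straightforward: rank the articles by citations and walk down the cumulative total. A minimal sketch, with an invented list of counts:

```python
# Sketch: how many of the most-cited articles are needed to account for
# a target share (70 %) of all citations. The counts below are invented.
def articles_for_citation_share(citations, target=0.70):
    ordered = sorted(citations, reverse=True)      # most-cited first
    total, running = sum(ordered), 0
    for rank, c in enumerate(ordered, start=1):
        running += c
        if running >= target * total:
            return rank / len(ordered)             # fraction of articles needed

example = [20, 15, 12, 10, 9, 8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]
print(f"{articles_for_citation_share(example):.0%} of articles "
      f"carry 70 % of the citations")              # 30% for this example
```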

If all journals have a similar citation distribution regardless of the size of their JIF, then what is the problem with using this metric? The danger is the prejudice that this oversimplification engenders. By judging the quality of an unread article from the journal in which it appeared, we risk ignoring relevant work and falling out of date with our own field. When a JMS Editor asked one contributor why he had not cited a relevant paper, his reply was ‘it was not in a high-impact journal’!

There is an additional corrosive effect on academic publishing, illustrated in Fig. 3: the ‘rich’ get richer and the ‘poor’ get poorer. Histograms of JIFs for materials science journals in any given year consistently fit a log-normal distribution (top panel). If you trace back the JIFs of the journals in each grouping, as illustrated in the bottom panel, you see two clear trends.

Figure 3 Top panel: histogram showing the 2015 JIF (log scale) of journals categorised as ‘Materials Science, Multidisciplinary’ by Thomson Reuters. The grouping that currently contains the Journal of Materials Science is marked with an orange asterisk. Bottom panel: the trend in JIF values in this category over the past decade, plotted as the mean impact factor for the journals in each bar in the top panel. The median and mean JIF values for all journals in a given survey year are plotted as dashed and solid black lines, respectively

First, there is a steady increase in impact factors. The median impact factor has gone up by 0.6 (66 %) to 1.6 over the decade, while the mean (average) impact factor has increased by 1.2 (76 %) to 2.9 in the same period.
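As a quick plausibility check (a sketch using only the rounded figures quoted above), one can back-calculate the implied values from a decade ago:

```python
# Back-of-envelope check of the quoted growth figures, using only the
# rounded values from the text above.
for label, now, rise, quoted in [("median", 1.6, 0.6, "66 %"),
                                 ("mean",   2.9, 1.2, "76 %")]:
    then = now - rise
    print(f"{label}: {then:.1f} -> {now:.1f}, growth {rise / then:.0%} "
          f"(quoted: {quoted})")
# median: 1.0 -> 1.6, growth 60% (quoted: 66 %)
# mean:   1.7 -> 2.9, growth 71% (quoted: 76 %)
# The gaps are rounding artefacts: the quoted percentages come from the
# unrounded underlying JIF values.
```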

Second, the journals in groupings with JIFs below the median value (the ‘poor’) have seen their JIFs increase by less than this and, in some cases, drop. This is like having a savings account with an interest rate below the inflation rate: though the balance looks bigger, its value is less than it was. For the journals with impact factors above 4 (the ‘rich’), which comprise about a sixth of the titles in the ‘Materials Science, Multidisciplinary’ category, the story is the opposite. Five of these titles have seen their impact factors at least double in the last 10 years, and each of these groupings has increased faster than the mean in absolute terms.

The scale in Fig. 3 is logarithmic: each grouping represents an impact factor increase of a third. This obscures the underlying stratification of JIFs in absolute, rather than relative, terms. At the high end, the gap is vast. The 2015 JIF for Nature Materials (red column) is 20 units higher than that of Advanced Materials (fuchsia column), even though they are second-nearest neighbours.
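For concreteness, binning with a constant ratio of 4/3 between successive edges looks like the sketch below; the starting edge and bin count are assumptions for illustration, not the values behind Fig. 3.

```python
# Sketch of the logarithmic binning described above: each bin edge is a
# third larger than the previous one (ratio 4/3). The starting edge and
# bin count are assumptions for illustration, not the values behind Fig. 3.
start, ratio, n_bins = 0.1, 4 / 3, 22
edges = [start * ratio**i for i in range(n_bins + 1)]
print([round(e, 2) for e in edges[:6]])   # [0.1, 0.13, 0.18, 0.24, 0.32, 0.42]
# On a log axis every bin looks equally wide, so a 20-unit gap at the top
# end appears no larger than a 2-unit gap further down the scale.
```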

The glorification of the JIF also provides an incentive for researchers to oversell their results. This practice impoverishes our talent base and sets a poor example for the next generation of researchers. We are tacitly directing early career researchers to follow hot topics, rather than to ask interesting questions regardless of the flavour of the month. Or, as our Editor-in-Chief has put it, the current situation is ‘like kindergarten kids playing soccer: you know where the ball is because that’s where all the kids are’.

Targeting journals based on impact factor, rather than remit or quality of review, increases the peer-review burden on editors and referees alike. We all know that it is easy to resubmit the same article—often unmodified—to the next journal down the impact factor cascade and have another roll of the dice with the editors and referees. I suspect most academics feel they are approaching ‘peer review burnout’ because of this. We see the effects in the diminishing quality of reviews from time-strapped researchers.

So how do we get around this well-established problem? My next editorial will look at some of the emerging tools that offer other measures of impact at the article and author level, including CASRAI’s CRediT taxonomy initiative and Project COUNTER.