The folly of impact factors
The impact factor of a journal is based on the number of articles it publishes and the number of citations those articles receive in the two calendar years that follow the year of publication. The idea is that citations reflect the impact of an article, an idea that holds some truth. Because a high impact factor is seen as a mark of quality, journals and authors strive for a high impact factor. This has resulted in a sort of folie à deux.
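In formula form, the standard (Thomson Reuters/Clarivate) definition, shown here for an example year, is:

```latex
\[
\mathrm{IF}_{2015}
  = \frac{\text{citations received in 2015 by items published in 2013 and 2014}}
         {\text{citable items (articles, reviews) published in 2013 and 2014}}
\]
```

Note that the numerator counts citations to anything in the journal, while the denominator counts only "citable items"; several of the tricks described below exploit this asymmetry.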
Journals have used many tools to increase the impact factor that have nothing to do with the quality of original research: publishing a review issue (reviews attract many citations but contain no original data, so they say nothing about the quality of the science), publishing reviews in January (so that almost an extra year of citations counts towards the impact factor), pushing authors to add citations to the journal before accepting an article, and pushing authors to put their data in a letter (a letter does not count in the number of articles, but its citations do), to name a few. I recently discovered a few new ones.
To my surprise, an article of mine was put online immediately after acceptance, without editing; it was just the Word file. Although it is nice to have one's data out rapidly, editing normally takes only a week or so and yields a far more readable article. It does not seem this was done for the reader; I believe it was done to collect more citations before publishing the full article, thereby enhancing the impact factor.

I noticed another thing when I submitted my list of 2014 publications for my university's annual report: it was the lowest in many years. Indeed, I am growing older, less creative and probably less productive; nothing to worry about (young colleagues will carry science forward). But when I looked more carefully, I found that I had many articles published online but not yet in print. Had I been more active late in the year than early? Then I saw that the output of the whole Department had decreased enormously, and one of my colleagues told me that she had an already highly cited article online since January 2014, but in print only in January 2015. What happens is this: journals put articles online and arrange to have the most cited ones appear in the next January issue, enhancing their impact factor substantially (but not, of course, the quality of the journal!).
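A back-of-the-envelope calculation, with purely illustrative numbers, shows the size of this effect. Suppose a journal publishes 100 citable items per year and normally collects about 200 citations in the two-year counting window, giving an impact factor of 2.0. A strong article that enters its counting window "cold" might gather 15 countable citations; the same article, already online and visible for a year before its official January publication date, might gather 40:

```latex
\[
\text{cold start: } \frac{200 + 15}{100} = 2.15
\qquad
\text{a year online first: } \frac{200 + 40}{100} = 2.40
\]
```

One well-timed article can thus move the impact factor by a tenth of a point or more, with no change whatsoever in the quality of what the journal publishes.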
Last year we learned that much of the work published in Science and Nature cannot be reproduced, yet these journals are still seen as important for scientific reputations because of their high impact factors. I guess several of you have had the same experience I have: peer review at Science and Nature focuses more on newsworthiness than on quality; if one simply looks at the quality of the histology in those journals, it is clear that there is little peer review of the quality of the data. In my experience, peer review in specialist journals is far better than in the high-impact journals. I am actually proud of the peer review for the Journal of Hematopathology: we have a very high rejection rate, and the reviewers' comments go into great detail, thus improving the quality of the accepted articles.
How should we measure the quality of science if journal impact factors are not reliable? Not an easy question, but I will make some suggestions in the next editorial.