Since their introduction, citation analyses have simply counted the number of times papers have been cited in citation indexes such as Web of Science (WoS, Clarivate Analytics) or Scopus (Elsevier). Bu et al. (2019) call this typical form of citation analysis one-dimensional impact measurement. Recently, however, a trend has appeared in bibliometrics that complements this one-dimensional form with a multi-dimensional one. On the one hand, this trend comprises citation-context analyses, which can be undertaken more easily since data from PLOS, PubMed Central, Elsevier, and Microsoft Academic were made available (Bertin et al. 2016). Citation context data are used to differentiate between cited papers that are essential and those that are peripheral for the citing authors (Bornmann and Daniel 2008), or to explore the certainty of the knowledge reported in cited papers (Small 2018). On the other hand, the multi-dimensional form of impact measurement comprises new indicators that use techniques such as co-citation analysis and bibliographic coupling to reveal the breadth and depth of impact (Bu et al. 2019).

Wu et al. (2019) recently proposed a new (multi-dimensional) citation-based index of ‘disruptiveness’ that builds on the dynamic network measure of technological change introduced by Funk and Owen-Smith (2017). Azoulay (2019) describes the index aptly and simply as follows: “when the papers that cite a given article also reference a substantial proportion of that article’s references, then the article can be seen as consolidating its scientific domain. When the converse is true—that is, when future citations to the article do not also acknowledge the article’s own intellectual forebears—the article can be seen as disrupting its domain”. Wu et al. (2019) present several statistical analyses (e.g. a comparison of the index with expert notions of disruption and development) which suggest that the disruption index does in fact measure what it is intended to measure: new insights, ideas, or methods that disrupt the cumulative development of knowledge in a scientific field.
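
To make this definition concrete, the following minimal Python sketch (our own illustration, not code from Wu et al. 2019) computes the index for a single focal paper from two sets of subsequently published papers. The formula is D = (n_i - n_j) / (n_i + n_j + n_k), where n_i papers cite the focal paper but none of its cited references, n_j papers cite both the focal paper and at least one of its cited references, and n_k papers cite the cited references but not the focal paper; the function and variable names are ours.

```python
def disruption_index(citing_focal, citing_refs):
    """Disruption index of a focal paper, following Wu et al. (2019).

    citing_focal: set of papers that cite the focal paper
    citing_refs:  set of papers (published after the focal paper) that
                  cite at least one of the focal paper's cited references
    """
    n_i = len(citing_focal - citing_refs)  # cite the focal paper only
    n_j = len(citing_focal & citing_refs)  # cite the focal paper and its references
    n_k = len(citing_refs - citing_focal)  # cite the references, not the focal paper
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# A disruptive pattern: no citer acknowledges the focal paper's forebears.
print(disruption_index({"p1", "p2", "p3"}, set()))         # 1.0
# A consolidating pattern: citers also cite the focal paper's references.
print(disruption_index({"p1", "p2"}, {"p1", "p2", "p3"}))  # -0.67
```

The index thus ranges from -1 (a fully consolidating paper) to +1 (a fully disruptive one), which puts the small positive values discussed below into perspective.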

Table 1 shows the papers published in Scientometrics that have the highest (positive) disruption index values. The bibliometric data used are from an in-house database developed and maintained by the Max Planck Digital Library (MPDL, Munich) and derived from the Science Citation Index Expanded (SCI-E), the Social Sciences Citation Index (SSCI), and the Arts and Humanities Citation Index (AHCI) prepared by Clarivate Analytics. We only included papers published between 2000 and 2010 to ensure a long citation window (from the date of publication up to 2017). Because the in-house database only covers publications from 1980 onwards, and only linked cited references can be considered when calculating the disruption index, we focused on papers published in 2000 or later. To ensure a sufficient number of cited references and citations for calculating the disruption index, we only considered papers with at least 10 linked cited references and at least 10 citations.
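
Purely for illustration, the selection criteria just described might look as follows on a tabular export of the bibliometric data; the file and column names are hypothetical and do not reflect the actual MPDL schema.

```python
import pandas as pd

# Hypothetical export of the in-house database (illustrative names only).
papers = pd.read_csv("scientometrics_papers.csv")

sample = papers[
    papers["pub_year"].between(2000, 2010)   # long citation window up to 2017
    & (papers["n_linked_cited_refs"] >= 10)  # at least 10 linked cited references
    & (papers["n_citations"] >= 10)          # at least 10 citations
]
```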

Table 1 Thirty articles published in Scientometrics between 2000 and 2010 with the highest (positive) disruption index

Considering the disruption index values in Table 1, it is noticeable that most values are relatively small and close to zero. The paper by Bordons et al. (2002) has the highest index value (0.13). The paper deals with the use of the popular Journal Impact Factor (JIF) in research evaluation; it focuses on the problematic use of the metric in the analysis of peripheral countries whose national journals are not covered in the WoS database. Its index value is much lower than that of the paper by Bak et al. (1987), which Wu et al. (2019) mention as a paper with an exceptionally high disruption index (0.86). However, the paper by Bak et al. (1987) was published in the 1980s, which may be an advantage when it comes to attaining high disruption index values. In order to compare the disruption index values in Table 1 with reference values, we generated the index values for all papers in the database published between 2000 and 2010 (again restricted to papers with at least 10 linked cited references and at least 10 citations). The threshold for inclusion among the top 1% of papers with the highest positive disruption index is 0.0268293. This means that the first 14 papers in Table 1 are among the top 1% most disruptive papers in the publication years considered in this study.
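
The reported threshold is simply the 99th percentile of the comparison distribution. A sketch of this computation on synthetic stand-in values (the real values come from the in-house database described above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the real comparison set: disruption values concentrated
# near zero (purely synthetic, for illustration only).
d_values = rng.normal(loc=0.0, scale=0.01, size=100_000)

threshold = np.percentile(d_values, 99)  # cutoff for the top 1%
top1 = d_values[d_values >= threshold]   # the most disruptive papers
print(threshold, top1.size)
```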

The concentration of disruption values around zero in our set of Scientometrics papers, as well as the percentiles of the comparison set, indicates that notable variance in the disruption index can only be found for a small proportion of papers. The index therefore appears particularly suited to discriminating between papers at the extreme ends of its distribution. One factor behind the high concentration around zero is that the index value is dominated by the cited side of the focal paper, more specifically by the number of papers citing the focal paper’s cited references (n_k in the sketch above). This implies that the referencing behavior of a paper’s authors has a strong effect on its disruption index and has to be taken into account when using the index to compare the disruptiveness of different papers. One possibility in this respect would be to field-normalize the index. Beyond that, a further examination of how the citation behavior of the focal paper’s authors affects the disruptiveness values (e.g. the influence of the number of cited references) would allow a better interpretation of the index values.
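
One conceivable implementation of such a field normalization, offered here only as a sketch and not prescribed by the analyses above, would be percentile ranks computed within fields, analogous to percentile-based citation indicators; the field labels and column names below are hypothetical.

```python
import pandas as pd

# One row per paper; "field" and "disruption" are illustrative columns.
df = pd.DataFrame({
    "field": ["A", "A", "A", "B", "B", "B"],
    "disruption": [0.01, -0.02, 0.10, 0.001, 0.0, 0.03],
})

# Percentile rank of each paper's disruption value within its own field,
# so that values become comparable across fields with different
# referencing and citing conventions.
df["disruption_pct"] = df.groupby("field")["disruption"].rank(pct=True)
print(df)
```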