Scientometrics, Volume 121, Issue 1, pp 387–398

Corrective factors for author- and journal-based metrics impacted by citations to accommodate for retractions

  • Judit Dobránszki
  • Jaime A. Teixeira da Silva
Open Access Article

Abstract

Citation-based metrics are frequently used to evaluate the level, or quality, of a researcher, or their work, often as a function of the ranking of the journal in which they publish, and broadly tend to be divided into journal-based metrics (JBMs) and author-based metrics (ABMs). Despite wide knowledge of the gaming of such metrics, in particular the Clarivate Analytics journal impact factor (JIF), no suitable substitute concept has yet emerged, nor has any corrective measure been developed. In a post-publication peer review world of increasing retractions, and within a framework of open science, we propose correction factors for JBMs and ABMs that take into account retractions. We describe ways to correct the JIF, CiteScore, the 5-year Impact Factor, Immediacy Index, Cited Half-Life, Raw Impact per Paper and other JBMs (Eigenfactor Score and Article Influence Score) as well as the h-index, one of the most widespread ABMs, depending on the number of retractions for that journal or individual, respectively. The existence of such corrective factors could make the use of these metrics more transparent, and might allow them to be used in a world that is adapting to an increase in retractions and corrective measures to deal with erroneous scientific literature. We caution that such correction factors should be used exclusively as such, and should not be viewed, or used, as punitive factors.

Keywords

Bibliometrics · Citations · CiteScore · Clarivate Analytics · Correction · h-index · Journal impact factor · Reference validation · Retraction · Scopus · Web of Science

Academic metrics and their potential distortion by citing retracted literature

There are many metrics in science that have been used to quantify quality and impact, and thereby to measure research impact and evaluate the scientific value of a journal, a researcher, or even an institution or research group (Alonso et al. 2009; Kim and Chung 2018). Academic metrics can be fundamentally divided into two groups: journal-based metrics (JBMs) and author-based metrics (ABMs). The most widespread JBM is still the Clarivate Analytics journal impact factor (JIF) (Garfield 1972), for example in Sweden (Hammarfelt and Rushforth 2017), despite its limitations (Teixeira da Silva and Dobránszki 2017a), abuses (Teixeira da Silva and Bernès 2018), unwarranted use in academic policy and decision making (Paulus et al. 2018), and the need to complement it with corrective measures (Winkmann and Schweim 2000; Aixelá and Rovira-Esteva 2015; Liu et al. 2016). CiteScore was introduced in December 2016 by Elsevier as an alternative to, and direct competitor of, the JIF. It is more transparent than the JIF and plays an increasing role in journal rating and evaluation because it bridges some of the JIF's limitations: it is freely accessible, uses a larger database [Scopus vs. Web of Science (WoS)] and applies a longer evaluation period than the JIF (Courtney 2017; Teixeira da Silva and Memon 2017). What is common to the JIF and CiteScore is that both rely on citations when evaluating scientific impact, prestige and quality, so they are citation-based metrics, or citation impact indicators (Waltman 2016; Walters 2017), just like the h-index (Hirsch 2005), the most commonly used ABM, whether in its original form or in its modified forms and derivatives (Alonso et al. 2009; Hammarfelt and Rushforth 2017), despite some flaws and limitations (Teixeira da Silva 2018). Since such metrics are used by funding bodies and tenure committees (Roldan-Valadez et al. 2019), corrections that accommodate changes in the publishing landscape are needed.

A spike in the number of retractions (Grieneisen and Zhang 2012; Fanelli 2013; Steen et al. 2013; Kuroki and Ukawa 2018), in large part a result of post-publication peer review, for example anonymous and named commentary on PubPeer leading to retractions (Coudert 2019), as well as an increase in retraction policies (Resnik et al. 2015a), has also cast light on the heightened risks of post-retraction citations, including inflated and undeserved citations (Teixeira da Silva and Bornemann-Cimenti 2017). The numbers themselves may appear insignificant: for example, 331 retracted papers from a pool of 1,114,476 papers published in the fields of chemistry, materials science, and chemical engineering in 2017 and 2018, i.e., a rate of about 3 retractions per 10,000 publications, or 0.03% (Coudert 2019). Among 16 open access mega journals, PLOS ONE displayed the highest rate of corrections (3.16%), but a much lower level of retractions (0.023%), the same as Scientific Reports (Erfanmanesh and Teixeira da Silva 2019). In some cases, where the literature has not been sufficiently corrected, possibly as a result of the variation in retraction policies even among leading publishers (Resnik et al. 2015b; Teixeira da Silva and Dobránszki 2017b), citations continue to be assigned to faulty or error-laden papers (Teixeira da Silva and Dobránszki 2018a), further accentuating the need for corrective factors or measures for JBMs and ABMs. This is because a number of highly cited papers continue to be cited even though they have been retracted (Teixeira da Silva and Dobránszki 2017c). Kuroki and Ukawa (2018) noted that about 10% of retractions resulted from the 1–2% of retracting authors who had retracted five or more papers. There is also a body of retractions based on unintentional error (Hosseini et al. 2018), reinforcing the notion that retraction is perceived as punitive, and is thus stigmatized, with a resulting reluctance to correct the literature (Teixeira da Silva and Al-Khatib 2019). Finally, a whole host of new and experimental corrective measures, such as retract and replace, is complicating the publishing landscape (Teixeira da Silva 2017), including how to deal with citations to papers that are retracted and then republished, a topic that merits greater analysis.

Citation impact indicators, including citation-based JBMs and ABMs, may be distorted if the scientific literature and databases that they are based on contain citations to retracted papers. How does a journal accommodate, for example, retractions that are based on fake peer reviews (Qi et al. 2017)? Using false, skewed or distorted indicators in academic rating and evaluation may result in unfair rewards both for journals whose papers are cited after retraction (i.e., JBMs) and for academics whose retracted papers are cited undeservedly after retraction, i.e., ABMs (Teixeira da Silva et al. 2016; Bar-Ilan and Halevi 2017, 2018). High-profile journals with high JIFs have higher rates of retraction.1 Therefore, if retracted papers are cited, there is a demand not only for correcting the downstream literature (Teixeira da Silva 2015) but for correcting the academic metrics that cite them, to regain or reflect their true value.

In this paper, we propose and describe simple models by which citation-based JBMs and ABMs can be corrected by adjusting their equations to account for citations to retracted literature. We apply our models to the two most widespread JBMs, the JIF and CiteScore, to the h-index, the most widely used ABM for evaluating the scientific achievement of a researcher, and to additional JBMs, such as the WoS-based Eigenfactor Score (ES) and Article Influence Score (AIS) and the Scopus-based Raw Impact per Paper (RIP). Moreover, we show the practical use of this correction on the JIFs of two actual cases. We caution readers and others who may eventually apply these corrective measures, and/or derivatives or improvements thereof, not to use them as punitive measures or shaming tools, but rather as academic tools for the pure correction of JBMs and ABMs, so as to improve the fairness of citation of the scientific literature by taking into account retractions and citations to retracted papers.

Proposals to correct citation-based journal- and author-based metrics

Correction of two journal-based metrics, JIF and CiteScore

In our recent paper using JIF as a model metric, we briefly described a prototype concept of how to restore academic metrics that may be distorted by unrewarded citations (Teixeira da Silva and Dobránszki 2018b). We introduced the corrected JIF (cJIF) and described its theoretical basis. The correction was based on the use of a corrective factor (c) which measures the ratio of number of citations to retracted papers to the number of published citable items, as follows:
$$ c = \frac{rc}{n} $$
where rc indicates the number of citations to retracted papers, while n indicates the number of total (citable) publications in a journal in the previous 2 years. The cJIF is calculated accordingly as cJIF = JIF(1 − c). In extreme cases, if rc equals or exceeds the number of citable published items (n), then 1 − c ≤ 0 and the cJIF falls to zero or below; a negative value is almost never attained, except in extreme cases of retractions (Table 1). For practical purposes, we recommend that negative values (** in Table 1) be assigned a cJIF of 0 (equivalent to losing the JIF). Separate examples are also provided in Teixeira da Silva and Dobránszki (2018b).
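As a sketch, the cJIF calculation with the recommended zero floor can be expressed in a few lines of Python (the function and variable names are ours, introduced only for illustration):

```python
def corrected_jif(jif: float, n: int, rc: int) -> float:
    """Corrected JIF: cJIF = JIF * (1 - c), with c = rc / n.

    jif -- the journal's original impact factor
    n   -- citable items published in the previous 2 years
    rc  -- citations received by the journal's retracted papers
    Negative results are floored at 0 (the journal loses its JIF).
    """
    c = rc / n                      # corrective factor
    return max(jif * (1 - c), 0.0)  # clamp extreme cases to zero

# Scenarios from Table 1: a "realistic" one and a "highly unlikely" one
small = corrected_jif(0.1, 100, 5)       # minor adjustment (0.095)
floored = corrected_jif(2.5, 500, 1000)  # rc > n, so cJIF is floored at 0.0
```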
Table 1

Hypothetical outcomes to the cJIF, a corrective measure to correct a journal’s JIF when that JIF is based on citations to literature that it has retracted

| JIF rank | JIF | n | rc | c | cJIF | Realistic level* |
|----------|-----|------|--------|-------|----------|------------------|
| Low | 0.1 | 100 | 5 | 0.05 | 0.095 | Realistic |
| | 0.1 | 100 | 10 | 0.1 | 0.09 | Unrealistic |
| | 0.5 | 250 | 5 | 0.02 | 0.49 | Realistic |
| | 0.5 | 250 | 15 | 0.06 | 0.47 | |
| | 0.5 | 250 | 25 | 0.1 | 0.45 | |
| | 0.5 | 250 | 50 | 0.2 | 0.4 | |
| | 0.5 | 250 | 100 | 0.4 | 0.3 | Unrealistic |
| | 1 | 500 | 5 | 0.01 | 0.99 | Realistic |
| | 1 | 500 | 10 | 0.02 | 0.98 | |
| | 1 | 500 | 50 | 0.1 | 0.9 | |
| | 1 | 500 | 100 | 0.2 | 0.8 | |
| | 1 | 500 | 250 | 0.5 | 0.5 | |
| | 1 | 500 | 500 | 1 | 0 | Unrealistic |
| Medium | 2.5 | 500 | 5 | 0.01 | 2.475 | Realistic |
| | 2.5 | 500 | 10 | 0.02 | 2.45 | |
| | 2.5 | 500 | 50 | 0.1 | 2.25 | |
| | 2.5 | 500 | 100 | 0.2 | 2 | |
| | 2.5 | 500 | 250 | 0.5 | 1.25 | |
| | 2.5 | 500 | 500 | 1 | 0 | Unrealistic |
| | 2.5 | 500 | 1000 | 2 | − 2.5** | Highly unlikely |
| | 2.5 | 500 | 1250 | 2.5 | − 3.75** | Highly unlikely |
| High | 10 | 1000 | 5 | 0.005 | 9.95 | Realistic |
| | 10 | 1000 | 25 | 0.025 | 9.75 | |
| | 10 | 1000 | 50 | 0.05 | 9.5 | |
| | 10 | 1000 | 250 | 0.25 | 7.5 | |
| | 10 | 1000 | 500 | 0.5 | 5 | |
| | 10 | 1000 | 1000 | 1 | 0 | Unrealistic |
| | 10 | 1000 | 5000 | 5 | − 40** | Highly unlikely |
| | 10 | 1000 | 10,000 | 10 | − 90** | Highly unlikely |

As can be appreciated, a few citations to retracted literature (a “realistic” scenario) would result in a minor adjustment of the JIF, whereas a large number of citations that approaches 50% or 100% of that journal’s citations (an “unrealistic” scenario) would result in a major adjustment of a journal’s JIF, or its complete elimination

JIF, Clarivate Analytics journal impact factor; n, number of total (citable) publications in a journal in the previous 2 years; rc, number of citations to retracted papers; c, rc/n; cJIF, corrected JIF that accounts for (i.e., reduces the value due to) invalid citations to retracted literature. * The level of “realism” could be crudely equated with scientifically sound or reproducible science. ** We recommend that cJIFs that attain a negative value be assigned a value of zero, for practical reasons (i.e., the journal loses its JIF)

In such a case, it could be argued that a journal could or should lose its JIF, even though we affirm throughout this paper that these corrective factors should not be used as punitive factors. This is because, as indicated in Table 1, it is not unreasonable to expect citations to retracted papers to total a certain fraction of total citations in the two previous years, which we label as “realistic” in Table 1. Such values could be equated with bad, poor, non-reproducible or failed science being removed from the main body of reproducible science, which should be cited, leaving behind science that has not yet been challenged, or that was challenged but remained valid or intact (i.e., the cJIF). Consequently, a journal with a large or excessive number of citations (which we refer to as “unrealistic” in Table 1) to retracted papers (i.e., rc) could be equated with a journal that is not fulfilling its academic or scholarly responsibilities, and is publishing bad, poor, erroneous or irreproducible science, and thus does not deserve to be cited. Even if, in an extreme hypothetical scenario, a journal were discovered in which most (≥ 50–80%) research findings were “false” (Ioannidis 2005; Colquhoun 2014), leading to a high number of retractions, the correction of the JIF would still remain valid, since the JIF reflects the total number of citations and not the spread of papers that receive citations. It is unclear what distribution citations to retracted papers might follow, or whether their distribution is skewed or follows the distributions shown by regular citations, i.e., citations to unretracted literature (Blanford 2016),2,3 simply because the body of retracted literature is still small, but growing; this has yet to be tested for the retracted literature. We believe that it may thus be irrelevant whether, for example, an rc value of 50 is derived from one highly cited (i.e., cited 50 times) retracted paper, or from 50 retracted papers that are each cited only once, because ultimately the journal’s cJIF will remain the same.

JIF and CiteScore are calculated in a similar manner. A journal’s JIF for a given year is the quotient of the number of citations in that year and the number of citable items (only articles and reviews) published in the journal in the previous 2 years (Garfield 1972), as assessed from the Clarivate Analytics WoS database, i.e., “a ratio between citations and recent citable items published”.4 CiteScore, which is calculated from the Scopus (Elsevier) database, is the quotient of the number of citations to journal documents in a given year and the number of citable items (all documents) published in the journal in the previous 3 years (Kim and Chung 2018). Therefore, the corrected CiteScore (cCiteScore) is:
$$ \text{cCiteScore} = \text{CiteScore}\,(1 - c) $$
where the corrective factor is computed over the same citation window as CiteScore itself, i.e., the previous 3 years.5

Extending the definition of the corrective factor (c)

Using a similar form of calculation, as was described for the cJIF and cCiteScore, we suggest a corrective factor for correcting a series of JBMs based on citations. A universal corrective factor (cu) would thus be defined as:
$$ c_{u} = \frac{{rc_{i} }}{{n_{i} }} $$
where rci is the number of citations to retracted papers, and ni indicates the number of total citable publications in the journal during the period i considered by the given metric. As for the JIF, in extreme cases, if rci equals or exceeds the number of citable published items (ni), so that \( 1 - c_u \le 0 \), the value of the JBM should become 0, i.e., the journal loses its value.

The cu can be applied to some additional JBMs that are used in practice (some examples in Walters 2017) that are calculated similarly to the JIF or CiteScore and provided by WoS, such as the 5-year impact factor, immediacy index, cited half-life, and some additional JBMs provided by Scopus, such as RIP. In these cases, the corrected metric can be obtained by simple corrections, i.e. by multiplying the original indicator by (1 − cu).

We describe next our proposal for using cu to correct additional JBMs, such as ES and AIS, by using equations. “The Eigenfactor calculation is based on the number of times articles from the journal published in the past 5 years have been cited in the JCR year, but it also considers which journals have contributed these citations so that highly cited journals will influence the network more than lesser cited journals. References from one article in a journal to another article from the same journal are removed, so that Eigenfactors are not influenced by journal self-citation”.6

Calculation of AIS is based on the ES:
$$ {\text{AIS}} = \frac{{0.01 \times {\text{ES}}}}{X} $$
where X is the quotient of the number of articles published in journal j over 5 years and the number of articles published in all journals over 5 years.7
Calculation of both the ES and the AIS begins with the determination of a cross-citation matrix with entries Zij:
$$ Z_{ij} = \frac{\text{cit}_{\text{Year}[X]}}{n_{\text{Years}[X-1:X-5]}} $$
where Zij indicates the citations from journal j in year X to documents published in journal i during years X − 1 to X − 5. After constructing that citation matrix, the ES can be obtained through additional steps that exclude self-citations and involve some normalizations.8
Therefore, we propose a correction for both scores (ES and AIS) during the first step of their calculations, i.e. the Zij value should be corrected. The corrected Zij (cZij) value should be calculated by using the cu corrective factor for the five calculated years, as follows:
$$ cZ_{ij} = \left( 1 - c_{u} \right)\frac{\text{cit}_{\text{Year}[X]}}{n_{\text{Years}[X-1:X-5]}} = \left( 1 - c_{u} \right) Z_{ij} $$

Since the calculation of the Scopus-based SCImago Journal Rank (SJR) is very similar to the calculation of the ES,9 both can be corrected in a similar way, i.e., by correcting the number of citations with a cu corrective factor, as described above.
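As an illustration of this row-wise correction, the sketch below uses a made-up 3-journal cross-citation matrix and hypothetical cu values (both the numbers and the variable names are ours, not drawn from any real journal data):

```python
# Hypothetical 3-journal cross-citation matrix: Z[i][j] holds the
# (normalized) citations from journal j in year X to journal i's
# documents from years X-1..X-5. The numbers are purely illustrative.
Z = [
    [0.00, 0.12, 0.30],
    [0.25, 0.00, 0.10],
    [0.40, 0.05, 0.00],
]

# One corrective factor c_u per *cited* journal i: the share of that
# journal's citable output accounted for by citations to retracted papers.
c_u = [0.10, 0.00, 0.50]

# cZ_ij = (1 - c_u[i]) * Z_ij: each row i is scaled by its journal's
# factor, so the downstream ES/AIS steps start from corrected counts.
cZ = [[(1 - c_u[i]) * z for z in row] for i, row in enumerate(Z)]
```

A journal with no citations to retracted papers (cu = 0, second row) is left untouched, while heavily affected journals see their incoming citation counts shrink proportionally.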

In this way, a set of JBMs may be corrected by including cu, which excludes the potentially distorting effect of citations to retracted literature.

In contrast to 1 − cu, which is an acute form of correction, we also propose a milder form of correction, and apply both to two 2015 retractions: a highly cited paper (928 citations on the journal website) published in Wiley’s The Plant Journal (Voinnet et al. 2003), and a less cited paper (zero citations on the journal website) published in Elsevier’s Experimental Cell Research (Yin et al. 2015). Citation data for both papers for 2016 and 2017 were drawn from Clarivate Analytics’ Journal Citation Reports. Citations to these two retracted papers were compared (Table 2). That analysis reveals that when the mild form of correction is used to discount the citations accredited to Voinnet et al. (2003), the 2018 JIF of The Plant Journal is reduced by 4.9%, or by 30% when the more acute form of correction, JIF(1 − c) [i.e., 5.726(1 − 0.294)], is applied. Similarly, correcting the 2018 JIF of Experimental Cell Research for citations to Yin et al. (2015) in 2016 (there were no citations to the paper in 2017) results in a reduction of 0.00286, or 0.09%. Therefore, highly cited papers alone strongly impact the weighting of a journal’s JIF, and thus merit a stronger correction to compensate for that illegitimate credit. Antonoyiannakis (2019) showed that in over 200 journals, the 2017 JIF was influenced by citations to the most cited paper.
Table 2

A comparison of how the 2018 JIF of Wiley’s The Plant Journal and Elsevier’s Experimental Cell Research could be adjusted to compensate for citations to papers retracted from those journals in 2015 (Voinnet et al. 2003; Yin et al. 2015, respectively), using 2016 and 2017 Clarivate Analytics’ Journal Citation Reports data

| | Voinnet et al. (2003) (The Plant Journal) | | Yin et al. (2015) (Experimental Cell Research) | |
|---|------|------|------|------|
| | 2016 | 2017 | 2016 | 2017 |
| Number of published papers | 226 | 346 | 287 | 412 |
| Number of citations | 1329 | 1946 | 1057 | 1270 |
| Citations to retracted paper | 72 | 96 | 2 | 0 |
| Original 2018 JIF | 5.726 | | 3.329 | |
| Mildly corrected 2018 JIF | 5.432¹ | | 3.326² | |
| Acutely corrected 2018 JIF | 4.043³ | | 3.319⁴ | |

¹ Mild correction (2018 JIF) (Voinnet et al. 2003): [(1329 − 72) + (1946 − 96)]/[226 + 346] = 3107/572 = 5.432

² Mild correction (2018 JIF) (Yin et al. 2015): [(1057 − 2) + (1270 − 0)]/[287 + 412] = 2325/699 = 3.326

³ Acute correction (2018 JIF) (Voinnet et al. 2003): c = [72 + 96]/572 = 0.294; 1 − c = 0.706; cJIF = 5.726 × 0.706 = 4.043

⁴ Acute correction (2018 JIF) (Yin et al. 2015): c = [2 + 0]/699 = 0.00286; 1 − c = 0.997; cJIF = 3.329 × 0.997 = 3.319
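The footnoted arithmetic in Table 2 can be reproduced mechanically; the sketch below (the helper names are ours) recomputes the mild and acute 2018 cJIF of The Plant Journal from the tabulated inputs:

```python
def mild_cjif(citations, retracted_citations, papers):
    """Mild correction: subtract the citations received by retracted
    papers from the numerator, then divide by the papers published."""
    return (sum(citations) - sum(retracted_citations)) / sum(papers)

def acute_cjif(jif, retracted_citations, papers):
    """Acute correction: cJIF = JIF * (1 - c), with c = rc / n."""
    c = sum(retracted_citations) / sum(papers)
    return jif * (1 - c)

# The Plant Journal (Voinnet et al. 2003), 2016-2017 data from Table 2
mild = mild_cjif([1329, 1946], [72, 96], [226, 346])  # ~5.432
acute = acute_cjif(5.726, [72, 96], [226, 346])       # ~4.04 (Table 2 reports 4.043 after rounding c to 0.294)
```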

Correction of author-based metrics

The most commonly used ABM for formal academic evaluations and to measure the productivity and impact of a researcher is the h-index, despite its limitations (Hirsch 2005; Alonso et al. 2009; Costas and Bordons 2007; Bornmann and Daniel 2009) and practical risks of its use (Teixeira da Silva and Dobránszki 2018c, d).

The h-index is an indicator whose score indicates that an academic has published h papers, each of which has been cited at least h times (Hirsch 2005). Citations of retracted papers before or after the actual retraction may result in a biased or skewed h-index score, and therefore undeserved rewards, salaries or grants for academics. In some cases, citations to a retracted paper can be even higher after than before the retraction (Teixeira da Silva and Dobránszki 2017b): for example, Fukuhara et al. (2005) was cited 233 (WoS) or 282 (Scopus) times until its retraction in 2007, but has since been cited 887 (WoS) or 1072 (Scopus) times from 2008 until April 18, 2018.10 Therefore, the simplest way to hinder the distortion of the h-index is to separate citations of retracted papers from valid ones—see possible exceptions in the “limitations” section below—so that they do not form part of the evaluation (e.g., a job interview), although they should be maintained, but clearly indicated, in a curriculum vitae (Teixeira da Silva and Tsigaris 2018). Invalid citations (either pre- or post-retraction) should be eliminated from all ABMs that use citation counts as an indicator of scientific output, e.g., from the different derivatives of the h-index (Alonso et al. 2009).
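A minimal sketch of this exclusion, assuming a simple list of (citations, retracted) pairs (our own representation, not a prescribed data format):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

def corrected_h_index(papers):
    """papers: list of (citations, is_retracted) pairs. Retracted papers,
    and all citations to them, are excluded before computing h."""
    return h_index([cites for cites, retracted in papers if not retracted])

# Hypothetical record: one heavily cited retracted paper props up the h-index.
record = [(40, True), (5, False), (4, False), (4, False), (3, False)]
raw = h_index([cites for cites, _ in record])  # 4, inflated by the retracted paper
adjusted = corrected_h_index(record)           # 3, after excluding it
```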

A relatively new ABM, the Relative Citation Ratio (RCR), provides a score relative to NIH-funded research, giving relative weighting and serving as a measure of relative influence (Hutchins et al. 2016). Using the free online tool iCite,11 the RCR for a high-profile medical researcher, Paolo Macchiarini, was calculated as 335.26 for the period 1995–2019. Macchiarini currently (assessed on March 11, 2018) has seven retractions, two expressions of concern and two corrections.12 Using a simple user-controlled method to deselect the seven retracted publications, an adjusted RCR of 303.48 was obtained. Although we do not debate the pros and cons of the RCR here, it has a few large limitations: (1) papers, including retractions, prior to 1995 cannot be assessed; (2) only PubMed-listed papers are considered, thus skewing the RCR towards medicine and/or PubMed-listed articles; (3) citations made before versus after a retraction cannot be distinguished, since each paper is either included or excluded wholesale (a binary 0 or 1). The RCR may have some practical value for assessing funding in a highly competitive field such as biomedical research, and the ability to adjust for retractions thus becomes an important application.

Possible limitations

We wish to point out several possible weaknesses or limitations of our study and/or the potential future use of such corrective measures:
  1. (1)

    As indicated in the introduction, we urge that these corrective factors, either in the form that we have used, or any derivatives thereof, be used cautiously, and responsibly. By caution, we imply that much more testing and wider acceptance by a broad group of academics should first occur before such corrective metrics are applied to a large mass of journals or publishers. If the citation of invalid literature is one day accepted to be an ethical issue, corrective measures for JBMs and ABMs can be adopted by groups with branding potential at the global scale within the publishing workflow, such as COPE (Committee on Publication Ethics), the ICMJE (International Committee of Medical Journal Editors) or WAME (World Association of Medical Editors). By responsibility, we imply that such corrective measures should not be used by science watchdog groups or anti-science lobbyists to shame academics or the publishing-related infrastructure, but rather used as a purely corrective form to correct an already skewed and/or imperfect set of citation-based metrics already in wide use by global academia.

     
  2. (2)

    A crucial issue is the debate, which should preferably be decided by the academic community rather than by ethics policy-makers or for-profit commercial publishers, over which citations of the retracted literature constitute “valid” citations and which are invalid. For example, in Teixeira da Silva and Dobránszki (2017b), we discuss a number of highly cited retracted papers in a number of academic disciplines. Among the total citations to those retracted papers, there are several which we believe are valid citations because they discuss, within a bibliometric context, the wider use of such retracted papers. Therefore, we propose that such bibliometric-based citations to retracted papers be considered “valid” citations to the retracted literature when calculating different JBMs, but never when calculating ABMs. In contrast, a citation to a study that has been proved to be methodologically flawed and was retracted as a result (e.g., Fukuhara et al. 2005), or that may have been retracted due to fraud and/or misconduct, should not be considered a “valid” citation. Furthermore, we also consider that the citation of the retracted paper by the retraction notice should be considered an “invalid” citation, and should not be used to calculate any JBM.13

     
  3. (3)

    In some cases, citations are awarded unfairly to papers that should have been retracted, but were not, such as duplicate or near-duplicate publications, in violation of formally stated retraction policies, offering an unfair advantage to the authors of the duplicate paper, and an equally unfair advantage to the journal that published it, a phenomenon we have termed “citation inflation” (Teixeira da Silva and Dobránszki 2018a). The corrective factors that we propose in this paper are not applicable to such cases, simply because the duplicate or near-duplicate paper has not been retracted, even though ideally they should be applicable.

     
  4. (4)

    It is plausible that, given weaknesses in the system of corrections, including retractions (Wiedermann 2018), publishers might not consider an adjustment of JBMs and ABMs to be a priority.

     


Acknowledgements

Open access funding provided by University of Debrecen (DE). The authors thank Dr. Ludo R. Waltman (Centre for Science and Technology Studies, Leiden University, The Netherlands) for useful advice on an earlier version of this manuscript.

Compliance with ethical standards

Conflicts of interest

The authors declare no conflicts of interest.

References

  1. Aixelá, J. F., & Rovira-Esteva, S. (2015). Publishing and impact criteria, and their bearing on translation studies: In search of comparability. Perspectives: Studies in Translatology, 23(2), 265–283.  https://doi.org/10.1080/0907676x.2014.972419.CrossRefGoogle Scholar
  2. Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2009). h-index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3(4), 273–289.  https://doi.org/10.1016/j.joi.2009.04.001.CrossRefGoogle Scholar
  3. Antonoyiannakis, M. (2019). How a single paper affects the impact factor: Implications for scholarly publishing. arXiv https://arxiv.org/abs/1906.02660.
  4. Bar-Ilan, J., & Halevi, G. (2017). Post retraction citations in context: A case study. Scientometrics, 113(1), 547–565.  https://doi.org/10.1007/s11192-017-2242-0.CrossRefGoogle Scholar
  5. Bar-Ilan, J., & Halevi, G. (2018). Temporal characteristics of retracted articles. Scientometrics, 116(3), 1771–1783.  https://doi.org/10.1007/s11192-018-2802-y.CrossRefGoogle Scholar
  6. Blanford, C. F. (2016). Impact factors, citation distributions and journal stratification. Journal of Materials Science, 51, 10319.  https://doi.org/10.1007/s10853-016-0285-x.CrossRefGoogle Scholar
  7. Bornmann, L., & Daniel, H.-D. (2009). The state of h index research. Is the h index the ideal way to measure research performance? EMBO Reports, 10(1), 2–6.  https://doi.org/10.1038/embor.2008.233.CrossRefGoogle Scholar
  8. Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 1, 140216.  https://doi.org/10.1098/rsos.140216.CrossRefGoogle Scholar
  9. Costas, R., & Bordons, M. (2007). The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics, 1, 193–203.  https://doi.org/10.1016/j.joi.2007.02.001.CrossRefGoogle Scholar
  10. Coudert, F.-X. (2019). Correcting the scientific record: Retraction practices in chemistry and materials science. Chemistry of Materials, 31, 3593–3598.  https://doi.org/10.1021/acs.chemmater.9b00897.CrossRefGoogle Scholar
  11. Courtney, N. (2017). CiteScore vs. impact factor: How do we rate journal quality? https://library.osu.edu/researchcommons/2017/06/12/citescore-vs-impact-factor/ (last Accessed: July 1, 2019).
  12. Erfanmanesh, M., & Teixeira da Silva, J. A. (2019). Is the soundness-only quality control policy of open access mega journals linked to a higher rate of published errors? Scientometrics.  https://doi.org/10.1007/s11192-019-03153-5. (in press).Google Scholar
  13. Fanelli, D. (2013). Why growing retractions are (mostly) a good sign. PLOS Medicine, 10(12), e1001563.  https://doi.org/10.1371/journal.pmed.1001563.CrossRefGoogle Scholar
  14. Fukuhara, A., Matsuda, M., Nishizawa, M., Segawa, K., Tanaka, M., Kishimoto, K., et al. (2005). Visfatin: A protein secreted by visceral fat that mimics the effects of insulin. Science, 307(5708), 426–430.  https://doi.org/10.1126/science.1097243; retraction  https://doi.org/10.1126/science.318.5850.565b.
  15. Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178, 471–479.  https://doi.org/10.1126/science.178.4060.471.CrossRefGoogle Scholar
  16. Grieneisen, M. L., & Zhang, M. (2012). A comprehensive survey of retracted articles from the scholarly literature. PLoS ONE, 7(10), 44118.  https://doi.org/10.1371/journal.pone.0044118.CrossRefGoogle Scholar
  17. Hammarfelt, B., & Rushforth, A. D. (2017). Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Research Evaluation, 26(3), 169–180. https://doi.org/10.1093/reseval/rvx018.
  18. Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences USA, 102(46), 16569–16572. https://doi.org/10.1073/pnas.0507655102.
  19. Hosseini, M., Hilhorst, M., de Beaufort, I., & Fanelli, D. (2018). Doing the right thing: A qualitative investigation of retractions due to unintentional error. Science and Engineering Ethics, 24(1), 189–206. https://doi.org/10.1007/s11948-017-9894-2.
  20. Hutchins, B. I., Yuan, X., Anderson, J. M., & Santangelo, G. M. (2016). Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. PLoS Biology, 14(9), e1002541. https://doi.org/10.1371/journal.pbio.1002541.
  21. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124.
  22. Kim, K., & Chung, Y. (2018). Overview of journal metrics. Science Editing, 5(1), 16–20. https://doi.org/10.6087/kcse.112.
  23. Kuroki, T., & Ukawa, A. (2018). Repeating probability of authors with retracted scientific publications. Accountability in Research, 25(4), 212–219. https://doi.org/10.1080/08989621.2018.1449651.
  24. Liu, X.-L., Gai, S.-S., & Zhou, J. (2016). Journal impact factor: Do the numerator and denominator need correction? PLoS ONE, 11(3), e0151414. https://doi.org/10.1371/journal.pone.0151414.
  25. Paulus, F. M., Cruz, N., & Krach, S. (2018). The impact factor fallacy. Frontiers in Psychology, 9, 1487. https://doi.org/10.3389/fpsyg.2018.01487.
  26. Qi, X., Deng, H., & Guo, X. (2017). Characteristics of retractions related to faked peer reviews: An overview. Postgraduate Medical Journal, 93(1102), 499–503. https://doi.org/10.1136/postgradmedj-2016-133969.
  27. Resnik, D. B., Rasmussen, L. M., & Kissling, G. E. (2015a). An international study of research misconduct policies. Accountability in Research, 22(5), 249–266. https://doi.org/10.1080/08989621.2014.958218.
  28. Resnik, D. B., Wager, E., & Kissling, G. E. (2015b). Retraction policies of top scientific journals ranked by impact factor. Journal of the Medical Library Association, 103(3), 136–139. https://doi.org/10.3163/1536-5050.103.3.006.
  29. Roldan-Valadez, E., Salazar-Ruiz, S. Y., Ibarra-Contreras, R., & Rios, R. (2019). Current concepts on bibliometrics: A brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics. Irish Journal of Medical Science, 188(3), 939–951. https://doi.org/10.1007/s11845-018-1936-5.
  30. Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why has the number of scientific retractions increased? PLoS ONE, 8(7), e68397. https://doi.org/10.1371/journal.pone.0068397.
  31. Teixeira da Silva, J. A. (2015). The importance of retractions and the need to correct the downstream literature. Journal of Scientific Exploration, 29(2), 353–356.
  32. Teixeira da Silva, J. A. (2017). Correction of the literature has evolved through manuscript versioning, error amendment, and retract and replace. Preprints.org. https://www.preprints.org/manuscript/201708.0029/v1.
  33. Teixeira da Silva, J. A. (2018). The Google Scholar h-index: Useful but burdensome metric. Scientometrics, 117(1), 631–635. https://doi.org/10.1007/s11192-018-2859-7.
  34. Teixeira da Silva, J. A., & Al-Khatib, A. (2019). Ending the retraction stigma: Encouraging the reporting of errors in the biomedical record. Research Ethics. https://doi.org/10.1177/1747016118802970. (in press).
  35. Teixeira da Silva, J. A., & Bernès, S. (2018). Clarivate Analytics: Continued omnia vanitas impact factor culture. Science and Engineering Ethics, 24(1), 291–297. https://doi.org/10.1007/s11948-017-9873-7.
  36. Teixeira da Silva, J. A., & Bornemann-Cimenti, H. (2017). Why do some retracted papers continue to be cited? Scientometrics, 110(1), 365–370. https://doi.org/10.1007/s11192-016-2178-9.
  37. Teixeira da Silva, J. A., & Dobránszki, J. (2017a). Compounding error: The afterlife of bad science. Academic Questions, 30(1), 65–72. https://doi.org/10.1007/s12129-017-9621-0.
  38. Teixeira da Silva, J. A., & Dobránszki, J. (2017b). Notices and policies for retractions, expressions of concern, errata and corrigenda: Their importance, content, and context. Science and Engineering Ethics, 23(2), 521–554. https://doi.org/10.1007/s11948-016-9769-y.
  39. Teixeira da Silva, J. A., & Dobránszki, J. (2017c). Highly cited retracted papers. Scientometrics, 110(3), 1653–1661. https://doi.org/10.1007/s11192-016-2227-4.
  40. Teixeira da Silva, J. A., & Dobránszki, J. (2018a). Citation inflation: The effect of not correcting the scientific literature sufficiently, a case study in the plant sciences. Scientometrics, 116(2), 1213–1222. https://doi.org/10.1007/s11192-018-2759-x.
  41. Teixeira da Silva, J. A., & Dobránszki, J. (2018b). Citing retracted papers affects education and librarianship, so distorted academic metrics need a correction. Journal of Librarianship and Scholarly Communication, 6, eP2199. https://doi.org/10.7710/2162-3309.2258.
  42. Teixeira da Silva, J. A., & Dobránszki, J. (2018c). Multiple versions of the h-index: Cautionary use for formal academic purposes. Scientometrics, 115(2), 1107–1113. https://doi.org/10.1007/s11192-018-2680-3.
  43. Teixeira da Silva, J. A., & Dobránszki, J. (2018d). Rejoinder to “Multiple versions of the h-index: Cautionary use for formal academic purposes”. Scientometrics, 115(2), 1131–1137. https://doi.org/10.1007/s11192-018-2684-z.
  44. Teixeira da Silva, J. A., Dobránszki, J., & Bornemann-Cimenti, H. (2016). Citing retracted papers has a negative domino effect on science, education, and society. LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2016/12/06/citing-retracted-papers-has-a-negative-domino-effect-on-science-education-and-society/ (last accessed: March 11, 2019).
  45. Teixeira da Silva, J. A., & Memon, A. R. (2017). CiteScore: A cite for sore eyes, or a valuable, transparent metric? Scientometrics, 111(1), 553–556. https://doi.org/10.1007/s11192-017-2250-0.
  46. Teixeira da Silva, J. A., & Tsigaris, P. (2018). Academics must list all publications on their CV. KOME, 6(1), 94–99. https://doi.org/10.17646/kome.2018.16.
  47. Voinnet, O., Rivas, S., Mestre, P., & Baulcombe, D. (2003). An enhanced transient expression system in plants based on suppression of gene silencing by the p19 protein of tomato bushy stunt virus. The Plant Journal, 33(5), 949–956. https://doi.org/10.1046/j.1365-313x.2003.01676.x. Retraction (2015): The Plant Journal, 84(4), 846–956. https://doi.org/10.1111/tpj.13066.
  48. Walters, W. H. (2017). Do subjective journal ratings represent whole journals or typical articles? Unweighted or weighted citation impact? Journal of Informetrics, 11(3), 730–744. https://doi.org/10.1016/j.joi.2017.05.001.
  49. Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10(2), 365–391. https://doi.org/10.1016/j.joi.2016.02.007.
  50. Wiedermann, C. J. (2018). Inaction over retractions of identified fraudulent publications: Ongoing weakness in the system of scientific self-correction. Accountability in Research, 25(4), 239–253. https://doi.org/10.1080/08989621.2018.1450143.
  51. Winkmann, G., & Schweim, H. G. (2000). Medizinisch-biowissenschaftliche Datenbanken und der Impact-Faktor [Medical-bioscientific databases and the impact factor]. Deutsche Medizinische Wochenschrift, 125(38), 1133–1142. https://doi.org/10.1055/s-2000-7581.
  52. Yin, C.-C., Ye, J.-J., Zou, J., Lu, T., Du, Y.-H., Liu, Z., Fan, R., Lu, F., Li, P., Ma, D.-X., & Ji, C.-Y. (2015). Role of stromal cells-mediated Notch-1 in the invasion of T-ALL cells. Experimental Cell Research, 332(1), 39–46. https://doi.org/10.1016/j.yexcr.2015.01.008 (retracted in 2015).

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Judit Dobránszki (1)
  • Jaime A. Teixeira da Silva (2) (Email author)
  1. Research Institute of Nyíregyháza, IAREF, University of Debrecen, Nyíregyháza, Hungary
  2. Kagawa-Ken, Japan