Introduction

  • Antonella Lanati
Chapter

Abstract

In recent years, the scientific world has been experiencing a revolution: the attention of the scientific and social community is no longer focused solely on results, but also on the related issues of the reliability, safety and efficacy of discoveries, as well as on the efficient and effective use of resources. A strong debate is now under way about the reproducibility of scientific results, in light of the growing number of retractions of published articles, some because of outright fraud and others simply because of a lack of controls or poor practice. Starting from a couple of surveys carried out by Big Pharma on the reliability of scientific research products, it was soon pointed out in scientific journals [1–4] and even in the general press [5–8] that attempts to reproduce results published in peer-reviewed journals too often fail, even when the results were obtained by large, experienced laboratories. While some cases of fraud have been discovered and have led to the retraction of the papers concerned, in other cases the irreproducibility could be attributed either to improper data management and processing or, more broadly, to a lack of good experimentation management. One example is reported in The Economist [5]: a 2010 study published in Science, the prestigious American journal, was retracted a year later after strong criticism from other geneticists, who complained that different statistical treatments had been applied to the samples taken from centenarians and to those from the younger control group. The authors justified the withdrawal by admitting technical errors and an inadequate quality-control protocol. The number of retractions is growing faster than ever, tenfold over the past decade; this nevertheless represents no more than 0.2% of the 1,400,000 papers published annually in scientific journals [5]. These figures cannot account for the scale of irreproducibility reported by Big Pharma, leaving the impression that the phenomenon deserves more attention and further investigation.
Peer review has long been regarded as the best way to judge the worth of scientific papers and to guarantee the validity of results submitted for publication. Journalists and even editors have run experiments to test how effective peer review really is (see Box 2.1 for some examples). Overall, the evidence suggests that only a minority of reviewers carry out an in-depth analysis of the scientific results in order to identify possible errors; in some cases, reviewers appear to miss due checks, whether scientific or methodological. In any case, peer review cannot be taken as the final, effective control against inaccuracies in a research project, although much has been done, and is being done, by the leading scientific journals to ensure the quality of what they publish. Peer review has to be accompanied, and indeed preceded, by a rigorous method of conducting the studies.

References

  1. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505:612–3. https://doi.org/10.1038/505612a.
  2. Freedman LP, et al. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13(6):e1002165. https://doi.org/10.1371/journal.pbio.1002165.
  3. Jasny BR, et al. Fostering reproducibility in industry–academia research: sharing can pose challenges for collaborations. Science. 2017;357(6353):759–61. https://doi.org/10.1126/science.aan4906.
  4. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
  5. Trouble at the lab: scientists like to think of science as self-correcting. To an alarming degree, it is not. The Economist. 2013. http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble. Accessed 13 Sep 2017.
  6. How science goes wrong: scientific research has changed the world. Now it needs to change itself. The Economist. 2013. http://www.economist.com/news/leaders/21588069scientificresearchhaschangedworldnowitneedschangeitselfhowsciencegoeswrong. Accessed 13 Sep 2017.
  7.
  8. Naik G. Scientists' elusive goal: reproducing study results. The Wall Street Journal. 2011. https://www.wsj.com/articles/SB10001424052970203764804577059841672541590. Accessed 26 Sep 2017.
  9. Eanes Z. NC State University group accused of falsifying research. dailytarheel.com. 2014. http://www.dailytarheel.com:8080/article/2014/02/nc-state-university-group-accused-of-falsifying-research. Accessed 13 Sep 2017.
  10. Ségalat L. System crash. EMBO Rep. 2010;11(2):86–9. https://doi.org/10.1038/embor.2009.278.
  11. Menegon A, et al. A new electro-optical approach for conductance measurement: an assay for the study of drugs acting on ligand-gated ion channels. Sci Rep. 2017. https://doi.org/10.1038/srep44843.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Antonella Lanati
    1. Valore Qualità, Pavia, Italy