1 Introduction

Preclinical biomedical research, the stage of research that precedes testing in humans to assess feasibility and safety, is the foundation of many health care innovations; translating its findings into therapeutic applications relies on the reproducibility of published discoveries.

However, researchers face challenges when attempting to use or validate the data generated through preclinical studies. Independent attempts to reproduce studies related to drug development have identified inconsistencies between published data and the validation studies. For example, in 2011, Bayer HealthCare was unable to validate the results of 43 out of 67 studies (Prinz et al. 2011), while Amgen reported its inability to validate 47 out of 53 seminal publications that claimed a new drug discovery in oncology (Begley and Ellis 2012).

Researchers attribute this inability to validate study results to issues of robustness and reproducibility. Although defined with some nuanced variation across research groups, reproducibility refers to achieving similar results when an experiment is repeated under similar conditions, while robustness refers to obtaining similar results from an experiment even when there are slight variations in test conditions or reagents (CURE Consortium 2017).

Several essential factors could account for a lack of reproducibility and robustness, such as incomplete reporting of basic elements of experimental design, including blinding, randomization, replication, sample size calculation, and the effect of sex differences. Inadequate reporting may be due to poor training of researchers in highlighting and presenting technical details, insufficient reporting requirements, or page limitations imposed by publications/journals. The result is an inability to replicate or further use study results, since the necessary information to do so is lacking.

The limited availability of opportunities and platforms to contradict previously published work is also a contributing factor. Only a limited number of platforms allow researchers to publish scientific papers that point out shortcomings of previously published work or highlight a negative impact of any of the components found during the study. Such data are as essential and informative as positive data/findings from a study, and their limited availability can result in irreproducibility.

Difficulty in accessing unpublished data is also a contributing factor. Negative or validation data are rarely welcomed by high-impact journals, and unpublished dark data related to published results (such as health records or performance on tasks that did not result in a significant finding) may contain essential details that could help reproduce the results of a study or build on them.

For the past decade, stakeholders, such as researchers, journals, funders, and industry leaders, have been aggressively involved in identifying and taking steps to address the issue of reproducibility and robustness of preclinical research findings. These efforts include maintaining and, in some cases, strengthening scientific quality standards (including examining and developing policies that guide research), increasing requirements for reagent and data sharing, and issuing new guidelines for publication.

One important step that stakeholders in the scientific community have taken is to support the development and implementation of guidelines. However, the realm of influence for a given type of stakeholder has been limited. For example, journals usually issue guidelines related to reporting of methods and data, whereas funders may issue guidelines pertaining primarily to study design and, increasingly, data management and availability. In addition, the enthusiasm with which stakeholders have tried to address the “reproducibility crisis” has led to the generation of a multitude of guidelines. This has resulted in a littered landscape with overlap without harmonization, gaps in recommendations or requirements that may enhance reproducibility, and slow updating of guidelines to meet the needs of promising, rapidly evolving computational approaches. Worse yet, the perceived increased burden of meeting requirements and the lack of clarity around which guidelines to follow reduce compliance, as they may leave researchers, publishers, and funding organizations confused and overwhelmed. The goal of this chapter is to compile and review the current state of existing guidelines, to understand the overlaps, to perform a gap analysis of what may still be missing, and to make recommendations for the future of guidelines to enhance reproducibility in preclinical research.

2 Guidelines and Resources Aimed at Improving Reproducibility and Robustness in Preclinical Data

2.1 Funders/Granting Agencies/Policy Makers

Many funders and policy makers have acknowledged the issue of irreproducibility and are developing new guidelines and initiatives to support the generation of data that are robust and reproducible. This section highlights guidelines, policies, and resources directly related to this issue in preclinical research by the major international granting institutions and is not intended to be an exhaustive review of all available guidelines, policies, and resources. Instead, the organizations reviewed represent a cross-section of many of the top funding organizations and publishers in granting volume and visibility. Included also is a network focused specifically on robustness, reproducibility, translatability, and reporting transparency of preclinical data with membership spanning across academia, industry, and publishing. Requirements pertaining to clinical research are included when guidance documents are also used for preclinical research. The funders, granting agencies, and policy makers surveyed included:

  • National Institutes of Health (NIH) (Collins and Tabak 2014; LI-COR 2018; Krester et al. 2017; NIH 2015, 2018a, b)

  • Medical Research Council (MRC) (Medical Research Council 2012a, b, 2016a, b, 2019a, b, c)

  • The World Health Organization (WHO) (World Health Organization 2006, 2010a, b, 2019)

  • Wellcome Trust (Wellcome Trust 2015, 2016a, b, 2018a, b, 2019a, b; The Academy of Medical Sciences 2015, 2016a, b; Universities UK 2012)

  • Canadian Institute of Health Research (CIHR) (Canadian Institutes of Health Research 2017a, b)

  • Deutsche Forschungsgemeinschaft (DFG)/German Research Foundation (Deutsche Forschungsgemeinschaft 2015, 2017a, b)

  • European Commission (EC) (European Commission 2018a, b; Orion Open Science 2019)

  • Institut National de la Santé et de la Recherche Médicale (INSERM) (French Institute of Health and Medical Research 2017; Brizzi and Dupre 2017)

  • US Department of Defense (DoD) (Department of Defense 2017a, b; National Institutes of Health Center for Information Technology 2019)

  • Cancer Research UK (CRUK) (Cancer Research UK 2018a, b, c)

  • National Health and Medical Research Council (NHMRC) (National Health and Medical Research Council 2018a, b, 2019; Boon and Leves 2015)

  • Center for Open Science (COS) (Open Science Foundation 2019a, b, c, d; Aalbersberg 2017)

  • Howard Hughes Medical Institute (HHMI) (ASAPbio 2018)

  • Bill & Melinda Gates Foundation (Gates Open Research 2019a, b, c, d, e)

  • Innovative Medicines Initiative (IMI) (Innovative Medicines Initiative 2017, 2018; Community Research and Development Information Service 2017; European Commission 2017)

  • Preclinical Data Forum Network (European College of Neuropsychopharmacology 2019a, b, c, d)

2.2 Publishers/Journal Groups

Journal publishers and groups have been revising author instructions and publication policies and guidelines, with an emphasis on detailed reporting of study design, replicates, statistical analyses, reagent identification, and validation. Such revisions are expected to encourage researchers to publish robust and reproducible data (National Institutes of Health 2017). Those publishers and groups considered in the analysis were:

  • NIH Publication Guidelines Endorsed by Journal Groups (Open Science Foundation 2019d)

  • Transparency and Openness Promotion (TOP) Guidelines for Journals (Open Science Foundation 2019d; Nature 2013)

  • Nature Journal (Nature 2017, 2019; Pattinson 2012)

  • PLOS ONE Journal (The Science Exchange Network 2019a, b; Fulmer 2012; Baker 2012; Powers 2019; PLOS ONE 2017a, b, 2019a, b, c; Bloom et al. 2014; Denker et al. 2017; Denker 2016)

  • Journal of Cell Biology (JCB) (Yamada and Hall 2015)

  • Elsevier (Cousijn and Fennell 2017; Elsevier 2018, 2019a, b, c, d, e, f, g; Scholarly Link eXchange 2019; Australian National Data Service 2018)

2.3 Summary of Overarching Themes

Guidelines implemented by funding bodies and publishers/journals to attain data reproducibility can take on many forms. Many agencies prefer to frame their guidelines as recommendations in order to accommodate scientific freedom, creativity, and innovation. Therefore, typical guidelines that support good research practices differ from principles set forth by good laboratory practices, which are based on a more formal framework and tend to be more prescriptive.

In reviewing current guidelines and initiatives around reproducibility and robustness, key areas that can lead to robust and reproducible research were revealed and are discussed below.

Research Design and Analysis

Providing a well-defined research framework and statistical plan before initiating the research reduces bias and thus helps to increase the robustness and reproducibility of the study.

Funders have undertaken various initiatives to support robust research design and analysis, including developing guidance on grant applications. These require researchers to address a set of objectives in the grant proposal, including the strengths and weaknesses of the research, details on the experimental design and methods of the study, planned statistical analyses, and sample sizes. In addition, researchers are often required to abide by existing reporting guidelines such as ARRIVE and asked to provide associated metadata.

Some funders, including NIH, DFG, NHMRC, and HHMI, have developed well-defined guidance documents focusing on robustness and reproducibility for applicants, while others, including Wellcome Trust and USDA, have taken additional approaches to implementing such guidelines. For instance, Wellcome Trust held a symposium and USDA held an internal meeting to identify approaches and discuss solutions for incorporating strong study designs and developing rigorous study plans.

As another example, researchers on MRC-funded research projects are required to complete a dedicated annex, the “Reproducibility and statistical design annex,” providing information on methodology and experimental design.

Apart from funders, journals are also working to improve study design quality and reporting, such as by requiring that authors complete an editorial checklist before submitting their research in order to enhance the transparency of reporting and thus the reproducibility of published results. Nearly all journals surveyed, including Nature, Journal of Cell Biology, and PLOS ONE, as well as the major journal publisher Elsevier, have introduced this requirement.

Some journals are also prototyping alternative review models, such as early publication, to help verify study design. For instance, in Elsevier’s Registered Reports initiative, the experimental methods and proposed analyses are preregistered and reviewed before study data are collected. The article is accepted on the basis of its study protocol, which prevents authors from modifying their experiments or excluding essential information on null or negative results in order to get their articles published. However, this has been implemented in only a limited number of journals in the Elsevier portfolio. PLOS ONE permits researchers to submit their articles before a peer review process is conducted, allowing authors to seek feedback on draft manuscripts before or in parallel with formal review or submission to the journal.

Training and Support

Providing adequate training to researchers on the importance of robust study design and experimental methods can help to capture relevant information crucial to attaining reproducibility.

Funders such as MRC have deployed training programs for both researchers and new panel members on the importance of experimental design and statistics and of robust, reproducible research results.

In addition to a detailed guidance handbook for biomedical research, WHO has produced separate, comprehensive training manuals for both trainers and trainees to learn how to implement its guidelines. Also of note, the Preclinical Data Forum Network, sponsored by the European College of Neuropsychopharmacology (European College of Neuropsychopharmacology 2019e) in Europe and Cohen Veterans Bioscience (Cohen Veterans Bioscience 2019) in the United States, organizes yearly training workshops to enhance awareness and to help junior scientists further develop their experimental skills, with a prime focus on experimental design to generate high-quality, robust, reproducible, and relevant data.

Reagents and Reference Material

Developing standards for laboratory reagents is essential to maintaining reproducibility.

Funders such as HHMI require researchers to make all tangible research materials integral to a publication, including organisms, cell lines, plasmids, or similar materials, available through a repository or by sending them directly to requestors.

Laboratory Protocols

Detailed laboratory protocols are required to reproduce a study; without them, researchers may introduce process variability when attempting to reproduce the protocol in their own laboratories. These protocols can also be used by reviewers and editors during the peer review process or by researchers to compare methodological details between laboratories pursuing similar approaches.

Funders such as INSERM have taken the initiative to introduce an electronic lab notebook. This platform supports research by digitizing experimental work and enables researchers to better trace and track the data and procedures used in experiments.

Journals such as PLOS ONE have taken an initiative wherein authors can deposit their laboratory protocols in repositories such as protocols.io. A unique digital object identifier (DOI) is assigned to each deposited protocol and linked to the Methods section of the original article, allowing researchers to access the published work of these authors along with the detailed protocols used to obtain the results.
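To illustrate how such DOI-based indexing makes deposited protocols programmatically discoverable, the following minimal Python sketch resolves a DOI through the public doi.org content-negotiation service, which returns machine-readable citation metadata for DOIs registered with agencies such as Crossref or DataCite. The DOI string shown is a placeholder, not a real protocol record.

```python
# Minimal sketch: fetch machine-readable metadata for a protocol DOI via
# DOI content negotiation (works for DOIs registered with Crossref or DataCite).
import requests


def fetch_doi_metadata(doi: str) -> dict:
    """Resolve a DOI and return its citation metadata as CSL JSON."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Placeholder DOI for illustration only; substitute a real protocol DOI.
    metadata = fetch_doi_metadata("10.17504/protocols.io.example")
    print(metadata.get("title"), metadata.get("URL"))
```

In this way, a protocol cited in a Methods section can be retrieved, cited, and tracked by identifier rather than by free-text description.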

Reporting and Review

Providing open and transparent access to the research findings and study methods and publishing null or negative results associated with a study facilitate data reproducibility.

Funders require authors to report, cite, and store study data in their entirety and have developed various initiatives to facilitate data sharing. For instance, CIHR and NHMRC have implemented open access policies that require researchers to store their data in specific repositories to improve discovery and facilitate interaction among researchers, to obtain a Creative Commons Attribution (CC BY) license for their research so that other researchers can access and use the data in part or as a whole, and to link their research activities via identifiers such as digital object identifiers (DOIs) and ORCID iDs to allow appropriate citation of datasets and provide recognition to data generators and sharers.

Wellcome Trust and Bill & Melinda Gates Foundation have launched their own publishing platforms – Wellcome Open Research and Gates Open Research, respectively – to allow researchers to publish and share their results rapidly.

Other data-focused efforts include the European Commission’s plan to build an open research platform, the “European Open Science Cloud,” which would act as a virtual repository of research data from publicly funded studies and allow European researchers to store, process, and access research data.

In addition, the Preclinical Data Forum Network has been working toward building a data exchange and information repository and incentivizing the publication of negative data by issuing the world’s first prize for published “negative” scientific results.

Journals have also taken various initiatives to allow open access to their publications. Some journals, such as Nature and PLOS ONE, require researchers to submit data availability statements that help readers locate and access primary large-scale data through repository details and digital object identifiers or accession numbers.

Journals also advise authors to upload their raw data and metadata to appropriate repositories. Some journals have created their own cloud-based repositories in addition to those publicly available. For instance, Elsevier has created Mendeley Data to help researchers manage, share, and showcase their research data, and JCB has established the JCB DataViewer, a cross-platform repository for storing large amounts of raw imaging and gel data from its published manuscripts. Elsevier has also partnered with platforms such as Scholix and FORCE11, which allow data citation, encourage reuse of research data, and enable reproducibility of published research.

3 Gaps and Looking to the Future

A gap analysis of existing guidelines and resources was performed, addressing such critical factors as study design, transparency, data management, availability of resources and information, linking relevant research, publication opportunities, consideration of refutations, and initiatives to grow. It should be noted that these categories were not defined de novo but were based on a comprehensive review of the high-impact organizations considered.

We considered the following observed factors within each category to understand where organizations are supporting good research practices with explicit guidelines and/or initiatives and to identify potential gaps:

  • Study Design

    • Scientific premise of proposed research: Guidelines to support current or proposed research that is formed on a strong foundation of prior work.

    • Robust methodology to address hypothesis: Guidelines to design robust studies that address the scientific question. This includes justification and reporting of the experimental technique, statistical analysis, and animal model.

    • Animal use guidelines and legal permissions: Guidelines regarding animal use, clinical trial reporting, or legal permissions.

    • Validation of materials: Guidelines to ensure validity of experimental protocol, reagent, or equipment.

  • Transparency

    • Comprehensive description of methodology: Guidelines to ensure comprehensive reporting of methods and analyses so that other researchers can reproduce the work. For example, publishers may include additional space for researchers to detail their methodology. Similar to “robust methodology to address hypothesis” but more focused on post-collection reporting rather than initial design.

    • Appropriate acknowledgments: Guidelines for authors to appropriately acknowledge contributors, such as co-authors or references.

    • Reporting of positive and negative data: Guidelines to promote release of negative data, which reinforces unbiased reporting.

  • Data Management

    • Early design of data management: Guidelines to promote early design of data management.

    • Storage and preservation of data: Guidelines to ensure safe and long-term storage and preservation of data.

    • Additional tools for data collection and management: Miscellaneous data management tools developed (e.g., electronic lab notebook).

  • Availability of Resources and Information

    • Data availability statements: A statement committing researchers to sharing data (usually upon submission to a journal or funding organization).

    • Access to raw or structured data: Guidelines to share data in publicly available or institutional repositories to allow for outside researchers to reanalyze or reuse data.

    • Open or public access publications: Guidelines to encourage open or public access publications, which allows for unrestricted use of research.

    • Shared access to resources, reagents, and protocols: Guidelines to encourage shared access to resources, reagents, and protocols. This may include requirements for researchers to independently share and ship resources or nonprofit reagent repositories.

  • Linking Relevant Research

    • Indexing data, reagents, and protocols: Guidelines to index research components, such as data, reagent, or protocols. Indexing using a digital object identifier (DOI) allows researchers to digitally track use of research components.

    • Two-way linking of relevant datasets and publications: Guidelines to encourage linkage between publications. This is particularly important in clinical research when multiple datasets are compiled to increase analytical power.

  • Publication Opportunities

    • Effective review: Guidelines to expedite or strengthen the review process, such as a checklist for authors or reviewers to complete or additional responsibilities of the reviewer.

    • Additional peer review and public release processes: Opportunities to release research conclusions independent from the typical journal process.

    • Preregistration: Guidelines to encourage preregistration, a process where researchers commit to their study design prior to collecting data. This reduces bias and increases clarity of the results.

  • Consideration of Refutations

    • Attempts to resolve failures to reproduce: Guidelines for authors and organizations to address any discrepancies in results or conclusions.

  • Initiatives to Grow

    • Develop resources: Additional resources developed to increase reproducibility and rigor in research. This includes training workshops.

    • Work to develop responsible standards: Commitments and overarching goals made by organizations to increase reproducibility and rigor in research.

Within the study design category, there appears to be a dearth of guidelines to ensure the validity of experimental protocols, reagents, or equipment. Variability and incomplete reporting of the reagents used are a known and oft-cited source of irreproducibility.

The most notable omission regarding transparency was the lack of guidelines to promote the release of negative data, which would reinforce unbiased reporting. This gap also contributes to poor study reproducibility since, overwhelmingly, only positive data are reported for preclinical studies.

Most funding agencies have seriously begun initiatives addressing data management to ensure safe and long-term storage and preservation of data and are developing, making available, or promoting data management tools (e.g., electronic lab notebook). However, these ongoing activities do not often include guidelines to promote the early design of data management, which may reduce errors and ease researcher burden by optimizing and streamlining the process from study design to data upload.

To that point, a massive shift can be seen as both funders and publishers engage intensely in guidelines around the availability of resources and information. Most of this effort is in the ongoing development of guidelines to share data in publicly available or institutional repositories so that outside researchers can reanalyze or reuse data. The aim is to create a long-term framework for new research strategies that allow for “big data” computational modeling, deep-learning artificial intelligence, and mega-analyses across species and measures. However, not many guidelines were found that encourage shared access to resources, reagents, and protocols, such as requirements for researchers to independently share and ship resources or to use nonprofit reagent repositories.

Related are guidelines for linking relevant research. These include guidelines to index research components, such as data, reagents, or protocols, with digital object identifiers (DOIs) that allow researchers to digitally track the use of research components, as well as guidelines to encourage two-way linking of relevant datasets and publications. Such linking is historically a common requirement for clinical studies and is currently being developed for preclinical research, but not consistently across the organizations surveyed.

On the reporting side, the most notable gap in publication opportunities guidelines concerned preregistration, a process whereby researchers commit to their study design prior to collecting data and publishers agree to publish the results whether they are positive or negative. Such guidelines would serve to reduce both experimental and publication biases and to increase the clarity of the results.

In the category consideration of refutations, which, broadly, are attempts to resolve failures to reproduce a study, few guidelines exist. However, there is ongoing work to develop guidelines for authors and organizations to address discrepancies in results or conclusions and a commitment from publishers that they will consider publications that do not confirm previously published research in their journal.

Lastly, although many organizations cite a number of initiatives to grow, there appear to be notable gaps both in the development of additional resources and work to develop responsible standards. One initiative that aims to develop solutions to address the issue of data reproducibility in preclinical neuroscience research is the EQIPD (European Quality in Preclinical Data) project, launched in October 2017 with support from the Innovative Medicines Initiative (IMI). The project recognizes poor data quality as the main concern resulting in the non-replication of studies/experiments and aims to look for simple, sustainable solutions to improve data quality without impacting innovation. It is expected that this initiative will lead to a cultural change in data quality approaches in the medical research and drug development field with the final intent to establish guidelines that will strengthen robustness, rigor, and validity of research data to enable a smoother and safer transition from preclinical to clinical testing and drug approval in neuroscience (National Institutes of Health 2017; Nature 2013, 2017; Vollert et al. 2018).

In terms of providing additional resources, although some organizations emphasize training and workshops for researchers to enhance rigor and reproducibility, it is unclear if and how organizations themselves assess the effectiveness and actual implementation of their guidelines and policies. An exception may be WHO’s training program, which provides manuals for both trainer and trainee to support the implementation of their guidelines.

More must also be done to accelerate work toward consensus, responsible standards. As funders, publishers, and preclinical researchers alike begin to recognize the promise of computational approaches and attempt to meet the demands of these kinds of analyses, equal resources and energy must be devoted to the required underlying standards and tools. To harmonize data across labs and species, ontologies and common data elements (CDEs) must be developed, and researchers must be trained and incentivized to use them. Data that have already been generated may offer not only profound validation opportunities but also the ability to follow novel lines of research agnostically, based on an unbiased foundation of data. In acquiring new data, guidelines urging preclinical scientists to collect and upload all experimental factors, including associated dark data, in a usable format may bring the field closer to understanding whether predictive multivariate signatures exist, may better embrace deviations in study design, and may be more reflective of clinical trials.
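As a purely hypothetical illustration of what CDE-based harmonization might look like in practice, the sketch below maps two labs’ differently named and differently scaled measurements onto a shared set of invented common data elements; all variable names, units, and values are made up for illustration and do not correspond to any published ontology.

```python
# Illustrative sketch: harmonizing lab-specific variable names and units onto
# hypothetical common data elements (CDEs) so datasets from different labs can be pooled.
from typing import Any, Dict

# Hypothetical CDE dictionary: canonical name -> (lab-specific aliases, canonical unit)
CDE_MAP = {
    "body_weight_g": ({"weight", "bw", "body_wt_kg"}, "g"),
    "rotarod_latency_s": ({"rotarod", "latency_to_fall_min"}, "s"),
}

# Conversion factors from a lab-specific alias to the canonical unit
UNIT_FACTORS = {("body_wt_kg", "g"): 1000.0, ("latency_to_fall_min", "s"): 60.0}


def harmonize(record: Dict[str, Any]) -> Dict[str, float]:
    """Map one lab's record onto the shared CDE names and canonical units."""
    harmonized = {}
    for cde, (aliases, unit) in CDE_MAP.items():
        for alias in aliases:
            if alias in record:
                factor = UNIT_FACTORS.get((alias, unit), 1.0)
                harmonized[cde] = float(record[alias]) * factor
    return harmonized


# Two labs reporting the same measures under different names and units
print(harmonize({"body_wt_kg": 0.032, "rotarod": 45}))        # lab A
print(harmonize({"weight": 31.5, "latency_to_fall_min": 0.8}))  # lab B
```

The point of the sketch is not the specific mapping but the requirement it exposes: without an agreed dictionary of element names and units, and guidelines obliging researchers to report against it, cross-lab pooling of preclinical data remains a manual, error-prone exercise.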

Overall, the best path forward may be for influential organizations to develop a comprehensive plan to enhance reproducibility and align on a standard set of policies. A coherent road map or strategy would ensure that all known factors related to this issue are addressed and reduce complications for investigators.