
Trials, 20:630

Comments on “Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals”

  • Rohan Kumar Ochani
  • Asim Shaikh
  • Naser Yamani
Open Access
Letter

Abstract

Randomized controlled trials are considered the gold standard for assessing treatment regimens, and since the abstract may be the only part of a paper that a physician reads, accurate reporting of data in abstracts is essential. The CONSORT checklist for abstracts was designed to standardize data reporting; however, the level of adherence to this checklist among papers submitted to anesthesiology journals is unknown. We therefore commend Janackovic and Puljak for their efforts in determining the adherence of trial reports in the highest-impact anesthesiology journals between 2014 and 2016. The results of their study are extremely important; however, we believe the study had some methodological limitations, which we discuss in this letter.

Keywords

Reporting quality · CONSORT for abstracts · Randomized controlled trials

Dear Editor,

The importance of adhering to the Consolidated Standards of Reporting Trials (CONSORT) checklist when reporting randomized controlled trials cannot be overstated, as the results of a trial can strongly influence clinical practice [1]. This is especially true of abstracts, since busy clinicians often rely solely on the abstract. Hence, we commend Janackovic and Puljak for their efforts in determining the adherence of trial reports to the CONSORT checklist for abstracts in the highest-impact anesthesiology journals between 2014 and 2016 [2]. The results of their study are extremely important; however, we believe the study has some methodological limitations.

Firstly, the study calculated an overall total adherence score across all trials, with each checklist item scored as “yes,” “no,” or “unclear,” thereby assigning equal weight to every item on the CONSORT checklist. We believe that scoring every item identically is not the best approach, as some items, such as randomization, blinding, and reporting of the primary outcome, evidently carry far more methodological importance than, say, providing the contact details of the authors [3]. Furthermore, a total adherence score is heavily influenced by a few items with extreme results: in the study, “interventions,” “objective,” “outcome,” and “conclusions” all scored above 90%, whereas “source of funding” scored only 0.2%. We suspect that these values had a profound impact on the total adherence score.
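To make this point concrete, the sketch below contrasts an equal-weight total adherence score with a weighted alternative. Only the “above 90%” and “0.2%” figures echo the study under discussion; all other per-item rates, and the weights themselves, are hypothetical values chosen for illustration, not the authors’ data or method.

```python
# Illustrative only: per-item adherence rates are hypothetical, except
# that four items scored above 90% and "source of funding" scored 0.2%
# in the study under discussion. The weights are our own assumption.
rates = {
    "interventions": 0.92,
    "objective": 0.95,
    "outcome": 0.91,
    "conclusions": 0.93,
    "randomization": 0.30,   # hypothetical rate for a critical item
    "funding": 0.002,        # 0.2%, as reported in the study
}

# Hypothetical weighting: a methodologically critical item counts triple.
weights = {item: (3.0 if item == "randomization" else 1.0) for item in rates}

def equal_weight_score(rates):
    """Mean adherence with every checklist item counted equally."""
    return sum(rates.values()) / len(rates)

def weighted_score(rates, weights):
    """Weighted mean adherence; critical items pull the score harder."""
    total = sum(weights[i] * rates[i] for i in rates)
    return total / sum(weights.values())

print(f"equal-weight: {equal_weight_score(rates):.3f}")
print(f"weighted:     {weighted_score(rates, weights):.3f}")
```

Under these assumed numbers the weighted score falls below the equal-weight score, showing how a handful of well-reported minor items can mask poor reporting of a critical one.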

Secondly, the study states that “two authors independently screened bibliographic results.” An inter-rater reliability test, such as Cohen’s kappa, would have been of great benefit here: multiple individuals collecting similar types of data often reach different conclusions, and variables subject to inter-rater error are common throughout the clinical literature [4]. Therefore, while resolving discrepancies via discussion may have produced a consensus, conducting an inter-rater reliability test would have quantified the level of disagreement and identified which variables were susceptible to error. The study does not report the level of agreement achieved.
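For readers unfamiliar with the statistic, Cohen’s kappa corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch follows; the two rating lists are invented for illustration and do not come from the study.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in c1.keys() | c2.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical item-level judgments ("yes"/"no"/"unclear") by two raters.
r1 = ["yes", "yes", "no", "no"]
r2 = ["yes", "no", "no", "no"]
print(cohens_kappa(r1, r2))  # 0.5: moderate agreement beyond chance
```

Reporting such a kappa per checklist item would have shown exactly which items the two screeners found hardest to judge consistently.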

Finally, the study compares the total adherence scores obtained for each journal and reports which had the highest and lowest scores. Note, however, that journals can have very different reporting criteria and policies for certain items [5]. Some journals insist that certain items be reported in the full text rather than in the abstract, and vice versa; moreover, there can be discrepancies between an abstract and its corresponding full text [6]. Comparing journals on their total adherence scores may therefore be misguided. Comparing individual checklist items between journals, especially important items such as allocation concealment, would be more effective at highlighting significant inadequacies in adherence to CONSORT.

Notes

Acknowledgements

Not applicable.

Authors’ contributions

RKO was responsible for conceiving this study, preparing most of the draft of the work, giving final approval, and the accuracy of the work. AS was responsible for conceiving this study, preparing most of the draft of the work, giving final approval, and the accuracy of the work. NY helped in the design of the study and was responsible for drafting the work, giving final approval, and the accuracy of the work.

Funding

No source of funding.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

References

  1. Falci SG, Marques LS. CONSORT: when and how to use it. Dental Press J Orthod. 2015;20(3):13–5.
  2. Janackovic K, Puljak L. Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals. Trials. 2018;19(1):591.
  3. Bridgman S, Engebretsen L, Dainty K, Kirkley A, Maffulli N; ISAKOS Scientific Committee. Practical aspects of randomization and blinding in randomized clinical trials. Arthroscopy. 2003;19(9):1000–6.
  4. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276–82.
  5. Shawwa K, Kallas R, Koujanian S, et al. Requirements of clinical journals for authors’ disclosure of financial and non-financial conflicts of interest: a cross sectional study. PLoS One. 2016;11(3):e0152301.
  6. Li G, Abbade LPF, Nwosu I, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Copyright information

© The Author(s). 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors and Affiliations

  1. Department of Internal Medicine, Dow University of Health Sciences, Karachi, Pakistan
  2. Department of Internal Medicine, Rush University Medical Center, Chicago, USA
