Assessing Review Reports of Scientific Articles: A Literature Review

  • Amanda Sizo
  • Adriano Lino
  • Álvaro Rocha
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 745)


Computational support has been applied to automate different stages of the peer review process, such as assigning reviewers to articles, reviewing the content of scientific articles, and detecting plagiarism and bias, all using Machine Learning (ML) techniques. However, there is a lack of studies identifying the instruments used to evaluate reviewers' reports. This systematic literature review aims to find evidence of which techniques have been applied in the assessment of reviewers' reports. To this end, six online databases were searched, yielding 55 articles published since 2000 that met the inclusion criteria of this review. Of these, 6 relevant studies address models for assessing reviews of scientific articles; the use of ML was not identified in any of them. Our findings therefore show that only a few instruments are used to assess reviewers' reports and, furthermore, that they cannot be reliably used to extensively automate the review process.


Keywords: Systematic literature review · Peer review · Assessment · Reviewers' report



We appreciate the financial support of AISTI (Iberian Association for Information Systems and Technologies), which enabled registration in WorldCIST'18 (6th World Conference on Information Systems and Technologies), held in Naples, Italy, 27–29 March 2018, and consequently this publication.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Informatics Engineering, Center for Informatics and Systems, University of Coimbra, Coimbra, Portugal
