Interrater Reliability of the Peer Review Process in Management Journals

  • Chapter

Abstract

Peer review is an established method of assessing the quality and contribution of academic work in most scientific disciplines. To date, however, little is known about interrater agreement among reviewers for management journals. This chapter provides an overview of the agreement among reviewers' judgments in management studies. The results of our literature review indicate a low level of agreement among reviewers for management journals. Low consensus, however, is not specific to management studies but is widely present in other sciences as well. We discuss the consequences and implications of low judgment agreement for management research.
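The agreement statistics surveyed in this literature (e.g., Cohen's kappa and related indices; see Fleiss and Cohen 1973) correct raw percentage agreement for agreement expected by chance. As a minimal sketch of how such a chance-corrected index is computed — the reviewer data below are invented for illustration and do not come from any of the studies reviewed:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Proportion of cases where the two raters gave the same category
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations from two reviewers on six manuscripts
rev_a = ["accept", "revise", "reject", "revise", "reject", "accept"]
rev_b = ["revise", "revise", "reject", "reject", "reject", "accept"]
print(round(cohen_kappa(rev_a, rev_b), 3))  # → 0.5
```

Here the reviewers agree on 4 of 6 manuscripts (0.67 raw agreement), but kappa drops to 0.5 once chance agreement is removed — which is why the studies reviewed report chance-corrected coefficients rather than raw agreement rates.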


Notes

  1.

    See Bornmann (2008, p. 26) and Cicchetti (1991, p. 129) for a list of literature on peer review research discussing different biases. See also Campanario (1998) who discusses fraud, favoritism, self-interest, the connections among authors, reviewers, and editors, as well as the suggestibility of particularistic criteria in the context of double-blind reviewing.

  2.

    Miller (2006, p. 429) does not report numerical results.

  3.

    Full disagreement implies that one referee recommended acceptance and the other rejection.

References

  • Aguillo IF, Granadino B, Ortega JL, Prieto JA (2006) Scientific research activity and communication measured with cybermetrics indicators. J Am Soc Inf Sci Technol 57(10):1296–1302

  • Bartko JJ (1976) On various intraclass correlation reliability coefficients. Psychol Bull 83(5):762–765

  • Bedeian AG (2003) The manuscript review process: the proper roles of authors, referees, and editors. J Manag Inq 12(4):331–338

  • Beyer JM, Chanove RG, Fox WB (1995) The review process and the fates of manuscripts submitted to AMJ. Acad Manage J 38(5):1219–1260

  • Bornmann L (2008) Scientific peer review: an analysis of the peer review process from the perspective of sociology of science theories. Hum Archit 6(2):23–38

  • Bornmann L (2011) Scientific peer review. Annu Rev Inf Sci Technol 45:199–245

  • Bornmann L, Daniel H-D (2008) The effectiveness of the peer review process: inter-referee agreement and predictive validity of manuscript refereeing at Angewandte Chemie. Angew Chem Int Ed 47(38):7173–7178

  • Bornmann L, Mutz R, Daniel H-D (2010) A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants. PLoS ONE 5(12):e14331

  • Campanario JM (1998) Peer review for journals as it stands today, part 1. Sci Commun 19(3):181–211

  • Cicchetti DV (1980) Reliability of reviews for the American Psychologist: a biostatistical assessment of the data. Am Psychol 35(3):300–303

  • Cicchetti DV (1991) The reliability of peer review for manuscript and grant submissions: a cross-disciplinary investigation. Behav Brain Sci 14(1):119–135

  • Cohen DJ (2007) The very separate worlds of academic and practitioner publications in human resource management: reasons for the divide and concrete solutions for bridging the gap. Acad Manag J 50:1013–1019

  • Cole S, Cole JR, Simon GA (1981) Chance and consensus in peer review. Science 214(4523):881–886

  • Conger AJ, Ward DG (1984) Agreement among 2 × 2 agreement indices. Educ Psychol Meas 44(2):301–314

  • Cummings LL, Frost PJ, Vakil TF (1985) The manuscript review process: a view from inside on coaches, critics, and special cases. In: Cummings LL, Frost PJ (eds) Publishing in the organizational sciences. Irwin, Homewood, pp 469–508

  • Daniel H-D (1993) Guardians of science: fairness and reliability of peer review. VCH, Weinheim

  • Fleiss JL, Cohen J (1973) The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas 33(3):613–619

  • Frey BS (2003) Publishing as prostitution? Choosing between one's own ideas and academic success. Public Choice 116(1–2):205–223

  • Gans JS, Shepherd GB (1994) How are the mighty fallen: rejected classic articles by leading economists. J Econ Perspect 8(1):165–179

  • Goodman LA (1984) The analysis of cross-classified data having ordered categories. Harvard University Press, Cambridge

  • Hargens LL, Herting JR (1990) A new approach to referees' assessments of manuscripts. Soc Sci Res 19(1):1–16

  • Hendrick C (1976) Editorial comment. Pers Soc Psychol Bull 2:207–208

  • Hubbard R, Vetter DE, Little EL (1998) Replication in strategic management: scientific testing for validity, generalizability, and usefulness. Strateg Manage J 19:243–254

  • Hunt JG, Blair JD (1987) Content, process, and the Matthew effect among management academics. J Manage 13(2):191–210

  • Ketchen D, Ireland RD (2010) From the editors, upon further review: a survey of the Academy of Management Journal's editorial board. Acad Manage J 53(2):208–217

  • Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, Tierney WM (2010) Editorial peer reviewers' recommendations at a general medical journal: are they reliable and do editors care? PLoS ONE 5(4):1–5

  • Kuhn T (1962) The structure of scientific revolutions, vol 2. University of Chicago Press, Chicago

  • Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33(1):159–174

  • Lazarus D (1982) Interreferee agreement and acceptance rates in physics. Behav Brain Sci 5(2):219

  • Lodahl JB, Gordon G (1972) The structure of scientific fields and the functioning of university graduate departments. Am Sociol Rev 37(1):57–72

  • Marsh HW, Jayasinghe UW, Bond NW (2008) Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. Am Psychol 63(3):160–168

  • Merton RK (1968) The Matthew effect in science. Science 159(3810):56–60

  • Miller CC (2006) From the editors: peer review in the organizational and management sciences: prevalence and effects of reviewer hostility, bias, and dissensus. Acad Manage J 49(3):425–431

  • Mutz R, Bornmann L, Daniel H-D (2012) Heterogeneity of inter-rater reliabilities of grant peer reviews and its determinants: a general estimating equations approach. PLoS ONE 7(10):1–10

  • Nicolai AT, Schulz A-C, Göbel M (2011) Between sweet harmony and a clash of cultures: does a joint academic–practitioner review reconcile rigor and relevance? J Appl Behav Sci 47(1):53–75

  • Pfeffer J (1993) Barriers to the advance of organizational science: paradigm development as a dependent variable. Acad Manage Rev 18(4):599–620

  • Rowland F (2002) The peer-review process. Learned Publish 15(4):247–258

  • Schulz A-C, Nicolai A (2014) The intellectual link between management research and popularization media: a bibliometric analysis of the Harvard Business Review. Acad Manage Learn Educ, forthcoming

  • Shrout PE, Fleiss JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychol Bull 86(2):420–428

  • Spitzer RL, Fleiss JL (1974) A re-analysis of the reliability of psychiatric diagnosis. Br J Psychiatry 125(587):341–347

  • Starbuck WH (2003) Turning lemons into lemonade: where is the value in peer reviews? J Manage Inq 12(4):344–351

  • Starbuck WH (2005) How much better are the most-prestigious journals? The statistics of academic publication. Organ Sci 16(2):180–200

  • Tang M-C, Wang C-M, Chen K-H, Hsiang J (2012) Exploring alternative cyberbibliometrics for evaluation of scholarly performance in the social sciences and humanities in Taiwan. Proc Am Soc Inf Sci Technol 49(1):1

  • Thelwall M (2008) Bibliometrics to webometrics. J Inf Sci 34(4):605–621

  • Thelwall M, Haustein S, Larivière V, Sugimoto CR (2013) Do altmetrics work? Twitter and ten other social web services. PLoS ONE 8(5):e64841

  • Tinsley HE, Weiss DJ (1975) Interrater reliability and agreement of subjective judgments. J Couns Psychol 22(4):358–376

  • Watkins MW (1979) Chance and interrater agreement on manuscripts. Am Psychol 34(9):796–798

  • Weller AC (2001) Editorial peer review: its strengths and weaknesses. ASIST monograph series. Hampton Press, New Jersey

  • Weller K (2015) Social media and altmetrics: an overview of current alternative approaches to measuring scholarly impact. In: Welpe IM, Wollersheim J, Ringelhan S, Osterloh M (eds) Incentives and performance: governance of research organizations. Springer International Publishing AG, Cham

  • Whitehurst GJ (1984) Interrater agreement for journal manuscript reviews. Am Psychol 39(1):22–28

  • Whitley R (1984) The development of management studies as a fragmented adhocracy. Soc Sci Inf 23(4–5):775–818

  • Zammuto RF (1984) Coping with disciplinary fragmentation. J Manage Educ 9(3):30–37

  • Zuckerman H, Merton RK (1971) Patterns of evaluation in science: institutionalisation, structure and functions of the referee system. Minerva 9(1):66–100

Author information

Corresponding author

Correspondence to Alexander T. Nicolai.

Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Nicolai, A.T., Schmal, S., Schuster, C.L. (2015). Interrater Reliability of the Peer Review Process in Management Journals. In: Welpe, I., Wollersheim, J., Ringelhan, S., Osterloh, M. (eds) Incentives and Performance. Springer, Cham. https://doi.org/10.1007/978-3-319-09785-5_7
