Abstract
Peer review is an established method of assessing the quality and contribution of academic work in most scientific disciplines. To date, however, little is known about interrater agreement among reviewers at management journals. This paper provides an overview of agreement among reviewers' judgments in management studies. The results of our literature review indicate a low level of agreement among reviewers at management journals. However, low consensus is not specific to management studies; it is widely present in other sciences as well. We discuss the consequences and implications of low judgment agreement for management research.
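The interrater agreement discussed in the abstract is typically quantified with chance-corrected coefficients such as Cohen's kappa (cf. Fleiss and Cohen 1973; Landis and Koch 1977). As a minimal sketch, with invented reviewer recommendations used purely for illustration, kappa for two referees rating the same manuscripts can be computed as follows:

```python
# Illustrative only: Cohen's kappa for two hypothetical reviewers'
# accept/revise/reject recommendations (the data below are invented).
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters on nominal categories."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of identical recommendations
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

rev1 = ["accept", "reject", "revise", "reject", "accept", "revise"]
rev2 = ["revise", "reject", "accept", "accept", "accept", "revise"]
print(round(cohens_kappa(rev1, rev2), 2))  # 0.25: only "fair" agreement
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; values in the 0.2 to 0.4 range, as in this toy example, correspond to the "fair" agreement often reported in the peer review studies surveyed here.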
Notes
1. See Bornmann (2008, p. 26) and Cicchetti (1991, p. 129) for a list of literature on peer review research discussing different biases. See also Campanario (1998), who discusses fraud, favoritism, self-interest, the connections among authors, reviewers, and editors, as well as the suggestibility of particularistic criteria in the context of double-blind reviewing.
2. Miller (2006, p. 429) does not report numerical results.
3. Full disagreement implies that one referee recommended acceptance and the other rejection.
References
Aguillo IF, Granadino B, Ortega JL, Prieto JA (2006) Scientific research activity and communication measured with cybermetrics indicators. J Am Soc Inf Sci Technol 57(10):1296–1302
Bartko JJ (1976) On various intraclass correlation reliability coefficients. Psychol Bull 83(5):762–765
Bedeian AG (2003) The manuscript review process: the proper roles of authors, referees, and editors. J Manag Inq 12(4):331–338
Beyer JM, Roland GC, Fox WB (1995) The review process and the fates of manuscripts submitted to AMJ. Acad Manage J 38(5):1219–1260
Bornmann L (2008) Scientific peer review. An analysis of the peer review process from the perspective of sociology of science theories. Hum Archit 6(2):23–38
Bornmann L (2011) Scientific peer review. Ann Rev Inf Sci Technol 45:199–245
Bornmann L, Daniel H-D (2008) The effectiveness of the peer review process: inter-referee agreement and predictive validity of manuscript refereeing at Angewandte Chemie. Angew Chem Int Ed 47(38):7173–7178
Bornmann L, Mutz R, Daniel H-D (2010) A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants. PLoS ONE 5(12):e14331
Campanario JM (1998) Peer review for journals as it stands today: part 1. Sci Commun 19(3):181–211
Cicchetti DV (1980) Reliability of reviews for the American psychologist: a biostatistical assessment of the data. Am Psychol 35(3):300–303
Cicchetti DV (1991) The reliability of peer review for manuscript and grant submissions: a cross-disciplinary investigation. Behav Brain Sci 14(01):119–135
Cohen DJ (2007) The very separate worlds of academic and practitioner publications in human resource management: reasons for the divide and concrete solutions for bridging the gap. Acad Manag J 50:1013–1019
Cole S, Cole RJ, Simon AG (1981) Chance and consensus in peer review. Science 214(4523):881–886
Conger AJ, Ward DG (1984) Agreement among 2 × 2 agreement indices. Educ Psychol Meas 44(2):301–314
Cummings LL, Frost PJ, Vakil TF (1985) The manuscript review process: a view from inside on coaches, critics, and special cases. In: Cummings LL, Frost PJ (eds) Publishing in the organizational sciences. Irwin, Homewood, pp 469–508
Daniel H-D (1993) Guardians of science: fairness and reliability of peer review. VCH, Weinheim
Fleiss JL, Cohen J (1973) The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas 33(3):613–619
Frey BS (2003) Publishing as prostitution? - choosing between one’s own ideas and academic success. Public Choice 116(1–2):205–223
Gans JS, Shepherd GB (1994) How are the mighty fallen: rejected classic articles by leading economists. J Econ Perspect 8(1):165–179
Goodman LA (1984) The analysis of cross-classified data having ordered categories. Harvard University Press, Cambridge
Hargens LL, Herting JR (1990) A new approach to referees’ assessments of manuscripts. Soc Sci Res 19(1):1–16
Hendrick C (1976) Editorial comment. Pers Soc Psychol Bull 2:207–208
Hubbard R, Vetter DE, Littel EL (1998) Replication in strategic management: scientific testing for validity, generalizability, and usefulness. Strateg Manage J 19:243–254
Hunt JG, Blair JD (1987) Content, process, and the Matthew effect among management academics. J Manage 13(2):191–210
Ketchen D, Ireland RD (2010) From the editors upon further review: a survey of the academy of management journal’s editorial board. Acad Manage J 53(2):208–217
Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, Tierney WM (2010) Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS ONE 5(4):1–5
Kuhn T (1962) The structure of scientific revolutions, vol 2. University of Chicago Press, Chicago
Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33(1):159–174
Lazarus D (1982) Interreferee agreement and acceptance rates in physics. Behav Brain Sci 5(2):219
Lodahl JB, Gordon G (1972) The structure of scientific fields and the functioning of university graduate departments. Am Sociol Rev 37(1):57–72
Marsh HW, Jayasinghe UW, Bond NW (2008) Improving the peer-review process for grant applications. Reliability, validity, bias, and generalizability. Am Psychol 63(3):160–168
Merton RK (1968) The Matthew effect in science. Science 159(3810):56–60
Miller CC (2006) From the editors: peer review in the organizational and management sciences: prevalence and effects of reviewer hostility, bias, and dissensus. Acad Manage J 49(3):425–431
Mutz R, Bornmann L, Daniel H-D (2012) Heterogeneity of inter-rater reliabilities of grant peer reviews and its determinants: a general estimating equations approach. PLoS ONE 7(10):1–10
Nicolai AT, Schulz A-C, Göbel M (2011) Between sweet harmony and a clash of cultures: does a joint academic–practitioner review reconcile rigor and relevance? J Appl Behav Sci 47(1):53–75
Pfeffer J (1993) Barriers to the advance of organizational science: paradigm development as a dependent variable. Acad Manage Rev 18(4):599–620
Rowland F (2002) The peer-review process. Learned Publish 15(4):247–258
Schulz A-C, Nicolai A (2014) The intellectual link between management research and popularization media: a bibliometric analysis of the Harvard Business Review. Acad Manage Learn Educ, forthcoming
Shrout PE, Fleiss JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychol Bull 86(2):420–428
Spitzer RL, Fleiss JL (1974) A re-analysis of the reliability of psychiatric diagnosis. Br J Psychiatry 125(587):341–347
Starbuck WH (2003) Turning lemons into lemonade: where is the value in peer reviews? J Manage Inq 12(4):344–351
Starbuck WH (2005) How much better are the most-prestigious journals? The statistics of academic publication. Organ Sci 16(2):180–200
Tang M-C, Wang C-M, Chen K-H, Hsiang J (2012) Exploring alternative cyberbibliometrics for evaluation of scholarly performance in the social sciences and humanities in Taiwan. Proc Am Soc Inf Sci Technol 49(1):1
Thelwall M (2008) Bibliometrics to webometrics. J Inf Sci 34(4):605–621
Thelwall M, Haustein S, Larivière V, Sugimoto CR (2013) Do altmetrics work? Twitter and ten other social web services. PLoS ONE 8(5):e64841
Tinsley HE, Weiss DJ (1975) Interrater reliability and agreement of subjective judgments. J Couns Psychol 22(4):358–376
Watkins MW (1979) Chance and interrater agreement on manuscripts. Am Psychol 34(9):796–798
Weller AC (2001) Editorial peer review: its strengths and weaknesses. ASIST monograph series. Hampton Press, New Jersey
Weller K (2015) Social media and altmetrics: an overview of current alternative approaches to measuring scholarly impact. In: Welpe IM, Wollersheim J, Ringelhan S, Osterloh M (eds) Incentives and performance: governance of research organizations. Springer International Publishing AG, Cham
Whitehurst GJ (1984) Interrater agreement for journal manuscript reviews. Am Psychol 39(1):22–28
Whitley R (1984) The development of management studies as a fragmented adhocracy. Soc Sci Inf 23(4–5):775–818
Zammuto RF (1984) Coping with disciplinary fragmentation. J Manage Educ 9(30):30–37
Zuckerman H, Merton RK (1971) Patterns of evaluation in science: institutionalisation, structure and functions of the referee system. Minerva 9(1):66–100
© 2015 Springer International Publishing Switzerland
Nicolai, A.T., Schmal, S., Schuster, C.L. (2015). Interrater Reliability of the Peer Review Process in Management Journals. In: Welpe, I., Wollersheim, J., Ringelhan, S., Osterloh, M. (eds) Incentives and Performance. Springer, Cham. https://doi.org/10.1007/978-3-319-09785-5_7
DOI: https://doi.org/10.1007/978-3-319-09785-5_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-09784-8
Online ISBN: 978-3-319-09785-5
eBook Packages: Business and Economics; Business and Management (R0)