Interrater Reliability

Living reference work entry in: Encyclopedia of Personality and Individual Differences

Synonyms

Absolute agreement; Concordance; Observer agreement

Definition

Concordance, agreement, or reliability of ratings or observational data. Measures of interrater reliability and interrater agreement are estimates of the accuracy or the rater-independent invariance of ratings or observational data.
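
To make such measures concrete, the following minimal Python sketch (an illustration added here, not part of the original entry) computes raw percentage agreement and Cohen’s kappa, a chance-corrected agreement index, for two raters assigning nominal categories; the rating vectors are invented for demonstration:

  from collections import Counter

  def percent_agreement(r1, r2):
      """Proportion of cases on which the two raters assign the same category."""
      return sum(a == b for a, b in zip(r1, r2)) / len(r1)

  def cohens_kappa(r1, r2):
      """Cohen's kappa: observed agreement corrected for the agreement
      expected by chance from the raters' marginal category frequencies."""
      n = len(r1)
      p_o = percent_agreement(r1, r2)
      c1, c2 = Counter(r1), Counter(r2)
      p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
      return (p_o - p_e) / (1 - p_e)

  # Hypothetical diagnostic codes assigned by two clinicians to ten cases
  rater_a = ["anx", "dep", "dep", "anx", "none", "dep", "anx", "none", "dep", "anx"]
  rater_b = ["anx", "dep", "anx", "anx", "none", "dep", "anx", "dep", "dep", "anx"]

  print(percent_agreement(rater_a, rater_b))       # 0.8
  print(round(cohens_kappa(rater_a, rater_b), 2))  # about 0.68

Kappa is lower than raw agreement here because part of the observed agreement would be expected even if both raters coded at random with the same marginal category frequencies.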

Introduction

Rating methods are frequently used in personality psychology to measure a person’s trait or state characteristics. Rating data are estimates of persons’ (ratees’) characteristic values from an independent third-party perspective (the rater; e.g., expert ratings, external judgments, clinical diagnoses). Typically, it is assumed that rating data reflect ratees’ characteristic values independently of the rater who made the assessment. Hence, raters are assumed to be interchangeable, as their individual perspectives should be negligible. Measures of interrater agreement or interrater reliability can be used as psychometric indicators of the tenability of this assumption.
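
For interval-scaled ratings from several raters, a common reliability index of this kind is the intraclass correlation (ICC). As an added illustration (not part of the original entry), the sketch below computes ICC(2,1), the two-way random-effects coefficient for absolute agreement of a single rater in the Shrout and Fleiss taxonomy, from a ratee-by-rater score matrix; the data matrix is invented:

  import numpy as np

  def icc_2_1(ratings):
      """ICC(2,1): two-way random effects, absolute agreement, single rater.
      `ratings` is an (n ratees x k raters) matrix of interval-scaled scores."""
      x = np.asarray(ratings, dtype=float)
      n, k = x.shape
      grand = x.mean()
      ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-ratee variation
      ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-rater variation
      ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual variation
      ms_rows = ss_rows / (n - 1)
      ms_cols = ss_cols / (k - 1)
      ms_err = ss_err / ((n - 1) * (k - 1))
      return (ms_rows - ms_err) / (
          ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

  # Hypothetical trait ratings: six ratees (rows) judged by three raters (columns)
  scores = [[4, 5, 4],
            [2, 2, 3],
            [5, 5, 5],
            [3, 4, 3],
            [1, 2, 2],
            [4, 4, 5]]
  print(round(icc_2_1(scores), 2))  # about 0.85 for these invented data

Values close to 1 indicate that the scores hardly depend on which rater produced them, supporting the interchangeability assumption; values near 0 indicate that rater-specific perspectives dominate.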

Author information

Correspondence to Markus Antonius Wirtz.

Copyright information

© 2017 Springer International Publishing AG

About this entry

Cite this entry

Wirtz, M.A. (2017). Interrater Reliability. In: Zeigler-Hill, V., Shackelford, T. (eds) Encyclopedia of Personality and Individual Differences. Springer, Cham. https://doi.org/10.1007/978-3-319-28099-8_1317-1

  • DOI: https://doi.org/10.1007/978-3-319-28099-8_1317-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28099-8

  • Online ISBN: 978-3-319-28099-8

  • eBook Packages: Springer Reference Behavioral Science and Psychology; Reference Module Humanities and Social Sciences; Reference Module Business, Economics and Social Sciences
