Encyclopedia of Clinical Neuropsychology

2018 Edition
Editors: Jeffrey S. Kreutzer, John DeLuca, Bruce Caplan

Inter-rater Reliability

  • Rael T. Lange
Reference work entry
DOI: https://doi.org/10.1007/978-3-319-57111-9_1203

Synonyms

Concordance; Inter-observer reliability; Inter-rater agreement; Scorer reliability

Definition

The extent to which two or more raters (or observers, coders, examiners) agree. Inter-rater reliability addresses the consistency with which a rating system is applied. It can be evaluated using a number of different statistics; some of the more common are percentage agreement, kappa, the product-moment correlation, and the intraclass correlation coefficient. High inter-rater reliability values indicate a high degree of agreement between two examiners; low values indicate a low degree of agreement. Examples of the use of inter-rater reliability in neuropsychology include (a) the evaluation of the consistency of clinicians' neuropsychological diagnoses, (b) the evaluation of scoring parameters on drawing tasks such as the Rey Complex Figure Test or the Visual Reproduction subtest, and (c) the evaluation of scoring...
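
For illustration, a minimal sketch (in Python) of two of the statistics named above, percentage agreement and Cohen's kappa, for two raters assigning categorical scores. The worked example, data, and variable names are illustrative assumptions and are not drawn from this entry.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters give the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement corrected for chance, (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)   # observed agreement
    counts_a = Counter(rater_a)                 # marginal score frequencies, rater A
    counts_b = Counter(rater_b)                 # marginal score frequencies, rater B
    # Chance-expected agreement from the product of the marginal proportions
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two examiners scoring 10 drawings as pass (1) or fail (0)
examiner_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(examiner_1, examiner_2))  # 0.8
print(cohens_kappa(examiner_1, examiner_2))       # approximately 0.52

Kappa is typically lower than raw percentage agreement because it discounts agreement expected by chance alone.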

Further Reading

  1. Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River: Prentice Hall.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Defense and Veterans Brain Injury Center, Walter Reed National Military Medical Center, Bethesda, USA