Concordance; Inter-observer reliability; Inter-rater agreement; Scorer reliability
The extent to which two or more raters (or observers, coders, examiners) agree. Inter-rater reliability addresses the consistency with which a rating system is applied. It can be evaluated using a number of different statistics; some of the more common are percentage agreement, kappa, the product-moment correlation, and the intraclass correlation coefficient. High inter-rater reliability values indicate a high degree of agreement between examiners, whereas low values indicate a low degree of agreement. Examples of the use of inter-rater reliability in neuropsychology include (a) the evaluation of the consistency of clinicians' neuropsychological diagnoses, (b) the evaluation of scoring parameters on drawing tasks such as the Rey Complex Figure Test or Visual Reproduction Subtest, and (c) the evaluation of scoring...
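As a minimal illustrative sketch (not part of the original entry), the following Python snippet shows how two of the statistics named above, percentage agreement and Cohen's kappa, can be computed for two raters assigning categorical scores; the rating data are hypothetical.

```python
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    """Proportion of cases on which the two raters give the same score."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    p_o = percent_agreement(ratings_a, ratings_b)
    # Expected chance agreement from each rater's marginal score frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail scores from two examiners rating the same ten protocols.
rater_1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(percent_agreement(rater_1, rater_2))  # 0.8
print(cohens_kappa(rater_1, rater_2))       # approximately 0.58
```

Kappa is typically preferred over raw percentage agreement because it discounts the agreement that would be expected by chance alone.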
- Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River: Prentice Hall.Google Scholar