Abstract
Interrater agreement is usually used as an aid in determining the reliability of measurements based on human coding (of texts or of observations). Reliability is usually seen as a precondition for validity: do the data represent what they are supposed to represent? This chapter provides background information on what is meant by the two terms reliability and validity.
© 2019 Springer Nature Switzerland AG
Cite this chapter
Popping, R. (2019). Reliability and Validity. In: Introduction to Interrater Agreement for Nominal Data. Springer, Cham. https://doi.org/10.1007/978-3-030-11671-2_2
Print ISBN: 978-3-030-11670-5
Online ISBN: 978-3-030-11671-2