Abstract
This chapter presents our reflections on validation practices in the social, behavioral, and health sciences. Based on the 15 syntheses included in this volume, we make both narrow and broad recommendations. First, we discuss our observations about construct validation and about relations with other variables as validity evidence. Second, we observe that validation studies are rarely guided by any theoretical orientation or validity perspective, and that validation practices often feel opportunistic or somewhat haphazard. The Standards and other descriptions of current validity theories appear to lack practical guidance. We therefore recommend that validation studies follow an explicit “validation plan” guided by some conceptual or theoretical orientation. We close the chapter with reflections on two aspects that are consistently under-represented in validation practice: a concern for response processes and for consequences. A framework that guides validation practice by highlighting consequences is described.
Notes
- 1. This remark about haphazard validation practices excludes the work of professional testing organizations, testing and assessment divisions of government agencies, and test publishers. Over the last 10 years we have observed a marked increase in interest in systematic validation plans at testing agencies and institutions.
References
American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Castillo-Díaz, M., & Padilla, J.-L. (2013). How cognitive interviewing can provide validity evidence of the response processes to scale items. Social Indicators Research, 114, 963–975.
Chapelle, C. A., Enright, M. K., & Jamieson, J. (2010). Does an argument-based approach to validity make a difference? Educational Measurement: Issues and Practice, 29(1), 3–13.
Cobo, E., Cortés, J., Ribera, J. M., Cardellach, F., Selva-O’Callaghan, A., Kostov, B., et al. (2011). Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: Masked randomized trial. British Medical Journal, 343, d6783.
Embretson, S. E. (1983). Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93(1), 179–197.
Embretson, S. E. (2007). Construct validity: A universal validity system or just another test evaluation procedure? Educational Researcher, 36(8), 449–455.
Forer, B., & Zumbo, B. D. (2011). Validation of multilevel constructs: Validation methods and empirical findings for the EDI. Social Indicators Research, 103(2), 231–265.
Gadermann, A. M., Guhn, M., & Zumbo, B. D. (2011). Investigating the substantive aspect of construct validity for the satisfaction with life scale adapted for children: A focus on cognitive processes. Social Indicators Research, 100, 37–60.
Hubley, A. M., & Zumbo, B. D. (2011). Validity and the consequences of test interpretation and use. Social Indicators Research, 103, 219–230.
Hubley, A. M., & Zumbo, B. D. (2013). Psychometric characteristics of assessment procedures: An overview. In K. F. Geisinger (Ed.), APA handbook of testing and assessment in psychology (Vol. 1, pp. 3–19). Washington, DC: American Psychological Association Press.
Kane, M. T. (2004). Certification testing as an illustration of argument-based validation. Measurement: Interdisciplinary Research and Perspectives, 2(3), 135–170.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport: American Council on Education/Praeger.
Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1–73.
Lissitz, R. W., & Samuelsen, K. (2007). A suggested change in terminology and emphasis regarding validity and education. Educational Researcher, 36(8), 437–448.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: American Council on Education and Macmillan.
Moss, P. A. (2007). Reconstructing validity. Educational Researcher, 36(8), 470–476.
Shepard, L. A. (1993). Evaluating test validity. Review of Research in Education, 19(1), 405–450.
Zumbo, B. D. (2007). Validity: Foundational issues and statistical methodology. In C. R. Rao & S. Sinharay (Eds.), Psychometrics (Handbook of statistics, Vol. 26, pp. 45–79). Amsterdam/Boston: Elsevier Science B.V.
Zumbo, B. D. (2009). Validity as contextualized and pragmatic explanation, and its implications for validation practice. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions and applications (pp. 65–82). Charlotte: Information Age Publishing.
Zumbo, B. D., & Forer, B. (2011). Testing and measurement from a multilevel view: Psychometrics and validation. In J. A. Bovaird & K. F. Geisinger (Eds.), High stakes testing in education: Science and practice in K-12 settings (pp. 177–190). Washington, DC: American Psychological Association.
© 2014 Springer International Publishing Switzerland
Cite this chapter
Zumbo, B.D., Chan, E.K.H. (2014). Reflections on Validation Practices in the Social, Behavioral, and Health Sciences. In: Zumbo, B., Chan, E. (eds) Validity and Validation in Social, Behavioral, and Health Sciences. Social Indicators Research Series, vol 54. Springer, Cham. https://doi.org/10.1007/978-3-319-07794-9_19
DOI: https://doi.org/10.1007/978-3-319-07794-9_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-07793-2
Online ISBN: 978-3-319-07794-9