Fair Testing and the Role of Accessibility

  • Elizabeth A. Stone
  • Linda L. Cook


Psychometricians who agree that fairness is a desirable goal in testing may nevertheless disagree about whether scores from a particular testing program support fair inferences about test takers. Most psychometricians do agree that fairness is a fundamental validity issue that should be addressed from the very conception of a new test or testing process. One commonly adopted position is that fair interpretations of test results are based on scores that have comparable meaning for all individuals in the intended population, and that fair test score interpretations do not advantage or disadvantage test takers because of characteristics that are irrelevant to the construct the test is intended to measure. An important concept associated with fairness in testing is that of an accessible assessment. From a practical standpoint, we address (a) how to create accessible assessments, with a focus on the design and development of the construct, content, format, response mode, and score reports; (b) how assistive technology can be used to increase accessibility and fairness for some groups of test takers; (c) what happens if assessments continue to present barriers to some groups of test takers in spite of efforts to make them accessible; and (d) the need for test accommodations and modifications, including how to form accommodation policies. Finally, we provide suggestions for how to evaluate the fairness and accessibility of an assessment.


Keywords: Accommodations · Disabilities · English learners · Modifications · Score comparability · Test development · Test security · Testing policy · Universal design · Validity

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Educational Testing Service, Princeton, USA