
The Assessment Landscape in the United States: From Then to the Future

  • Eva L. Baker
  • Harold F. O’Neil Jr.

Abstract

What is the state of assessment in the United States? The answer, as always, depends upon test purposes, and specifically upon who is being tested. The goal of this chapter is to review this question principally for accountability purposes at the pre-collegiate level (excluding early childhood), with a brief discussion of workforce contexts. We focus on external rather than teacher-made examinations and, where relevant, reference pertinent federal policy changes governing or influencing widespread assessment practices. Because of space limitations, we emphasize pre-collegiate assessment. Part of our discussion addresses societal influences, in particular new technologies and their benefits. Drawing on our current work on assessments in technical domains, we highlight important practices and close with summary thoughts about trends influencing assessment. We use the terms test, assessment, and exam relatively interchangeably. Throughout the chapter, as an important subtext, we reflect on the level of evidence routinely provided about examination quality.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. UCLA Graduate School of Education & Information Studies, Los Angeles, USA
  2. National Center for Research on Evaluation, Standards, and Student Testing (CRESST), Los Angeles, USA
  3. University of Southern California, Los Angeles, USA