
Comparing and assessing the consequences of two different approaches to measuring school effectiveness

  • Cassandra M. Guarino
  • Brian W. Stacy
  • Jeffrey M. Wooldridge
Article

Abstract

Nations, states, and districts must choose among an array of different approaches to measuring school effectiveness in implementing their accountability policies, and the choice can be difficult because different approaches yield different results. This study compares two approaches to computing school effectiveness: a “beating the odds” type approach and a “value-added” approach. We analyze the approaches using both administrative data and simulated data and identify the reasons why they produce different results. We find that the differences are driven by a combination of factors related to modeling decisions as well as bias stemming from nonrandom assignment. Generally, we find that the value-added method provides a more defensible measure of school effectiveness based purely on test scores, but we note advantages and disadvantages of both approaches. This study highlights the consequences of several of the many choices policymakers face when selecting a methodology for measuring school effectiveness.

Keywords

Accountability · Educational policy · School accountability · Evaluation and assessment

Notes

Funding information

We are grateful to the State Charter Schools Commission of Georgia and the Georgia Department of Education for funding the study that formed the basis for this paper.


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. University of California Riverside, Riverside, USA
  2. Michigan State University, East Lansing, USA
