Journal of Productivity Analysis, Volume 49, Issue 1, pp 1–15

Using DEA for measuring teachers’ performance and the impact on students’ outcomes: evidence for Spain

  • Daniel Santín
  • Gabriela Sicilia


Abstract

This research contributes to the ongoing debate about differences in teachers’ performance. We introduce a new methodology that combines production-frontier and impact-evaluation insights, using DEA as an identification strategy to distinguish high-quality from low-quality teachers within schools and assess their performance. We use a unique database of Spanish primary schools that, for every school, supplies information on two fourth-grade classrooms to which students and teachers were randomly assigned. We find considerable differences in teachers’ efficiency across schools, with significant effects on students’ achievement. In line with previous findings, neither teacher experience nor academic training explains teachers’ efficiency. Conversely, being a female teacher, having worked five or more years in the same school, and teaching smaller classes positively affect teachers’ performance.
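The DEA scores at the core of the method can be illustrated with a minimal sketch: an output-oriented BCC (variable-returns-to-scale) model solved as a linear program for each decision-making unit. This is a standard textbook formulation, not the authors' own implementation; the function name and the toy classroom data below are purely illustrative.

```python
# Minimal output-oriented BCC (VRS) DEA sketch using a generic LP solver.
# Assumption: one input and one output per classroom; the paper's exact
# input/output specification may differ.
import numpy as np
from scipy.optimize import linprog

def dea_output_bcc(X, Y):
    """Return output-oriented VRS efficiency scores in (0, 1].

    X: (n_dmus, n_inputs) inputs; Y: (n_dmus, n_outputs) outputs.
    For each DMU o we solve
        max phi  s.t.  X' lam <= x_o,  Y' lam >= phi * y_o,
                       sum(lam) = 1,  lam >= 0,
    and report 1/phi, so 1.0 means the unit lies on the frontier.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision vector: [phi, lam_1, ..., lam_n]; linprog minimizes, so use -phi
        c = np.concatenate(([-1.0], np.zeros(n)))
        # input constraints: sum_j lam_j * x_ij <= x_io
        A_in = np.hstack([np.zeros((m, 1)), X.T])
        # output constraints: phi * y_ro - sum_j lam_j * y_rj <= 0
        A_out = np.hstack([Y[o].reshape(s, 1), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([X[o], np.zeros(s)])
        # VRS convexity constraint: sum_j lam_j = 1
        A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(1.0 / res.x[0])
    return np.array(scores)

# Toy data: one input (e.g. teaching hours), one output (e.g. mean test score)
X = [[1.0], [2.0], [4.0], [3.0]]
Y = [[1.0], [4.0], [5.0], [2.0]]
print(np.round(dea_output_bcc(X, Y), 3))  # first three units are efficient
```

Under variable returns to scale the fourth unit is benchmarked against a convex combination of the second and third, so its score falls below one while the others define the frontier.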


Keywords

Teachers’ performance · Efficiency · DEA · Causal inference · Primary education

JEL classification

I21 C14 



We thank two anonymous referees for helpful discussions and suggestions. The authors acknowledge research support from the Fundación Ramón Areces. Gabriela Sicilia gratefully acknowledges financial support from the Agencia Nacional de Investigación e Innovación.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing interests.



Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. Department of Applied Economics VI, Complutense University of Madrid, Madrid, Spain
  2. Department of Economics and Public Finance, Universidad Autónoma de Madrid, Madrid, Spain
