
Empirical Economics, Volume 56, Issue 1, pp 107–135

Does the educational management model matter? New evidence from a quasiexperimental approach

  • María Jesús Mancebón
  • Domingo P. Ximénez-de-Embún
  • Mauro Mediavilla
  • José María Gómez-Sancho

Abstract

A growing literature has emerged over the last two decades exploring whether the way in which publicly funded private schools are managed (a highly autonomous model) is more effective at promoting students’ educational skills than the approach applied in public schools (where decisions are highly centralized). Our paper contributes to this literature by providing new evidence from the Spanish experience. To this end, we use the Spanish assessment named “Evaluación de Diagnóstico,” a yearly national standardized test given to students in the fourth grade and administered by the Regional Educational Authorities. In particular, our data correspond to the assessment conducted in the Spanish region of Aragón in 2010. Our methodological strategy consists of the sequential application of two methods: propensity score matching and hierarchical linear models. The sensitivity of our estimates to unobserved heterogeneity is also tested. Our results point to a slight advantage of the private school management model in promoting students’ scientific abilities and foreign language (English) skills.
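The sequential strategy described above (propensity score matching followed by a hierarchical linear model estimated on the matched sample) can be illustrated with a minimal sketch. The snippet below uses synthetic data; the logit propensity score, the 1-to-1 nearest-neighbour matching, the covariates, and the random-intercept specification are assumptions made for the example and do not reproduce the paper's exact specification.

# Minimal sketch of the two-step strategy: (1) estimate a propensity score for
# attending a publicly funded private school and match treated to untreated
# pupils, (2) fit a hierarchical (random-intercept) linear model on the
# matched sample. All variable names and the synthetic data are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# --- Illustrative pupil-level data (hypothetical variables) ---
n = 2000
df = pd.DataFrame({
    "school_id": rng.integers(0, 80, n),        # clustering unit (school)
    "ses": rng.normal(0, 1, n),                 # socio-economic status index
    "parental_educ": rng.integers(0, 4, n),     # parental education level
    "female": rng.integers(0, 2, n),
})
# Treatment: enrolment in a publicly funded private school
logit = -0.5 + 0.8 * df["ses"] + 0.3 * df["parental_educ"]
df["private"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
# Outcome: standardized test score
df["score"] = 500 + 5 * df["private"] + 20 * df["ses"] + rng.normal(0, 30, n)

covariates = ["ses", "parental_educ", "female"]

# --- Step 1: propensity score matching (1-to-1 nearest neighbour on the PS) ---
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["private"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["private"] == 1]
control = df[df["private"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# --- Step 2: hierarchical linear model on the matched sample ---
# The random intercept for schools captures the nesting of pupils within schools.
hlm = smf.mixedlm("score ~ private + ses + parental_educ + female",
                  data=matched, groups=matched["school_id"]).fit()
print(hlm.summary())

In this sketch the coefficient on the treatment indicator in the mixed model plays the role of the management-model effect; a sensitivity analysis for unobserved heterogeneity (e.g., Rosenbaum bounds) would be applied to the matched sample as an additional step.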

Keywords

School choice · Propensity score matching · Hierarchical linear models · Unobservable variables bias · Science and foreign language (English) skills · Primary schools

JEL Classification

I21 · I29


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Applied Economics, University of Zaragoza, Zaragoza, Spain
  2. Department of Economic Analysis, University of Zaragoza, Zaragoza, Spain
  3. Department of Applied Economics, University of Valencia, Valencia, Spain
