Quis custodiet ipsos custodes? How to detect and correct teacher cheating in Italian student data

  • Sergio Longobardi
  • Patrizia Falzetti
  • Margherita Maria Pagliuca
Original Paper

Abstract

The growing diffusion of standardized assessments of students’ competences has been accompanied by an increasing need to make reliable data available to all stakeholders of the educational system (policy makers, teachers, researchers, families and students). In this light, we propose a multistep approach to detect and correct teacher cheating, a phenomenon that degrades the quality of the student data provided by the Italian Institute for the Educational Evaluation of Instruction and Training (INVALSI). Our method integrates the “mechanistic” logic of fuzzy clustering with a statistical model-based approach, and it aims to improve the detection of cheating and to correct test scores at both the class and the student level. The results show a normalization of the scores and a stronger correction of the data for the Southern regions, where the propensity to cheat appears to be highest.
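The paper does not include code; purely as an illustration of the fuzzy-clustering ingredient described above, the Python sketch below clusters hypothetical class-level anomaly indicators with a plain NumPy fuzzy c-means and reads off each class’s degree of membership in a “suspicious” cluster. The indicator names, the sample values and the two-cluster setup are assumptions made here for illustration, not the authors’ specification.

import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain NumPy fuzzy c-means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=X.shape[0])   # initial memberships (n x c)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # membership-weighted centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / dist ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)             # each row sums to one
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical class-level indicators: mean score, within-class answer homogeneity,
# share of missing answers (all rescaled to [0, 1]).
indicators = np.array([
    [0.62, 0.30, 0.05],
    [0.95, 0.85, 0.01],   # very high and very homogeneous: a candidate cheating class
    [0.58, 0.28, 0.07],
    [0.91, 0.80, 0.02],
])

centers, U = fuzzy_cmeans(indicators)
suspicious = int(np.argmax(centers[:, 0]))                    # cluster with the highest mean score
print("Membership in the 'suspicious' cluster:", np.round(U[:, suspicious], 2))

In the authors’ approach, such fuzzy membership degrees are combined with a statistical model-based step (a multilevel model, per the keywords) to correct test scores at the class and student level; the sketch above covers only the detection side.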

Keywords

Data quality · Cheating · Student assessment · Multilevel model

Notes

Acknowledgments

We are extremely grateful to Paolo Sestito (Bank of Italy) for his conceptual and theoretical guidance. We are indebted to Roberto Ricci (INVALSI), Giovanni De Luca (University of Naples “Parthenope”) and Federica Gioia (University of Naples “Parthenope”) for helpful comments and discussions. The authors also thank the editor and the two anonymous referees for their valuable suggestions.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Quantitative and Business Studies, University of Naples “Parthenope”, Naples, Italy
  2. Italian Institute for the Educational Evaluation of Instruction and Training (INVALSI), Rome, Italy
