
Merging the accountability and scientific research requirements of the No Child Left Behind Act: using cohort control groups


Abstract

This article shows how assessment data such as those mandated by the No Child Left Behind Act can be used to examine the effectiveness of educational interventions and meet the Act’s mandate for “scientifically based research.” Drawing on the classic research design literature, two internally valid analyses are proposed: a cohort control group design and a cohort control group design with historical comparisons. The logic of the “grounded theory of generalized causal inference” is used to develop externally valid results. The procedure is illustrated with published data on the Reading Mastery curriculum. The empirical results are comparable to those obtained in meta-analyses of the curriculum, with effect sizes surpassing the usual criterion for educational importance. Implications for school officials and policy makers are discussed.
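In a cohort control group design, successive cohorts of students passing through the same grade in the same school serve as comparison groups, so assessment data already collected for accountability can also yield an estimate of an intervention’s effect size. The sketch below illustrates the basic arithmetic of such a comparison; it is a minimal illustration under assumed data, not the article’s analysis, and the scores, cohort sizes, and the 0.25 benchmark often cited for educational importance are stand-ins.

```python
# Minimal sketch of a cohort control group comparison: the cohort taught
# with the new curriculum is compared with the preceding year's cohort at
# the same grade and school. All data below are hypothetical.
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(treated), len(control)
    s1, s2 = stdev(treated), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical first-grade reading assessment scores.
intervention_cohort = [52, 58, 61, 55, 63, 59, 57, 60]
control_cohort = [48, 50, 53, 47, 55, 51, 49, 52]

d = cohens_d(intervention_cohort, control_cohort)
# 0.25 is the threshold commonly cited for educational importance.
print(f"effect size d = {d:.2f}, educationally important: {d >= 0.25}")
```

The design with historical comparisons would extend the same arithmetic by adding earlier cohorts as additional baselines, which helps rule out maturation and secular trends as alternative explanations.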



Author information


Corresponding author

Correspondence to Jean Stockard.


About this article

Cite this article

Stockard, J. Merging the accountability and scientific research requirements of the No Child Left Behind Act: using cohort control groups. Qual Quant 47, 2225–2257 (2013). https://doi.org/10.1007/s11135-011-9652-5

