Abstract
In the simplest randomized field trial (RFT), individuals are randomly assigned to one of two or more groups, and each group is given a different educational intervention that purports to improve children's achievement. The groups so composed do not differ systematically; roughly speaking, they are equivalent.
This paper is abbreviated and modified from Boruch (1998) and depends heavily on Boruch (1997). The Mosteller and Boruch (2001) book contains papers by other experts on specific aspects of randomized trials.
References
Barnett, W.S. (1985). Benefit-cost analysis of the Perry Preschool program and its long-term effects. Educational Evaluation and Policy Analysis, 7, 333–342.
Bloom, H.S. (1990). Back to work: Testing reemployment services for displaced workers. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research.
Boruch, R.F. (1994). The future of controlled experiments: A briefing. Evaluation Practice, 15, 265–274.
Boruch, R.F. (1997). Randomized controlled experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage.
Boruch, R.F. (1998). Randomized controlled experiments for evaluation and planning. In L. Bickman & D. Rog (Eds.), Handbook of applied social research methods (pp. 161–191). Thousand Oaks, CA: Sage.
Boruch, R.F., & Foley, E. (2000). The honestly experimental society: Sites and other entities as the units of allocation and analysis in randomized experiments. In L. Bickman (Ed.), Validity and social experimentation: Donald T. Campbell’s legacy (pp. 193–238). Thousand Oaks, CA: Sage.
Burghardt, J., & Gordon, A. (1990). More jobs and higher pay: How an integrated program compares with traditional programs. New York: Rockefeller Foundation.
Campbell, D.T., & Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Chalmers, T.C., Smith, H., Blackburn, B., Silverman, B., Schroeder, B., Reitman, D., & Ambroz, A. (1981). A method for assessing the quality of a randomized controlled trial. Controlled Clinical Trials, 2(1), 31–50.
Cochran, W.G. (1983). Planning and analysis of observational studies (L.E. Moses & F. Mosteller, Eds.). New York: John Wiley.
Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Cordray, D.S., & Fischer, R.L. (1994). Synthesizing evaluation findings. In J.S. Wholey, H.H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 198–231). San Francisco: Jossey-Bass.
Cottingham, P.H. (1991). Unexpected lessons: Evaluation of job-training programs for single mothers. In R.S. Turpin & J.M. Sinacore (Eds.), Multisite evaluations. New Directions for Program Evaluation, 50, 59–70.
Crain, R.L., Heebner, A.L., & Si, Y. (1992). The effectiveness of New York City’s career magnet schools: An evaluation of ninth grade performance using an experimental design. Berkeley, CA: National Center for Research in Vocational Education.
Dennis, M.L. (1988). Implementing randomized field experiments: An analysis of criminal and civil justice research. Unpublished Ph.D. dissertation, Northwestern University, Department of Psychology.
Doolittle, F., & Traeger, L. (1990). Implementing the national JTPA study. New York: Manpower Demonstration Research Corporation.
Donner, A., & Klar, N. (2000). Design and analysis of cluster randomization trials in health research. New York: Oxford University Press.
Dynarski, M., Gleason, P., Rangarajan, A., & Wood, R. (1995). Impacts of dropout prevention programs. Princeton, NJ: Mathematica Policy Research.
Ellickson, P.L., & Bell, R.M. (1990). Drug prevention in junior high: A multi-site longitudinal test. Science, 247, 1299–1306.
Fantuzzo, J.F., Jurecic, L., Stovall, A., Hightower, A.D., Goins, C., & Schachtel, K.A. (1988). Effects of adult and peer social initiations on the social behavior of withdrawn, maltreated preschool children. Journal of Consulting and Clinical Psychology, 56(1), 34–39.
Farrington, D.P. (1983). Randomized experiments on crime and justice. Crime and Justice: Annual Review of Research, 4, 257–308.
Federal Judicial Center. (1983). Social experimentation and the law. Washington, DC: Author.
Finn, J.D., & Achilles, C.M. (1990). Answers and questions about class size: A statewide experiment. American Educational Research Journal, 27, 557–576.
Friedman, L.M., Furberg, C.D., & DeMets, D.L. (1985). Fundamentals of clinical trials. Boston: John Wright.
Fuchs, D., Fuchs, L.S., Mathes, P.G., & Simmons, D.C. (1997). Peer-assisted learning strategies: Making classrooms more responsive to diversity. American Educational Research Journal, 34(1), 174–206.
Gramlich, E.M. (1990). Guide to cost benefit analysis. Englewood Cliffs, New Jersey: Prentice Hall.
Granger, R.C., & Cytron, R. (1999). Teenage parent programs. Evaluation Review, 23(2), 107–145.
Gueron, J.M., & Pauly, E. (1991). From welfare to work. New York: Russell Sage Foundation.
Hedrick, T.E., Bickman, L., & Rog, D. (1993). Applied research design: A practical guide. Newbury Park, CA: Sage.
Howell, W.G., Wolf, P.J., Peterson, P.E., & Campbell, D.E. (2001). Vouchers in New York, Dayton, and D.C. Education Matters, 1(2), 46–54.
Julnes, G., & Mohr, L.B. (1989). Analysis of no difference findings in evaluation research. Evaluation Review, 13, 628–655.
Kato, L.Y., & Riccio, J.A. (2001). Building new partnerships for employment: Collaboration among agencies and public housing residents in the Jobs Plus demonstration. New York: Manpower Demonstration Research Corporation.
Light, R.J., & Pillemer, D.B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Lipsey, M.W. (1990). Design sensitivity: Statistical power for experimental design. Newbury Park, CA: Sage.
Lipsey, M.W. (1992). Juvenile delinquency treatment: A meta-analysis inquiry into the variability of effects. In T.D. Cook, H.M. Cooper, D.S. Cordray, H. Hartmann, L.V. Hedges, R.J. Light, T. Louis, & F. Mosteller, (Eds.), Meta-analysis for explanation: A casebook (pp. 83–127). New York: Russell Sage Foundation.
Lipsey, M.W. (1993). Theory as method: Small theories of treatments. In L.B. Sechrest & Scott (Eds.), Understanding causes and generalizing about them (pp. 5–38). San Francisco: Jossey-Bass.
Mosteller, F. (1986). Errors: Nonsampling errors. In W.H. Kruskal & J.M. Tanur (Eds.), International encyclopedia of statistics (Vol. 1, pp. 208–229). New York: Free Press.
Mosteller, F. (1995). The Tennessee study of class size in the early school grades. The Future of Children, 5, 113–127.
Mosteller, F., & Boruch, R.F. (2001). Evidence matters: Randomized trials in education research. Washington, DC: Brookings Institution Press.
Mosteller, F., Light, R.J., & Sachs, J. (1995). Sustained inquiry in education: Lessons from ability grouping and class size. Cambridge, MA: Harvard University, Center for Evaluation of the Program on Initiatives for Children.
Murray, D.M. (1998). Design and analysis of group randomized trials. New York: Oxford University Press.
Myers, D., & Schirm, A. (1999). The impacts of Upward Bound: Final report of phase 1 of the national evaluation. Princeton, NJ: Mathematica Policy Research.
Myers, D., Peterson, P., Mayer, D., Chou, J., & Howell, W. (2000). School choice in New York City after two years: An evaluation of the School Choice Scholarships Program: Interim report. Princeton, NJ: Mathematica Policy Research.
Nave, B., Miech, E.J., & Mosteller, F. (2000). The role of field trials in evaluating school practices: A rare design. In D.L. Stufflebeam, G.F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed.) (pp. 145–161). Boston, MA: Kluwer Academic Publishers.
Petrosino, A., Boruch, R., Rounding, C., McDonald, S., & Chalmers, I. (2000). The Campbell Collaboration Social, Psychological, Educational, and Criminological Trials Registry (C2-SPECTR) to facilitate the preparation and maintenance of systematic reviews of social and educational interventions. Evaluation and Research in Education (UK), 14(3-4), 206–219.
Petrosino, A.J., Turpin-Petrosino, C., & Finckenauer, J.O. (2000). Well-meaning programs can have harmful effects! Lessons from “Scared Straight” experiments. Crime and Delinquency, 42(3), 354–379.
Riecken, H.W., Boruch, R.F., Campbell, D.T., Caplan, N., Glennan, T.K., Pratt, J.W., Rees, A., & Williams, W.W. (1974). Social experimentation: A method for planning and evaluating social programs. New York: Academic Press.
Rosenbaum, P.R. (1995). Observational studies. New York: Springer-Verlag.
Social Research and Demonstration Corporation. (Spring, 2001). Discovering effective approaches: Experimentation and social policy research at SRDC, 1(2).
St. Pierre, R., Swartz, J., Murray, S., Deck, D., & Nickel, P. (1995). National evaluation of Even Start Family Literacy Program (USDE Contract LC 90062001). Cambridge, MA: Abt Associates.
St. Pierre, R., et al. (1998). The comprehensive child development experiment. Cambridge, MA: Abt Associates.
Schuerman, J.R., Rzepnicki, T.L., & Littell, J. (1994). Putting families first: An experiment in family preservation. New York: Aldine de Gruyter.
Schweinhart, L.J., Barnes, H.V., & Weikart, D.P. (1993). Significant benefits: The High/Scope Perry Preschool study through age 27. Ypsilanti, MI: High/Scope Press.
Sieber, J.E. (1992). Planning ethically responsible research: A guide for students and institutional review boards. Newbury Park, CA: Sage.
Standards of Reporting Trials Group. (1994). A proposal for structured reporting of randomized controlled trials. Journal of the American Medical Association, 272, 1926–1931.
Stanley, B., & Sieber, J.E. (Eds.). (1992). Social research on children and adolescents: Ethical issues. Newbury Park, CA: Sage.
Toroyan, T., Roberts, I., & Oakley, A. (2000). Randomisation and resource allocation: A missed opportunity for evaluating health-care and social interventions. Journal of Medical Ethics, 26, 319–322.
U.S. General Accounting Office. (1992). Cross-design synthesis: A new strategy for medical effectiveness research (Publication No. GAO/PEMD-92-18). Washington, DC: Government Printing Office.
U.S. General Accounting Office. (1994). Breast conservation versus mastectomy: Patient survival data in daily medical practice and in randomized studies (Publication No. PEMD-95-9). Washington, DC: Government Printing Office.
Yeaton, W.H., & Sechrest, L. (1986). Use and misuse of no difference findings in eliminating threats to validity. Evaluation Review, 10, 836–852.
Yeaton, W.H., & Sechrest, L. (1987). No difference research. New Directions for Program Evaluation, 34, 67–82.
© 2003 Kluwer Academic Publishers
Boruch, R.F. (2003). Randomized Field Trials in Education. In: Kellaghan, T., Stufflebeam, D.L. (eds) International Handbook of Educational Evaluation. Kluwer International Handbooks of Education, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0309-4_9
Print ISBN: 978-1-4020-0849-8
Online ISBN: 978-94-010-0309-4