Part of the book series: Kluwer International Handbooks of Education (volume 9)

Abstract

In the simplest randomized field trial (RFT), individuals are randomly assigned to one of two or more groups, and each group is given a different educational intervention intended to improve children's achievement. The groups so composed do not differ systematically; roughly speaking, they are equivalent.
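
To make the design concrete, here is a minimal illustrative sketch (not from the chapter) of such a random assignment in Python; the function name, student identifiers, and group labels are all hypothetical.

    import random

    def randomize(units, groups=("treatment", "control"), seed=None):
        """Randomly assign each unit to one of the groups.

        Because assignment ignores every unit's characteristics, the
        resulting groups do not differ systematically (in expectation).
        """
        rng = random.Random(seed)
        shuffled = list(units)
        rng.shuffle(shuffled)
        # Deal the shuffled units out round-robin so group sizes differ
        # by at most one.
        return {group: shuffled[i::len(groups)] for i, group in enumerate(groups)}

    # Hypothetical roster of eight students assigned to two conditions.
    students = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]
    print(randomize(students, seed=7))

With many units, chance imbalances on any baseline characteristic shrink, which is the sense in which the randomized groups are roughly equivalent.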

This paper is abbreviated and modified from Boruch (1998) and depends heavily on Boruch (1997). The Mosteller and Boruch (2001) book contains papers by other experts on specific aspects of randomized trials.

References

  • Barnett, W.S. (1985). Benefit-cost analysis of the Perry Preschool program and its long-term effects. Educational Evaluation and Policy Analysis, 7, 333–342.

  • Bloom, H.S. (1990). Back to work: Testing reemployment services for displaced workers. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research.

  • Boruch, R.F. (1994). The future of controlled experiments: A briefing. Evaluation Practice, 15, 265–274.

  • Boruch, R.F. (1997). Randomized controlled experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage.

  • Boruch, R.F. (1998). Randomized controlled experiments for evaluation and planning. In L. Bickman & D. Rog (Eds.), Handbook of applied social research methods (pp. 161–191). Thousand Oaks, CA: Sage.

  • Boruch, R.F., & Foley, E. (2000). The honestly experimental society: Sites and other entities as the units of allocation and analysis in randomized experiments. In L. Bickman (Ed.), Validity and social experimentation: Donald T. Campbell’s legacy (pp. 193–238). Thousand Oaks, CA: Sage.

  • Burghardt, J., & Gordon, A. (1990). More jobs and higher pay: How an integrated program compares with traditional programs. New York: Rockefeller Foundation.

  • Campbell, D.T., & Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

  • Chalmers, T.C., Smith, H., Blackburn, B., Silverman, B., Schroeder, B., Reitman, D., & Ambroz, A. (1981). A method for assessing the quality of a randomized controlled trial. Controlled Clinical Trials, 2(1), 31–50.

  • Cochran, W.G. (1983). Planning and analysis of observational studies (L.E. Moses & F. Mosteller, Eds.). New York: John Wiley.

  • Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

  • Cordray, D.S., & Fischer, R.L. (1994). Synthesizing evaluation findings. In J.S. Wholey, H.H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 198–231). San Francisco: Jossey-Bass.

  • Cottingham, P.H. (1991). Unexpected lessons: Evaluation of job-training programs for single mothers. In R.S. Turpin & J.M. Sinacore (Eds.), Multisite evaluations. New Directions for Program Evaluation, 50, 59–70.

  • Crain, R.L., Heebner, A.L., & Si, Y. (1992). The effectiveness of New York City’s career magnet schools: An evaluation of ninth grade performance using an experimental design. Berkeley, CA: National Center for Research in Vocational Education.

  • Dennis, M.L. (1988). Implementing randomized field experiments: An analysis of criminal and civil justice research. Unpublished Ph.D. dissertation, Northwestern University, Department of Psychology.

  • Doolittle, F., & Traeger, L. (1990). Implementing the National JTPA Study. New York: Manpower Demonstration Research Corporation.

  • Donner, A., & Klar, N. (2000). Design and analysis of cluster randomization trials in health research. New York: Oxford University Press.

  • Dynarski, M., Gleason, P., Rangarajan, A., & Wood, R. (1995). Impacts of dropout prevention programs. Princeton, NJ: Mathematica Policy Research.

  • Ellickson, P.L., & Bell, R.M. (1990). Drug prevention in junior high: A multi-site longitudinal test. Science, 247, 1299–1306.

  • Fantuzzo, J.F., Jurecic, L., Stovall, A., Hightower, A.D., Goins, C., & Schachtel, K.A. (1988). Effects of adult and peer social initiations on the social behavior of withdrawn, maltreated preschool children. Journal of Consulting and Clinical Psychology, 56(1), 34–39.

  • Farrington, D.P. (1983). Randomized experiments on crime and justice. Crime and Justice: Annual Review of Research, 4, 257–308.

  • Federal Judicial Center. (1983). Social experimentation and the law. Washington, DC: Author.

  • Finn, J.D., & Achilles, C.M. (1990). Answers and questions about class size: A statewide experiment. American Educational Research Journal, 27, 557–576.

  • Friedman, L.M., Furberg, C.D., & DeMets, D.L. (1985). Fundamentals of clinical trials. Boston: John Wright.

  • Fuchs, D., Fuchs, L.S., Mathes, P.G., & Simmons, D.C. (1997). Peer-assisted learning strategies: Making classrooms more responsive to diversity. American Educational Research Journal, 34(1), 174–206.

  • Gramlich, E.M. (1990). A guide to benefit-cost analysis. Englewood Cliffs, NJ: Prentice Hall.

  • Granger, R.C., & Cytron, R. (1999). Teenage parent programs. Evaluation Review, 23(2), 107–145.

  • Gueron, J.M., & Pauly, E. (1991). From welfare to work. New York: Russell Sage Foundation.

  • Hedrick, T.E., Bickman, L., & Rog, D. (1993). Applied research design: A practical guide. Newbury Park, CA: Sage.

  • Howell, W.G., Wolf, P.J., Peterson, P.E., & Campbell, D.E. (2001). Vouchers in New York, Dayton, and D.C. Education Matters, 1(2), 46–54.

  • Julnes, G., & Mohr, L.B. (1989). Analysis of no difference findings in evaluation research. Evaluation Review, 13, 628–655.

  • Kato, L.Y., & Riccio, J.A. (2001). Building new partnerships for employment: Collaboration among agencies and public housing residents in the Jobs Plus demonstration. New York: Manpower Demonstration Research Corporation.

  • Light, R.J., & Pillemer, D.B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.

  • Lipsey, M.W. (1990). Design sensitivity: Statistical power for experimental design. Newbury Park, CA: Sage.

  • Lipsey, M.W. (1992). Juvenile delinquency treatment: A meta-analytic inquiry into the variability of effects. In T.D. Cook, H.M. Cooper, D.S. Cordray, H. Hartmann, L.V. Hedges, R.J. Light, T. Louis, & F. Mosteller (Eds.), Meta-analysis for explanation: A casebook (pp. 83–127). New York: Russell Sage Foundation.

  • Lipsey, M.W. (1993). Theory as method: Small theories of treatments. In L.B. Sechrest & A.G. Scott (Eds.), Understanding causes and generalizing about them (pp. 5–38). San Francisco: Jossey-Bass.

  • Mosteller, F. (1986). Errors: Nonsampling errors. In W.H. Kruskal & J.M. Tanur (Eds.), International Encyclopedia of Statistics (Vol. 1, pp. 208–229). New York: Free Press.

  • Mosteller, F. (1995). The Tennessee study of class size in the early school grades. The Future of Children, 5, 113–127.

  • Mosteller, F., & Boruch, R.F. (2001). Evidence matters: Randomized trials in education research. Washington, DC: Brookings Institution Press.

  • Mosteller, F., Light, R.J., & Sachs, J. (1995). Sustained inquiry in education: Lessons from ability grouping and class size. Cambridge, MA: Harvard University, Center for Evaluation of the Program on Initiatives for Children.

  • Murray, D.M. (1998). Design and analysis of group-randomized trials. New York: Oxford University Press.

  • Myers, D., & Schirm, A. (1999). The impacts of Upward Bound: Final report of phase 1 of the national evaluation. Princeton, NJ: Mathematica Policy Research.

  • Myers, D., Peterson, P., Mayer, D., Chou, J., & Howell, W. (2000). School choice in New York City after two years: An evaluation of the School Choice Scholarships Program: Interim report. Princeton, NJ: Mathematica Policy Research.

  • Nave, B., Miech, E.J., & Mosteller, F. (2000). The role of field trials in evaluating school practices: A rare design. In D.L. Stufflebeam, G.F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 145–161). Boston, MA: Kluwer Academic Publishers.

  • Petrosino, A., Boruch, R., Rounding, C., McDonald, S., & Chalmers, I. (2000). The Campbell Collaboration Social, Psychological, Educational, and Criminological Trials Registry (C2-SPECTR) to facilitate the preparation and maintenance of systematic reviews of social and educational interventions. Evaluation and Research in Education (UK), 14(3–4), 206–219.

  • Petrosino, A.J., Turpin-Petrosino, C., & Finckenauer, J.O. (2000). Well-meaning programs can have harmful effects: Lessons from “Scared Straight” experiments. Crime and Delinquency, 46(3), 354–379.

  • Riecken, H.W., Boruch, R.F., Campbell, D.T., Caplan, N., Glennan, T.K., Pratt, J.W., Rees, A., & Williams, W.W. (1974). Social experimentation: A method for planning and evaluating social programs. New York: Academic Press.

  • Rosenbaum, P.R. (1995). Observational studies. New York: Springer-Verlag.

  • Social Research and Demonstration Corporation. (2001, Spring). Découvrir les approches efficaces: L’expérimentation et la recherche en politique sociale à la SRSA [Discovering effective approaches: Experimentation and social policy research at SRDC], 1(2).

  • St. Pierre, R., Swartz, J., Murray, S., Deck, D., & Nickel, P. (1995). National evaluation of Even Start Family Literacy Program (USDE Contract LC 90062001). Cambridge, MA: Abt Associates.

  • St. Pierre, R., et al. (1998). The comprehensive child development experiment. Cambridge, MA: Abt Associates.

  • Schuerman, J.R., Rzepnicki, T.L., & Littell, J. (1994). Putting families first: An experiment in family preservation. New York: Aldine de Gruyter.

  • Schweinhart, L.J., Barnes, H.V., & Weikart, D.P. (1993). Significant benefits: The High/Scope Perry Preschool study through age 27. Ypsilanti, MI: High/Scope Press.

  • Sieber, J.E. (1992). Planning ethically responsible research: A guide for students and institutional review boards. Newbury Park, CA: Sage.

  • Standards of Reporting Trials Group. (1994). A proposal for structured reporting of randomized controlled trials. Journal of the American Medical Association, 272, 1926–1931.

  • Stanley, B., & Sieber, J.E. (Eds.). (1992). Social research on children and adolescents: Ethical issues. Newbury Park, CA: Sage.

  • Toroyan, T., Roberts, I., & Oakley, A. (2000). Randomisation and resource allocation: A missed opportunity for evaluating health care and social interventions. Journal of Medical Ethics, 26, 319–322.

  • U.S. General Accounting Office. (1992). Cross-design synthesis: A new strategy for medical effectiveness research (Publication No. GAO/PEMD-92-18). Washington, DC: Government Printing Office.

  • U.S. General Accounting Office. (1994). Breast conservation versus mastectomy: Patient survival data in daily medical practice and in randomized studies (Publication No. PEMD-95-9). Washington, DC: Government Printing Office.

  • Yeaton, W.H., & Sechrest, L. (1986). Use and misuse of no difference findings in eliminating threats to validity. Evaluation Review, 10, 836–852.

  • Yeaton, W.H., & Sechrest, L. (1987). No difference research. New Directions for Program Evaluation, 34, 67–82.

Copyright information

© 2003 Kluwer Academic Publishers

Cite this chapter

Boruch, R.F. (2003). Randomized Field Trials in Education. In: Kellaghan, T., Stufflebeam, D.L. (eds) International Handbook of Educational Evaluation. Kluwer International Handbooks of Education, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0309-4_9

  • DOI: https://doi.org/10.1007/978-94-010-0309-4_9

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-0849-8

  • Online ISBN: 978-94-010-0309-4
