Place Randomized Trials

  • Robert Boruch
  • David Weisburd
  • Richard Berk


An important and emerging vehicle for generating dependable evidence about what works, and what does not, falls under the rubric of place randomized trials. In criminology, for instance, such a trial might involve identifying a sample of high-crime hot spots and then randomly allocating the hot spots, the places, to different police or community interventions. The random assignment assures a fair comparison among the interventions and, when the analysis is done correctly, a legitimate statistical statement of one's confidence in the resulting estimates of their effectiveness. See, for example, Weisburd et al. (2008) for illustrations of such trials, and other references in what follows.
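As an illustration only, the core allocation step of such a design can be sketched in a few lines of code. The place identifiers, the two-condition split, and the function name below are hypothetical, not drawn from any of the trials cited here:

```python
import random

def randomize_places(place_ids, conditions=("treatment", "control"), seed=None):
    """Randomly allocate places (e.g., crime hot spots) to conditions.

    Shuffles the list of places, then assigns conditions in rotation so
    group sizes are as balanced as possible. A fixed seed makes the
    allocation reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = list(place_ids)
    rng.shuffle(shuffled)
    return {place: conditions[i % len(conditions)]
            for i, place in enumerate(shuffled)}

# Ten hypothetical hot spots allocated to two conditions
assignment = randomize_places([f"hotspot_{i}" for i in range(10)], seed=42)
```

Real trials typically refine this basic step, for example by pair-matching similar places before randomizing within pairs (Greevy et al. 2004; Imai et al. 2009) or by staggering rollout as in stepped wedge designs (Brown and Lilford 2006).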




References

  1. Berk R (2005) Randomized experiments as the bronze standard. J Exp Criminol 1(4):417–433
  2. Berk R, Boruch R, Chambers D, Rossi P, Witte A (1985) Social policy experimentation: a position paper. Eval Rev 9(4):387–430
  3. Bertrand M, Duflo E, Mullainathan S (2002) How much should we trust differences-in-differences estimates? Working Paper 8841. National Bureau of Economic Research, Cambridge, MA
  4. Bloom HS (2005) Learning more from social experiments: evolving analytic approaches. Russell Sage Foundation, New York
  5. Bloom HS, Riccio JA (2005) Using place random assignment and comparative interrupted time-series analysis to evaluate the Jobs-Plus employment program for public housing residents. Ann Am Acad Pol Soc Sci 599:19–51
  6. Boruch RF (1997) Randomized experiments for planning and evaluation. Sage Publications, Thousand Oaks, CA
  7. Boruch RF (2007) Encouraging the flight of error: ethical standards, evidence standards, and randomized trials. New Dir Eval 133:55–73
  8. Boruch RF, May H, Turner H, Lavenberg J, Petrosino A, de Moya D, Grimshaw J, Foley E (2004) Estimating the effects of interventions that are deployed in many places. Am Behav Sci 47(5):575–608
  9. Botvin G, Griffin K, Diaz T, Scheier L, Williams C, Epstein J (2000) Preventing illicit drug use in adolescents. Addict Behav 25(5):769–774
  10. Braga A (2001) The effects of hot spots policing on crime. In: Farrington DF, Welsh BC (eds) What works in preventing crime? Special issue. Ann Am Acad Pol Soc Sci 578:104–125
  11. Brown CH, Wyman PA, Guo J, Pena J (2006) Dynamic wait-listed designs for randomized trials: new designs for prevention of youth suicide. Clin Trials 3:259–271
  12. Brown C, Lilford R (2006) The stepped wedge trial design: a systematic review. BMC Med Res Methodol 6:54
  13. Bryk AS, Raudenbush SW (2002) Hierarchical linear models. Sage, Thousand Oaks, CA
  14. Campbell MK, Elbourne DR, Altman DG (2004) CONSORT statement: extension to cluster randomized trials. BMJ 328:702–708
  15. Clarke RV, Cornish D (1972) The controlled trial in institutional research: paradigm or pitfall for penal evaluators. Her Majesty's Stationery Office, London
  16. Cook T, Shadish W, Wong V (2008) Three conditions under which experiments and observational studies produce comparable causal estimates: new findings from within-study comparisons. J Policy Anal Manage 27(4):724–750
  17. Davis R, Taylor B (1997) A proactive response to family violence: the results of a randomized experiment. Criminology 35(2):307–333
  18. De Allegri M et al (2008) Step wedge cluster randomized community-based trials: an application to the study of the impact of community health insurance. Health Res Policy Syst 6:10
  19. Donner A, Klar N (2000) Design and analysis of cluster randomization trials in health research. Arnold, London
  20. Evans I, Thornton H, Chalmers I (2006) Testing treatments: better research for healthcare. British Library, London
  21. Edgington E, Onghena P (2007) Randomization tests, 4th edn. Chapman and Hall/CRC, New York
  22. Farrington DP (2003) Methodological standards for evaluation research. Ann Am Acad Pol Soc Sci 587(1):49–68
  23. Flay BR, Collins LM (2005) Historical review of school-based randomized trials for evaluating problem behavior prevention programs. Ann Am Acad Pol Soc Sci 599:115–146
  24. Freedman DA (2006) Statistical models for causation: what inferential leverage do they provide? Eval Rev 30(5):691–713
  25. Freedman DA (2008a) On regression adjustments to experimental data. Adv Appl Math 40:180–193
  26. Freedman DA (2008b) Randomization does not justify logistic regression. Stat Sci 23:237–249
  27. Garet M et al (2008) The impact of two professional development interventions on early reading instruction and achievement. Institute of Education Sciences, US Department of Education and American Institutes for Research, Washington, DC
  28. Gill C (2009) Reporting in criminological journals. Report for seminar on advanced topics in experimental design. Graduate School of Education and Criminology Department, University of Pennsylvania, Philadelphia, PA
  29. Graham K, Osgood D, Zibrowski E, Purcell J, Gliksman L, Leonard K, Pernanen K, Saltz R, Toomey T (2004) The effect of the Safer Bars programme on physical aggression in bars: results of a randomized trial. Drug Alcohol Rev 23:31–41
  30. Greevy R, Lu B, Silber JH, Rosenbaum P (2004) Optimal multivariate matching before randomization. Biostatistics 5(2):263–275
  31. Grimshaw J, Eccles M, Campbell M, Elbourne D (2005) Cluster randomized trials of professional and organizational behavior change interventions in health settings. Ann Am Acad Pol Soc Sci 599:71–93
  32. Gulemetova-Swan M (2009) Evaluating the impact of conditional cash transfer programs on adolescent decisions about marriage and fertility: the case of Oportunidades. PhD dissertation, Department of Economics, University of Pennsylvania
  33. Gueron JM (1985) The demonstration of state work/welfare initiatives. In: Boruch RF, Wothke W (eds) Randomization and field experimentation. New directions for program evaluation, no 28. Jossey-Bass, San Francisco, pp 5–14
  34. Heinsman D, Shadish W (1996) Assignment methods in experimentation: when do nonrandomized experiments approximate answers from randomized experiments? Psychol Methods 1(2):154–169
  35. Hussey M, Hughes J (2006) Design and analysis of stepped wedge cluster randomized designs. Contemp Clin Trials. doi:10.1016/j.cct.2006.05.007
  36. Imai K, King G, Nall C (2009) The essential role of pair matching in cluster randomized experiments, with application to the Mexican universal health insurance evaluation. Stat Sci 24(1):29–53
  37. Kelling G, Pate A, Dieckmann D, Brown C (1974) The Kansas City preventive patrol experiment. The Police Foundation, Washington, DC
  38. Kempthorne O (1952) The design and analysis of experiments. Wiley, New York (reprinted 1983, Robert E. Krieger Publishers, Malabar, FL)
  39. Konstantopoulos S (2009) Incorporating cost in power analysis for three-level cluster-randomized designs. Eval Rev 33(4):335–357
  40. Kunz R, Oxman A (1998) The unpredictability paradox: review of the empirical comparisons of randomized and non-randomized clinical trials. BMJ 317:1185–1190
  41. Leviton LC, Horbar JD (2005) Cluster randomized trials for evaluation of strategies to promote evidence-based practice in perinatal and neonatal medicine. Ann Am Acad Pol Soc Sci 599:94–114
  42. Lippmann W (1963) The young criminals. In: Rossiter C, Lare J (eds) The essential Lippmann. Random House, New York (originally published 1933)
  43. Lipsey M, Wilson D (1993) The efficacy of psychological, educational, and behavioral treatment: confirmation from meta-analysis. Am Psychol 48:1181–1209
  44. Loesel F, Koferl P (1989) Evaluation research on correctional treatment in West Germany: a meta-analysis. In: Wegener H, Loesel F, Haisch J (eds) Criminal behavior and the justice system: psychological perspectives. Springer, New York, pp 334–355
  45. Mazerolle L, Price J, Roehl J (2000) Civil remedies and drug control: a randomized trial in Oakland, California. Eval Rev 24(2):212–241
  46. Merlino J (2009) The 21st century research and development center on cognition and science instruction. Award from the Institute of Education Sciences, US Department of Education. Award Number R305C080009. Author, Conshohocken, PA
  47. Moher D, Schulz KF, Altman DG, for the CONSORT Group (2001) The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Lancet 357:1191–1194
  48. Murray D (1998) Design and analysis of group-randomized trials. Oxford University Press, New York
  49. Neter J, Kutner M, Nachtsheim C, Wasserman W (2001) Applied linear statistical models. McGraw-Hill, New York
  50. Parker SW, Teruel GM (2005) Randomization and social program evaluation: the case of Progresa. Ann Am Acad Pol Soc Sci 599:199–219
  51. Perry AE, Johnson M (2008) Applying the Consolidated Standards of Reporting Trials (CONSORT) to studies of mental health for juvenile offenders: a research note. J Exp Criminol 4:165–185
  52. Perry AE, Weisburd D, Hewitt C (2009) Are criminologists reporting experiments in ways that allow us to assess them? Unpublished manuscript/report. Available from the authors: Center for Evidence-Based Crime Policy (CEBCP), George Mason University, VA
  53. Planning Committee on Protecting Student Records and Facilitating Education Research (2009) Protecting student records and facilitating education research. National Academy of Sciences, Washington, DC
  54. Porter AC, Blank RK, Smithson JL, Osthoff E (2005) Place randomized trials to test the effects on instructional practices of a mathematics/science professional development program for teachers. Ann Am Acad Pol Soc Sci 599:147–175
  55. Raudenbush S (1997) Statistical analysis and optimal design for cluster randomized trials. Psychol Methods 2(2):173–185
  56. Sabin JE, Mazor K, Meterko V, Goff SL, Platt R (2008) Comparing drug effectiveness at health plans: the ethics of cluster randomized trials. Hastings Cent Rep 38(5):39–48
  57. Schochet P (2009) Statistical power for random assignment evaluations of education programs. J Educ Behav Stat 34(2):238–266
  58. Schultz TP (2004) School subsidies for the poor: evaluating the Mexican Progresa poverty program. J Dev Econ 74:199–250
  59. Shadish WR, Clark MH, Steiner PM (2008) Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. J Am Stat Assoc 103(484):1334–1356
  60. Shepherd J (2003) Explaining feast or famine in randomized field experiments: medical science and criminology compared. Eval Rev 27:290–315
  61. Sherman LW, Weisburd D (1995) General deterrent effects of police patrol in crime "hot spots": a randomized controlled trial. Justice Q 12:625–648
  62. Sikkema KJ (2005) HIV prevention among women in low income housing developments: issues and intervention outcomes in a place randomized trial. Ann Am Acad Pol Soc Sci 599:52–70
  63. Slavin R (2006) Research and effectiveness. Education Week, 6 Oct 2006
  64. Small DS, Ten Have TT, Rosenbaum PR (2008) Randomization inference in a group-randomized trial of treatments for depression: covariate adjustment, noncompliance, and quantile effects. J Am Stat Assoc 103(481):271–279
  65. Spybrook J (2008) Are power analyses reported with adequate detail? J Res Educ Eff 1:215–235
  66. Taljaard M, Grimshaw J, Weijer C (2008) Ethical and policy issues in cluster randomized trials: proposal for research to the CIHR. University of Ottawa, Ottawa, Canada
  67. Taljaard M, Weijer C, Grimshaw J, Bell Brown J, Binik A, Boruch R, Brehaut J, Chaudry S, Eccles M, McRae A, Saginur R, Zwarenstein M, Donner A (2009) Ethical and policy issues in cluster randomized trials: rationale and design of a mixed methods research study. Trials 10:61
  68. Turner H, Boruch R, Petrosino A, Lavenberg J, de Moya D, Rothstein H (2003) Populating an international web-based randomized trials register in the social, behavioral, criminological, and education sciences. Ann Am Acad Pol Soc Sci 589:203–225
  69. Warburton AL, Shepherd JP (2000) Effectiveness of toughened glassware in terms of reducing injury in bars: a randomized controlled trial. Inj Prev 6:36–40
  70. Weisburd D (2003) Ethical practice and evaluation of interventions in crime and justice: the moral imperative for randomized trials. Eval Rev 27(3):336–354
  71. Weisburd D (2005) Hot spots policing experiments and criminal justice research: lessons from the field. Ann Am Acad Pol Soc Sci 599:220–245
  72. Weisburd D, Green L (1994) Defining the drug market: the case of the Jersey City DMA system. In: MacKenzie DL, Uchida CD (eds) Drugs and crime: evaluating public policy initiatives. Sage Publications, Thousand Oaks, CA
  73. Weisburd D, Lum C, Petrosino A (2001) Does research design affect study outcomes in criminal justice? Ann Am Acad Pol Soc Sci 578:50–70
  74. Weisburd D, Wycoff L, Ready J, Eck JE, Hinkle JC, Gajewski F (2006) Does crime just move around the corner? A controlled study of spatial displacement and diffusion of crime control benefits. Criminology 44(3):549–591
  75. Weisburd D, Morris N, Ready J (2008) Risk-focused policing at places: an experimental evaluation. Justice Q 25(1):163–200

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Robert Boruch (1)
  • David Weisburd (2, 3)
  • Richard Berk (4)

  1. Center for Research and Evaluation in Social Policy, University of Pennsylvania, Philadelphia, USA
  2. Administration of Justice, George Mason University, Manassas, USA
  3. Institute of Criminology, Hebrew University of Jerusalem, Jerusalem, Israel
  4. Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, USA
