
Lessons from the randomized trial evaluation of a new parent program: when the evaluators see the glass as half full, and the community sees the glass as half empty

Published in: Journal of Experimental Criminology

Abstract

Objectives

To disseminate lessons learned from implementing a randomized trial in a community setting so that other randomized trials can anticipate and prevent some of the challenges we encountered.

Methods

We discuss common challenges to implementing randomized trials and how the structure of our trial mitigated some of them, and we describe the unanticipated challenges we encountered and how we addressed them.

Results

While we set up our randomized trial in a way that avoided some of the “pitfalls” of trials identified in the literature, we still encountered challenges that we did not anticipate. We undertook corrective actions to address these, and the caseflow of the trial improved.

Conclusion

All the lessons from our trial are variants of the same issue: ensuring sufficient buy-in among the program staff and community stakeholders. Even though we thought we had engaged in extensive activities to promote buy-in, it turned out that these efforts were not adequate. Trials would benefit from developing an outreach plan that targets individuals from across the organizational chart of involved organizations, is ongoing, and actively solicits concerns from stakeholders so that these can be addressed in a timely fashion. These activities represent a sizable amount of effort and need to be incorporated into project budgets.


Fig. 1
Fig. 2



Author information

Correspondence to M. Rebecca Kilburn.

About this article

Cite this article

Kilburn, M.R. Lessons from the randomized trial evaluation of a new parent program: when the evaluators see the glass as half full, and the community sees the glass as half empty. J Exp Criminol 8, 255–270 (2012). https://doi.org/10.1007/s11292-012-9152-1

