Selection Effects and Prevention Program Outcomes


Abstract

A primary goal of the paper is to provide an example of an evaluation design and analytic method that can be used to strengthen causal inference in nonexperimental prevention research. We used this method in a nonexperimental multisite study to evaluate short-term outcomes of a preventive intervention, and we accounted for effects of two types of selection bias: self-selection into the program and differential dropout. To provide context for our analytic approach, we present an overview of the counterfactual model (also known as Rubin's causal model or the potential outcomes model) and several methods derived from that model, including propensity score matching, the Heckman two-step approach, and full information maximum likelihood based on a bivariate probit model and its trivariate generalization. We provide an example using evaluation data from a community-based family intervention and a nonexperimental control group constructed from the Washington State biennial Healthy Youth Survey (HYS) risk behavior data (HYS n = 68,846; intervention n = 1,502). We identified significant effects of participant, program, and community attributes in self-selection into the program and program completion. Identification of specific selection effects is useful for developing recruitment and retention strategies, and failure to identify selection may lead to inaccurate estimation of outcomes and their public health impact. Counterfactual models allow us to evaluate interventions in uncontrolled settings and still maintain some confidence in the internal validity of our inferences; their application holds great promise for the field of prevention science as we scale up to community dissemination of preventive interventions.
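
To make the selection-correction logic concrete, below is a minimal sketch of the Heckman (1979) two-step estimator, one of the methods reviewed in the paper, applied to simulated data in Python. The variable names and data-generating process are hypothetical illustrations, not the models estimated in the study: a probit selection equation yields the inverse Mills ratio, which then enters the outcome regression as an added regressor.

    # Hypothetical illustration of the Heckman (1979) two-step correction.
    import numpy as np
    from scipy.stats import norm
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000

    x = rng.normal(size=n)   # covariate in the outcome equation
    z = rng.normal(size=n)   # exclusion restriction: shifts selection only

    # Correlated errors create selection bias: unobservables that drive
    # program participation also drive the outcome.
    errors = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
    e_sel, e_out = errors[:, 0], errors[:, 1]

    participate = (0.5 * x + 1.0 * z + e_sel > 0).astype(int)
    y = 1.0 + 2.0 * x + e_out              # observed only for participants

    # Step 1: probit selection equation, then the inverse Mills ratio.
    W = sm.add_constant(np.column_stack([x, z]))
    probit = sm.Probit(participate, W).fit(disp=0)
    index = probit.fittedvalues            # linear predictor W*gamma
    imr = norm.pdf(index) / norm.cdf(index)

    # Step 2: OLS on the selected sample with the IMR as a regressor.
    sel = participate == 1
    X = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
    print(sm.OLS(y[sel], X).fit().params)  # ~ [1.0, 2.0, rho*sigma]

Naive OLS on the participant subsample alone would confound the program effect with the selection process; the inverse Mills ratio term absorbs the correlation between the two error terms.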


Notes

  1. Note that this is a weaker assumption than the strong ignorability condition, which requires unconditional independence.

  2. Our number-of-programs variable refers to frequency, not location within the county, and hence reflects opportunity for access, not ease of access.

  3. The county-level variables are used as instrumental variables (explanatory variables that are correlated with the endogenous participation decision but not directly with the outcome) in our estimation; a minimal illustrative sketch follows these notes. Instrumental variable methods are explained in Technical Appendix B.

  4. As indicated in footnote 2, this variable refers to opportunity for access, not ease of access, and hence is unlikely to be related to completion.

  5. One alternative would have been to include dummy variables for each program; however, the program-specific risk factor averages and standard deviations contain more information. (The county-level dummies provide an additional control for non-independence of observations.) Another approach would have been to treat program as a random effect, but we are more interested in the fixed effects of programs than in the programs as a random sample of programs from which we wish to generalize to the population of programs at large (cf. Serlin et al. 2003).
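
As a companion to note 3, the sketch below illustrates the instrumental-variable logic on simulated data: a variable (a stand-in here for the county-level measures) that shifts participation but is excluded from the outcome equation identifies the treatment effect that naive OLS misstates. This is a hand-rolled two-stage least squares toy example under assumed names and parameters, not the paper's actual recursive probit system.

    # Hypothetical illustration of the instrumental-variable logic in note 3.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 10000

    z = rng.normal(size=n)       # instrument: shifts participation only
    u = rng.normal(size=n)       # unobserved confounder
    participate = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)
    y = 1.0 + 1.5 * participate + u + rng.normal(size=n)   # true effect = 1.5

    # Naive OLS is biased upward: u drives both participation and the outcome.
    print(sm.OLS(y, sm.add_constant(participate)).fit().params)

    # Stage 1: project participation onto the instrument.
    p_hat = sm.OLS(participate, sm.add_constant(z)).fit().fittedvalues

    # Stage 2: regress the outcome on predicted participation. The slope
    # recovers ~1.5 (standard errors here are not 2SLS-corrected).
    print(sm.OLS(y, sm.add_constant(p_hat)).fit().params)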

References

  • Arendt, J. N., & Holm, A. (2006). Probit models with binary endogenous regressors (working paper 4/2006). Retrieved from Department of Business and Economics at the University of Southern Denmark website: http://static.sdu.dk/mediafiles/Files/Om_SDU/Institutter/Ivoe/Disc_papers/Disc_2006/dpbe4%202006%20pdf.pdf. Accessed 23 Oct 2012.

  • Arthur, M. W., Briney, J. S., Hawkins, J. D., Abbott, R. D., Brooke-Weiss, B. L., & Catalano, R. F. (2007). Measuring risk and protection in communities using the Communities that Care Youth Survey. Evaluation and Program Planning, 30, 197–211.

  • Barnard, J., Frangakis, C. E., Hill, J. L., & Rubin, D. B. (2003). Principal stratification approach to broken randomized experiments. Journal of the American Statistical Association, 98, 299–323. doi:10.1198/016214503000071.

  • Berinsky, A. (2004). Silent voices: Opinion polls and political representation in America. Princeton: Princeton University Press.

  • Bhattacharya, J., Goldman, D., & McCaffrey, D. (2006). Estimating probit models with self-selected treatments. Statistics in Medicine, 25, 389–413. doi:10.1002/sim.2226.

  • Biglan, A., Hood, D., Brozovsky, P., Ochs, L., Ary, D., & Black, C. (1991). Subject attrition in prevention research. NIDA Research Monograph, 107, 213–234.

  • Bushway, S., Johnson, B. D., & Slocum, L. A. (2007). Is the magic still there? The use of the Heckman two-step correction for selection bias in criminology. Journal of Quantitative Criminology, 23, 151–178. doi:10.1007/s10940-007-9024-4.

  • Cook, T. D., & Steiner, P. M. (2010). Case matching and the reduction of selection bias in quasi experiments: The relative importance of pretest measures of outcome, of unreliable measurement, and of mode of data analysis. Psychological Methods, 15, 56–68. doi:10.1037/a0018536.

  • Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27, 724–750.

  • Dehejia, R. H., & Wahba, S. (2002). Propensity score-matching methods for nonexperimental causal studies. Review of Economics and Statistics, 84, 151–161.

  • Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.

  • Foster, E. M. (2010). Causal inference and developmental psychology. Developmental Psychology, 46, 1454–1480.

  • Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161.

  • Heckman, J. J., Ichimura, H., Smith, J., & Todd, P. (1996). Sources of selection bias in evaluating social programs: An interpretation of conventional measures and evidence on the effectiveness of matching as a program evaluation method. Proceedings of the National Academy of Sciences, 93, 13416–13420.

  • Hill, L. G., Goates, S. G., & Rosenman, R. (2010). Detecting selection effects in community implementations of family-based substance abuse prevention programs. American Journal of Public Health, 100, 623–630.

  • Kumpfer, K. L., Molgaard, V., & Spoth, R. (1996). The Strengthening Families Program for the prevention of delinquency and drug use. In R. D. V. Peters & R. J. McMahon (Eds.), Preventing childhood disorders, substance abuse, and delinquency. Banff International Behavioral Science Series (Vol. 3) (pp. 241–267). Thousand Oaks: Sage Publications.

  • Lahiri, K., & Song, J. G. (2000). The effect of smoking on health using a sequential self-selection model. Health Economics, 9, 491–511.

  • Lesaffre, E., & Molenberghs, G. (1991). Multivariate probit analysis: A neglected procedure in medical statistics. Statistics in Medicine, 10, 1391–1403.

  • Lochman, J. E., & van den Steenhoven, A. (2002). Family-based approaches to substance abuse prevention. The Journal of Primary Prevention, 23, 49–114.

  • Maxwell, S. E. (2010). Introduction to the special section on Campbell's and Rubin's conceptualizations of causality. Psychological Methods, 15, 1–2.

  • Maydeu-Olivares, A., Coffman, D. L., & Hartmann, W. M. (2007). Asymptotically distribution-free (ADF) interval estimation of coefficient alpha. Psychological Methods, 12, 157–176.

  • McGowan, H. M., Nix, R. L., Murphy, S. A., & Bierman, K. L. (2010). Investigating the impact of selection bias in dose–response analyses of preventive interventions. Prevention Science, 11, 239–251.

  • Neyman, J. (1990). On the application of probability theory to agricultural experiments: Essay on principles. Section 9. Statistical Science, 5, 465–480.

  • Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.

  • Redmond, C., Spoth, R., & Trudeau, L. (2002). Family- and community-level predictors of parent support seeking. Journal of Community Psychology, 30, 153–171.

  • Roodman, D. M. (2007). CMP: Stata module to implement conditional (recursive) mixed process estimator. Statistical software components. http://ideas.repec.org/c/boc/bocode/s456882.html. Accessed 23 Oct 2012.

  • Roodman, D. (2009). Estimating fully observed recursive mixed-process models with cmp. http://www.cgdev.org/content/publications/detail/1421516. Accessed 23 Oct 2012.

  • Rosenman, R., Mandal, B., Tennekoon, V., & Hill, L. G. (2010). Estimating treatment effectiveness with sample selection (working paper 2010-05). Retrieved from School of Economic Sciences at Washington State University website: http://faculty.ses.wsu.edu/WorkingPapers/Rosenman/WP2010-5.pdf. Accessed 23 Oct 2012.

  • Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701. doi:10.1037/h0037350.

  • Rubin, D. B. (1997). Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127, 757–763.

  • Rubin, D. B. (2004). Teaching statistical inference for causal effects in experiments and observational studies. Journal of Educational and Behavioral Statistics, 29, 343–367.

  • Rubin, D. B. (2008). For objective causal inference, design trumps analysis. Annals of Applied Statistics, 2, 808–840.

  • Serlin, R. C., Wampold, B. E., & Levin, J. R. (2003). Should providers of treatment be regarded as a random factor? If it ain't broke, don't “fix” it: A comment on Siemer and Joormann (2003). Psychological Methods, 8, 524–534.

  • Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15, 3–17.

  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton, Mifflin and Company.

  • Spoth, R., Randall, G. K., & Shin, C. (2008). Increasing school success through partnership-based family competency training: Experimental study of long-term outcomes. School Psychology Quarterly, 23, 70–89.

  • US Department of Labor, Bureau of Labor Statistics (2012). http://www.bls.gov/data/#unemployment. Accessed 23 Oct 2012.

  • Washington Office of Financial Management (2010). http://www.ofm.wa.gov/localdata/default.asp. Accessed 23 Oct 2012.

  • Washington State Department of Health (2006). Washington State Healthy Youth Survey. http://www.doh.wa.gov/Portals/1/Documents/Pubs/WashingtonStateHYS2006.pdf. Accessed 23 Oct 2012.

  • West, S. G., & Thoemmes, F. (2010). Campbell’s and Rubin’s perspectives on causal inference. Psychological Methods, 15, 18–37.

Acknowledgments

This study was supported in part by the National Institute on Drug Abuse (grants R21 DA025139-01A1 and R21 DA19758-01). We thank the Washington State Department of Health for providing the supplementary data sample, and we thank the program providers and families who participated in the program evaluation.

Author information

Corresponding author

Correspondence to Laura G. Hill.

Additional information

The first two authors are listed alphabetically and contributed equally to the conceptualization and writing of the manuscript.

Electronic Supplementary Materials

Below is the link to the electronic supplementary material.

ESM 1

(DOCX 134 kb)

Cite this article

Hill, L.G., Rosenman, R., Tennekoon, V. et al. Selection Effects and Prevention Program Outcomes. Prev Sci 14, 557–569 (2013). https://doi.org/10.1007/s11121-012-0342-x
