Abstract
In field experiments evaluating preventive interventions, nonparticipation, whether persons decline at the outset or later fail to show up for the experimental treatment, poses a serious threat to the validity of the findings. Because nonparticipation is a pervasive reality in virtually every type of social intervention program, field experiments used to test such interventions for effectiveness should be designed and evaluated with this reality in mind.
This chapter demonstrates a procedure suggested by Bloom (1984) for estimating the effects of an intervention on its actual participants, as distinct from the global effects on all persons assigned to the intervention group, whether or not they showed up. Analyses were based on data from a field experiment that tested a preventive intervention for unemployed persons (Caplan, Vinokur, Price, & van Ryn, 1989). Effect size estimates were two to three times larger for the actual participants than for the entire experimental group on both employment outcomes (e.g., earnings) and mental health (anxiety and depression). Further analyses showed that, compared with participants, nonparticipants achieved significantly higher levels of reemployment at posttests and did not differ significantly from participants on any other outcome. The results suggest that the persons who most needed the intervention, and who benefited from it, were drawn into it through self-selection.
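To make the logic of Bloom's (1984) adjustment concrete, the following is a minimal sketch in Python. It rests on the identity Bloom derives: when assignment is random and no-shows receive none of the treatment, the intent-to-treat effect equals the effect on actual participants diluted by the show-up rate, so dividing by that rate recovers the participant-level effect. The function name, variable names, and illustrative numbers here are ours for exposition and are not drawn from the original study.

```python
# Bloom (1984) no-show adjustment: with random assignment and no-shows
# receiving zero treatment, the intent-to-treat (ITT) effect is the
# effect on actual participants diluted by the show-up rate, so
#   effect_on_participants = ITT_effect / show_up_rate.

def bloom_adjustment(mean_treatment: float,
                     mean_control: float,
                     show_up_rate: float) -> float:
    """Estimate the treatment effect on actual participants.

    mean_treatment: mean outcome for everyone assigned to treatment
                    (participants and no-shows combined)
    mean_control:   mean outcome for the control group
    show_up_rate:   proportion of the treatment group that participated
    """
    if not 0.0 < show_up_rate <= 1.0:
        raise ValueError("show_up_rate must be in (0, 1]")
    itt_effect = mean_treatment - mean_control
    return itt_effect / show_up_rate

# Hypothetical numbers, for illustration only (not from Caplan et al.,
# 1989): an ITT effect of 0.2 with a 54% show-up rate implies an effect
# of about 0.37 on those who actually attended.
if __name__ == "__main__":
    print(bloom_adjustment(mean_treatment=0.7, mean_control=0.5,
                           show_up_rate=0.54))
```

The dilution in this sketch is why, as reported above, effect size estimates for actual participants can run two to three times larger than those for the experimental group as a whole.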
References
Bloom, H. S. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8, 225–246.
Caplan, R. D., Vinokur, A. D., Price, R. H., & van Ryn, M. (1989). Job seeking, reemployment, and mental health: A randomized field experiment in coping with job loss. Journal of Applied Psychology, 74, 759–769.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York: Academic Press.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.
Heaney, C. A. (1991). Enhancing social support at the workplace: Assessing the effects of the caregiver support program. Health Education Quarterly, 18(4).
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161.
Kessler, R. C., Turner, J. B., & House, J. S. (1988). The effects of unemployment on health in a community survey: Main, modifying, and mediating effects. Journal of Social Issues, 44, 69–85.
U.S. Bureau of Labor Statistics. (1986, July). Current labor statistics: Employment data. Monthly Labor Review. Washington, DC: U.S. Bureau of Labor Statistics.
Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156–167.
Copyright information
© 2002 Springer Science+Business Media New York
Cite this chapter
Vinokur, A.D., Price, R.H., Caplan, R.D. (2002). From Field Experiments to Program Implementation: Assessing the Potential Outcomes of an Experimental Intervention Program for Unemployed Persons. In: Revenson, T.A., et al. Ecological Research to Promote Social Change. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0565-5_3
DOI: https://doi.org/10.1007/978-1-4615-0565-5_3
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-306-46728-8
Online ISBN: 978-1-4615-0565-5