
The Evaluation of Prevention and Health Promotion Programs


Abstract

The evaluation of prevention and health promotion programs is one component of the broader field of evaluation research, also known as social program evaluation. Evaluation research applies the practices and principles of social research to assess the conceptualization, design, implementation, effectiveness, and efficiency of social interventions (Rossi & Freeman, 1993). Prevention program evaluation draws on knowledge and traditions from several disciplines and fields of study, including public health, psychology, sociology, education, social work, social policy, and public administration.


References

  • Affholter, D.P. (1994). Outcome monitoring. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 96–118). San Francisco: Jossey-Bass.

  • Albee, G.W. (1996). Revolutions and counterrevolutions in prevention. American Psychologist, 51, 1130–1133.

  • Allen, H., Cordes, H., & Hart, J. (1999). Vitalizing communities: Building on assets and mobilizing for collective action. Lincoln, NE: University of Nebraska-Lincoln.

  • Andrews, J.A., & Duncan, S.C. (1998). The effect of attitude on the development of adolescent cigarette use. Journal of Substance Abuse, 10, 1–7.

  • Beamish, W., & Bryer, F. (1999). Programme quality in Australian early special education: An example of participatory action research. Child Care, Health and Development, 25(6), 457–472.

  • Bloom, H.S., Bos, J.M., & Lee, S. (1999). Using cluster random assignment to measure program impacts: Statistical implications for the evaluation of education programs. Evaluation Review, 23(4), 445–469.

  • Boruch, R.F. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage.

  • Braden, J.P., & Bryant, T.J. (1990). Regression discontinuity designs: Applications for school psychologists. School Psychology Review, 19(2), 232–240.

  • Bruyere, S. (1993). Participatory action research: An overview and implications for family members of individuals with disabilities. Journal of Vocational Rehabilitation, 3(2), 62–68.

  • Campbell, D.T. (1969). Reforms as experiments. American Psychologist, 24, 409–429.

  • Campbell, D.T. (1974). Qualitative knowing in action research. Kurt Lewin Award Address, Society for the Psychological Study of Social Issues, presented at the 82nd annual meeting of the American Psychological Association, New Orleans, LA.

  • Campbell, D.T. (1996). Regression artifacts in time-series and longitudinal data. Evaluation and Program Planning, 19(4), 377–389.

  • Campbell, D.T., & Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Skokie, IL: Rand McNally.

  • Card, J.J., Greeno, C., & Peterson, J.L. (1992). Planning an evaluation and estimating its cost. Evaluation and Program Planning, 15(4), 75–89.

  • Cook, T.D. (1985). Postpositivist critical multiplism. In L. Shotland & M.M. Mark (Eds.), Social science and social policy (pp. 21–62). Beverly Hills: Sage.

  • Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Skokie, IL: Rand McNally.

  • Cook, T.D., & Shadish, W.R. (1994). Social experiments: Some developments over the past fifteen years. Annual Review of Psychology, 45, 545–580.

  • Cronbach, L.J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.

  • Cronbach, L.J. (1986). Social inquiry by and for earthlings. In D.W. Fiske & R.A. Shweder (Eds.), Metatheory in social science (pp. 83–107). Chicago: University of Chicago Press.

  • Cunningham, L.E., Michielutte, R., Dignan, M., Sharp, P., & Boxley, J. (2000). The value of process evaluation in a community-based cancer control program. Evaluation and Program Planning, 23, 13–25.

  • Curran, P.J., Stice, E., & Chassin, L. (1997). The relation between adolescent alcohol use and peer alcohol use: A longitudinal random coefficients model. Journal of Consulting and Clinical Psychology, 65, 130–140.

  • Devine, J.A., Brody, C.J., & Wright, J.D. (1997). Evaluating an alcohol and drug treatment program for the homeless: An econometric approach. Evaluation and Program Planning, 20(2), 205–215.

  • Dignan, M.B., & Carr, P.A. (1987). Program planning for health education and promotion. Philadelphia: Lea & Febiger.

  • Duncan, T.E., Duncan, S.C., & Hops, H. (1998). Latent variable modeling of longitudinal and multilevel alcohol use data. Journal of Studies on Alcohol, 59, 399–408.

  • Fishman, D.B. (1999). The case for pragmatic psychology. New York: New York University Press.

  • Forehand, G.A. (Ed.). (1982). Applications of time series analysis to evaluation. San Francisco: Jossey-Bass.

  • Furlong, M.J., Casas, J.M., Corral, C., & Gordon, M. (1997). Changes in substance use patterns associated with the development of a community partnership project. Evaluation and Program Planning, 20(3), 299–305.

  • Gaber, J. (2000). Meta-needs assessment. Evaluation and Program Planning, 23(1), 139–147.

  • Gabriel, R.M. (1997). Community indicators of substance abuse: Empowering coalition planning and evaluation. Evaluation and Program Planning, 20(3), 335–343.

  • Gibbons, R.D., Hedeker, D., Elkin, I., Waternaux, C., Kraemer, H.C., Greenhouse, J.B., Shea, M.T., Imber, S.D., Sotsky, S.M., & Watkins, J.T. (1993). Some conceptual and statistical issues in analysis of longitudinal psychiatric data. Archives of General Psychiatry, 50, 739–750.

  • Girden, E.R. (1992). ANOVA repeated measures. Thousand Oaks, CA: Sage.

  • Gordon, R. (1987). An operational classification of disease prevention. In J. Steinberg & M. Silverman (Eds.), Preventing mental disorders: A research perspective (pp. 20–26) (DHHS Publication No. ADM 87–1492). Rockville, MD: Alcohol, Drug Abuse, and Mental Health Administration.

  • Guba, E.G., & Lincoln, Y.S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco: Jossey-Bass.

  • Hargreaves, W.A., Shumway, M., Hu, T., & Cuffel, B. (1998). Cost-outcome methods for mental health. San Diego: Academic Press.

  • Harrow, B.S., & Lasater, T.M. (1996). A strategy for accurate collection of incremental cost data for cost-effectiveness analyses in field trials. Evaluation Review, 20(3), 275–290.

  • Hawe, P., Degeling, D., & Hall, J. (1990). Evaluating health promotion: A health worker’s guide. Sydney: MacLennan & Petty.

  • Hedeker, D., Gibbons, R.D., & Flay, B.R. (1994). Random-effects regression models for clustered data with an example from smoking prevention research. Journal of Consulting and Clinical Psychology, 62(4), 757–765.

  • Hedeker, D., McMahon, S.D., Jason, L.A., & Salina, D. (1994). Analysis of clustered data in community psychology: With an example from a worksite smoking cessation project. American Journal of Community Psychology, 22(5), 595–615.

  • Heinsman, D.T., & Shadish, W.R. (1996). Assignment methods in experimentation: When do nonrandomized experiments approximate answers from randomized experiments? Psychological Methods, 1, 154–169.

  • Heller, K., & Monahan, J. (1977). Psychology and community change. Homewood, IL: Dorsey Press.

  • Hendricks, M. (1994). Making a splash: Reporting evaluation results effectively. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 549–575). San Francisco: Jossey-Bass.

  • Hennessy, M., & Greenberg, J. (1999). Bringing it all together: Modeling intervention processes using structural equation modeling. American Journal of Evaluation, 20(3), 471–480.

  • Hernandez, M. (2000). Using logic models and program theory to build outcome accountability. Education and Treatment of Children, 23(1), 24–40.

  • Hess, B. (2000). Assessing program impact using latent growth modeling: A primer for the evaluator. Evaluation and Program Planning, 23(4), 419–428.

  • Horst, P., Nay, J.N., Scanlon, J.W., & Wholey, J.S. (1974). Program management and the federal evaluator. Public Administration Review, 34(4), 300–308.

  • Humphreys, K. (1993). Expanding the pluralist revolution: A comment on Omer and Strenger (1992). Psychotherapy, 30, 176–177.

  • Hurley, S. (1990). A review of cost-effectiveness analyses. Medical Journal of Australia, 153(Suppl.), S20–23.

  • Jaeger, M.E., & Rosnow, R.L. (1988). Contextualism and its implications for psychological inquiry. British Journal of Psychology, 79, 63–75.

  • Johnson, R.B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21, 93–110.

  • Kellam, S.G., Koretz, D., & Moscicki, E.K. (1999). Core elements of developmental epidemiologically-based prevention research. American Journal of Community Psychology, 27, 463–482.

  • Kellow, J.T. (1998). Beyond statistical significance tests: The importance of using other estimates of treatment effects to interpret evaluation results. American Journal of Evaluation, 19(1), 123–134.

  • Keppel, G. (1991). Design and analysis: A researcher’s handbook (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

  • Koch, R., Cairns, J.M., & Brunk, M. (2000). How to involve staff in developing an outcomes-oriented organization. Education and Treatment of Children, 23(1), 41–47.

  • Koepke, D., & Flay, B.R. (1989). Levels of analysis. In M.T. Braverman (Ed.), Evaluating health promotion programs: New directions for program evaluation (pp. 75–87). San Francisco: Jossey-Bass.

  • Kretzmann, J., & McKnight, J. (1996). Mobilizing community assets: Program for building communities from the inside out. Chicago: ACTA Publications.

  • Levine, M., & Perkins, D.V. (1987). Principles of community psychology. New York: Oxford.

  • Linney, J.A., & Wandersman, A. (1991). Prevention plus III: Assessing alcohol and other drug prevention programs at the school and community level. Washington, DC: US Department of Health & Human Services.

  • Lipsey, M.W., & Wilson, D.B. (1993). The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48, 1181–1209.

  • Lipsey, M., & Cordray, D.S. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345–375.

  • Long, B.B. (1989). The Mental Health Association and prevention. Prevention in Human Services, 6, 5–44.

  • Marcantonio, R.J., & Cook, T.D. (1994). Convincing quasi-experiments: The interrupted time series and regression-discontinuity designs. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 133–154). San Francisco: Jossey-Bass.

  • Mark, M.M. (1986). Validity typologies and the logic and practice of quasi-experimentation. New Directions for Program Evaluation, 31, 47–66.

  • McCleary, R., & Hay, R.A. (1980). Applied time series analysis for the social sciences. Newbury Park, CA: Sage.

  • McGraw, S.A., & Sellers, D.E. (1996). Using process data to explain outcomes: An illustration from the Child and Adolescent Trial for Cardiovascular Health (CATCH). Evaluation Review, 20(3), 291–312.

  • McGuire, W.J. (1983). A contextualist theory of knowledge: Its implications for innovation and reform in psychological research. In L. Berkowitz (Ed.), Advances in experimental social psychology (pp. 1–47). New York: Academic Press.

  • McGuire, W.J. (1986). A perspectivist looks at contextualism and the future of behavioral science. In R.L. Rosnow & M. Georgoudi (Eds.), Contextualism and understanding in behavioral science (pp. 271–303). New York: Pergamon.

  • Merriam, S. (1988). Case study research in education. San Francisco: Jossey-Bass.

  • Millar, A., Simeone, R.S., & Carnevale, J.T. (2001). Logic models: A systems tool for performance management. Evaluation and Program Planning, 24, 73–81.

  • Mohr, L.B. (1988). Impact analysis for program evaluation. Chicago: The Dorsey Press.

  • Morrisey, E., Wandersman, A., Seybolt, D., Nation, M., Crusto, C., & Davino, K. (1997). Toward a framework for bridging the gap between science and practice in prevention: A focus on evaluator and practitioner perspectives. Evaluation and Program Planning, 20(3), 367–377.

  • Mowbray, C., Bybee, D., Collins, M., & Levine, P. (1998). Optimizing evaluation quality and utility under resource constraints. Evaluation and Program Planning, 21, 59–71.

  • Mrazek, P.J., & Haggerty, R.J. (Eds.). (1994). Reducing risks for mental disorder: Frontiers for preventive intervention research. Washington, DC: Institute of Medicine, National Academy Press.

  • Muñoz, R.F., Mrazek, P.J., & Haggerty, R.J. (1996). Institute of Medicine report on prevention of mental disorders. American Psychologist, 51, 1116–1122.

  • Murray, D.M., & McKinlay, S.M. (1994). Design and analysis issues in community trials. Evaluation Review, 18(4), 493–514.

  • Murray, D.M., Moskowitz, J.M., & Dent, C.W. (1996). Design and analysis issues in community-based drug abuse prevention. American Behavioral Scientist, 39, 853–867.

  • Murrell, S.A. (1977). Utilization of needs assessment for community decision-making. American Journal of Community Psychology, 5, 461–468.

  • National Institute of Mental Health. (1996). A plan for prevention research at the National Institute of Mental Health: A report by the National Advisory Mental Health Council (NIH Publication No. 96–4093). Bethesda, MD: National Institutes of Health.

  • National Institute of Mental Health. (1998). Priorities for prevention research at NIMH: A report by the National Advisory Mental Health Council (NIH Publication No. 98–2079). Bethesda, MD: National Institutes of Health.

  • O’Sullivan, R.G., & O’Sullivan, J.M. (1998). Evaluation voices: Promoting evaluation from within programs through collaboration. Evaluation and Program Planning, 21, 21–29.

  • Osgood, D.W., & Smith, G.L. (1995). Applying hierarchical linear modeling to extended longitudinal evaluation: The Boys Town follow-up study. Evaluation Review, 19(1), 3–38.

  • Patton, M.Q. (1978). Utilization-focused evaluation. Beverly Hills: Sage.

  • Patton, M.Q. (1980). Qualitative evaluation methods. Beverly Hills, CA: Sage.

  • Patton, M.Q. (1997). Utilization-focused evaluation (3rd ed.). Beverly Hills: Sage.

  • Petrosino, A. (2000). Mediators and moderators in the evaluation of programs for children: Current practice and agenda for improvement. Evaluation Review, 24(1), 47–72.

  • Price, R.H. (1974). Etiology, the social environment, and the prevention of psychological dysfunction. In P. Insel & R. Moos (Eds.), Health and the social environment (pp. 74–89). Lexington, MA: Heath.

  • Price, R.H., & Smith, S.S. (1985). A guide to evaluating prevention programs in mental health (DHHS Publication No. ADM 85–1365). Washington, DC: US Government Printing Office.

  • Rappaport, J. (1977). Community psychology. New York: Holt, Rinehart & Winston.

  • Reichardt, C.S., & Trochim, W.M.K. (1995). Reports of the death of regression-discontinuity analysis are greatly exaggerated. Evaluation Review, 19(1), 39–64.

  • Riecken, H.W., Boruch, R.F., Campbell, D.T., Caplan, N., Glennan, T.K., Pratt, J.W., Rees, A., & Williams, W. (1974). Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press.

  • Reiss, D., & Price, R.H. (1996). National research agenda for prevention research: The National Institute of Mental Health report. American Psychologist, 51, 1109–1115.

  • Reynolds, A.J., & Temple, J.A. (1995). Quasi-experimental estimates of the effects of a preschool intervention. Evaluation Review, 19(4), 347–373.

  • Rogers, E.S., & Palmer-Erbs, V. (1994). Participatory action research: Implications for research and evaluation in psychiatric rehabilitation. Psychosocial Rehabilitation Journal, 18(2), 3–12.

  • Rosenbaum, D.P., & Hanson, G.S. (1998). Assessing the effects of school-based drug education: A six-year multilevel analysis of Project D.A.R.E. Journal of Research in Crime & Delinquency, 35(4), 381–412.

  • Rosnow, R.L., & Georgoudi, M. (Eds.). (1986). Contextualism and understanding in behavioral science: Implications for research and theory. New York: Praeger.

  • Rossi, P.H., & Freeman, H.E. (1985). Evaluation: A systematic approach (3rd ed.). Newbury Park: Sage.

  • Rossi, P.H., & Freeman, H.E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park: Sage.

  • Rossi, P.H., Freeman, H.E., & Lipsey, M.W. (1999). Evaluation: A systematic approach (6th ed.). Newbury Park: Sage.

  • Rowe, W.E. (1997). Changing ATOD norms and behaviors: A Native American community commitment to wellness. Evaluation and Program Planning, 20(3), 323–333.

  • Saxe, L., Reber, E., Hallfors, D., Kadushin, C., Jones, D., Rindskopf, D., & Beveridge, A. (1997). Think globally, act locally: Assessing the impact of community-based substance abuse prevention. Evaluation and Program Planning, 20(3), 357–366.

  • Schalock, R.L., & Thornton, C. (1988). Program evaluation: A field guide for administrators. New York: Plenum.

  • Scheirer, M.A. (1994). Designing and using process evaluation. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 40–68). San Francisco: Jossey-Bass.

  • Schmitt, N., Sacco, J.M., Ramey, S., & Chan, D. (1999). Parental employment, school climate, and children’s academic and social development. Journal of Applied Psychology, 84, 737–753.

  • Scriven, M. (1980). The logic of evaluation. Inverness, CA: Edgepress.

  • Sechrest, L., & Figueredo, A.J. (1993). Program evaluation. Annual Review of Psychology, 44, 645–674.

  • Sechrest, L., & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative? Evaluation and Program Planning, 18(1), 77–87.

  • Shadish, W.R. (1995). Philosophy of science and the quantitative-qualitative debates: Thirteen common errors. Evaluation and Program Planning, 18(1), 63–75.

  • Shadish, W.R., & Ragsdale, K. (1996). Random versus nonrandom assignment in controlled experiments: Do you get the same answer? Journal of Consulting and Clinical Psychology, 64, 1290–1305.

  • Shadish, W.R., Jr., Cook, T.D., & Leviton, L.C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.

  • Shaw, R.A., Rosati, M.J., Salzman, P., Coles, C.R., & McGeary, C. (1997). Effects on adolescent ATOD behaviors and attitudes of a five-year community partnership. Evaluation and Program Planning, 20(3), 307–313.

  • Sledge, W.H., Tebes, J.K., Wolff, N., & Helminiak, T. (1996). Inpatient vs. crisis respite care: Part II. Service utilization and costs. American Journal of Psychiatry, 153, 1074–1083.

  • Snow, D.L., & Tebes, J.K. (1991). Experimental and quasi-experimental designs in prevention research. In C.G. Leukefeld & W. Bukoski (Eds.), Drug abuse prevention intervention research: Methodological issues (pp. 140–158) (NIDA Research Monograph 107). Washington, DC: US Government Printing Office.

  • Stake, R.E. (1975). An interview with Robert Stake on responsive evaluation. In R.E. Stake (Ed.), Evaluating the arts in education: A responsive approach (pp. 33–38). Columbus, OH: Merrill.

  • Stake, R.E. (1978). The case study method in social inquiry. Educational Researcher, 7, 5–8.

  • Stake, R.E. (1994). Case studies. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 236–247). Thousand Oaks, CA: Sage.

  • Stanley, T.D. (1991). “Regression-discontinuity design” by any other name might be less problematic. Evaluation Review, 15(5), 605–624.

  • Suchman, E. (1967). Evaluative research. New York: Russell Sage.

  • Taylor, S.J., & Bogdan, R. (1998). Introduction to qualitative research methods (3rd ed.). New York: John Wiley & Sons.

  • Tebes, J.K. (1997, May). Self-help, prevention, and scientific knowledge. Invited paper presented at the Self-Help Pre-Conference of the 5th Biennial Conference of the Society for Community Research and Action, Columbia, SC.

  • Tebes, J.K. (2000). External validity and scientific psychology. American Psychologist, 55(12), 1508–1509.

  • Tebes, J.K., & Kraemer, D.T. (1991). Quantitative and qualitative knowing in mutual support research: Some lessons from the recent history of scientific psychology. American Journal of Community Psychology, 19, 739–756.

  • Tebes, J.K., & Helminiak, T.H. (1999). Measuring costs and outcomes in mental health. Mental Health Services Research, 1(2), 119–121.

  • Tebes, J.K., Kaufman, J.S., Connell, C., & Ross, E. (2001, June). Designing an evaluation to inform public policy. In J.K. Tebes (Chair), Real world contexts in program evaluation. Symposium conducted at the Eighth Biennial Conference of the Society for Community Research and Action, Atlanta, GA.

  • Tebes, J.K., Kaufman, J.S., & Chinman, M.J. (2002). Teaching about prevention to mental health professionals. In D. Glenwick & L. Jason (Eds.), Innovative approaches to the prevention of psychological problems. New York: Springer.

  • Thompson, B. (1993). The use of statistical significance tests in research: Bootstrap and other alternatives. Journal of Experimental Education, 61, 361–377.

  • Trochim, W.M.K. (1984). Research design for program evaluation: The regression discontinuity approach. Beverly Hills, CA: Sage.

  • US General Accounting Office. (1991). Program evaluation and methodology division: Designing evaluations. Washington, DC: Author.

  • Viadro, C.I., Earp, A.L., & Altpeter, M. (1997). Designing a process evaluation for a comprehensive breast cancer screening intervention: Challenges and opportunities. Evaluation and Program Planning, (3), 237–249.

  • W.K. Kellogg Foundation. (2000). Logic model development guide: Using logic models to bring together planning, evaluation and action. Battle Creek, MI: Author.

  • Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting to outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23(3), 389–395.

  • Webb, E.J., Campbell, D.T., Schwartz, R.D., & Sechrest, L.B. (1966). Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally.

  • Weiss, C.H. (1972). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.

  • Weiss, C.H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.

  • Wholey, J.S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute.

  • Wholey, J.S. (1983). Evaluation and effective public management. Boston: Little, Brown.

  • Wholey, J.S., Hatry, H.P., & Newcomer, K.E. (Eds.). (1994). Handbook of practical program evaluation. San Francisco: Jossey-Bass.

  • Whyte, W.F. (1989). Advancing scientific knowledge through participatory action research. Sociological Forum, 4(3), 367–385.

  • Winett, R.A. (1995). A framework for health promotion and disease prevention programs. American Psychologist, 50(5), 341–350.

  • Winett, R.A. (1998). Prevention: A proactive-developmental-ecological perspective. In T.H. Ollendick & M. Hersen (Eds.), Handbook of child psychopathology (3rd ed., pp. 637–671). New York: Plenum Press.

  • Wolff, N., Helminiak, T.W., & Tebes, J.K. (1997). Getting the cost right in cost-effectiveness analyses. American Journal of Psychiatry, 154(6), 736–743.

  • Woodruff, S.I. (1997). Random-effects models for analyzing clustered data from a nutrition education intervention. Evaluation Review, 21(6), 688–697.

  • Yin, R.K., & Kaftarian, S.J. (1997). Introduction: Challenges of community-based program outcome evaluations. Evaluation and Program Planning, 20(3), 293–297.

  • Yin, R.K., Kaftarian, S.J., Ping, Y., & Jansen, M.A. (1997). Outcomes from CSAP’s community partnership program: Findings from the national cross-site evaluation. Evaluation and Program Planning, 20(3), 345–355.


Editor information

Thomas P. Gullotta, Martin Bloom, Jonathan Kotch, Craig Blakely, Lynne Bond, Gerald Adams, Colette Browne, Waldo Klein, Jessica Ramos


Copyright information

© 2003 Springer Science+Business Media New York

About this chapter

Cite this chapter

Tebes, J.K., Kaufman, J.S., Connell, C.M. (2003). The Evaluation of Prevention and Health Promotion Programs. In: Gullotta, T.P., et al. Encyclopedia of Primary Prevention and Health Promotion. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0195-4_5


  • DOI: https://doi.org/10.1007/978-1-4615-0195-4_5

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-4961-7

  • Online ISBN: 978-1-4615-0195-4

