Multi-strategy in the Evaluation of Health Promotion Community Interventions: An Indicator of Quality

  • Zulmira Hartz
  • Carmelle Goldberg
  • Ana Claudia Figueiro
  • Louise Potvin

There is general agreement in the specialized literature on the need to design and conduct multi-strategy evaluations in health promotion and in the social sciences. “Many community-based health interventions include a complex mixture of many disciplines, varying degrees of measurement difficulty and dynamically changing settings … understanding multivariate fields of action may require a mixture of complex methodologies and considerable time to unravel any causal relationship” (McQueen & Anderson, 2001, p. 77). The meaning of the term multi-strategy, however, varies greatly. For some, multi-strategy corresponds to the use of multiple methods and data sources that allow for the participative evaluation of multiple dimensions, such as outcome, process, and the social and political context (Carvalho, Bodstein, Hartz, & Matida, 2004; Pan American Health Organisation, 2003). For others, the support for using multiple methods and strategies is rooted in a will to deploy multi-paradigm designs...

Keywords

Health Promotion, Evaluation Study, Health Promotion Intervention, Health Promotion Strategy, Traditional Dichotomy

Notes

Acknowledgements

We wish to acknowledge Prof. Luis Claudio S. Thuler for his valuable collaboration in the management of our database.

References

  1. Baker, Q. E., Davis, D. A., Gallerani, R., Sanchez, V., & Viadro, C. (2000). An evaluation framework for community health programs. Durham, NC: The Center for Advancement of Community-Based Public Health. Retrieved November 2007 from www.cdc.gov/eval/evalcbph.pdf
  2. Boutilier, M. A., Rajkumar, E., Poland, B. D., Tobin, S., & Badgley, R. F. (2001). Community action success in public health: Are we using a ruler to measure a sphere? Canadian Journal of Public Health, 92, 90–94.
  3. Carvalho, A. I., Bodstein, R. C., Hartz, Z. M. A., & Matida, A. H. (2004). Concepts and approaches in the evaluation of health promotion. Ciência & Saúde Coletiva, 9, 521–544.
  4. Cooksy, L. J., & Caracelli, V. J. (2005). Quality, context and use: Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26, 31–42.
  5. Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR, 48(RR-11).
  6. De Leeuw, E., & Skovgaard, T. (2005). Utility-driven evidence for healthy cities: Problems with evidence generation and application. Social Science & Medicine, 61, 1331–1341.
  7. Datta, L. E. (1997a). A pragmatic basis for mixed-method designs. New Directions for Evaluation, 74, 33–46.
  8. Datta, L. (1997b). Multimethod evaluations: Using case studies together with other methods. In E. Chelimsky & W. Shadish (Eds.), Evaluation for the 21st century (pp. 344–359). Thousand Oaks, CA: Sage.
  9. Fawcett, S. B., Paine-Andrews, A., Francisco, V. T., Schultz, J., Richter, K. P., Berkley-Patton, J., et al. (2001). Evaluating community initiatives for health and development. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 241–270). Copenhagen: WHO Regional Publications, European Series, No. 92.
  10. Gendron, S. (2001). Transformative alliance between qualitative and quantitative approaches in health promotion research. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 107–122). Copenhagen: WHO Regional Publications, European Series, No. 92.
  11. Goldberg, C. (2005). The effectiveness conundrum in health promotion (work in progress).
  12. Goodstadt, M., Hyndman, B., McQueen, D. V., Potvin, L., Rootman, I., & Springett, J. (2001). Evaluation in health promotion: Synthesis and recommendations. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 517–533). Copenhagen: WHO Regional Publications, European Series, No. 92.
  13. Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Evaluation, 74.
  14. Hartz, Z. (2003). Significado, validade e limites do estudo de avaliação da descentralização da saúde na Bahia: uma meta-avaliação. Anais do Congresso da Abrasco.
  15. Hills, M. D., Carroll, S., & O’Neill, M. (2004). Vers un modèle d’évaluation de l’efficacité des interventions communautaires en promotion de la santé: compte-rendu de quelques développements nord-américains récents. Promotion & Education, Suppl. 1, 17–21.
  16. Hughes, M., & Traynor, T. (2000). Reconciling process and outcome in evaluating community initiatives. Evaluation, 6, 37–49.
  17. Hulscher, M. E. J. L., Wensing, M., Grol, R. P. T. M., van der Weijden, T., & van Weel, C. (1999). Interventions to improve the delivery of preventive services in primary care. American Journal of Public Health, 89, 737–746.
  18. International Union for Health Promotion and Education. (1999). The evidence of health promotion effectiveness: Shaping public health in a new Europe. Brussels: ECSC-EC-EAEC.
  19. Love, A., & Russon, C. (2004). Evaluation standards in an international context. New Directions for Evaluation, 104(Winter), 5–14.
  20. McQueen, D. V., & Anderson, L. M. (2001). What counts as evidence: Issues and debates. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 63–81). Copenhagen: WHO Regional Publications, European Series, No. 92.
  21. McKinlay, J. B. (1996). More appropriate methods for community-level health interventions. Evaluation Review, 20, 237–243.
  22. Merzel, C., & D’Afflitti, J. (2003). Reconsidering community-based health promotion: Promise, performance and potential. American Journal of Public Health, 93, 557–574.
  23. Moreira, E., & Natal, S. (Eds.). (2006). Ensinando avaliação (Vol. 4). Brasil: Ministério da Saúde, CDC, ENSP/FIOTEC.
  24. Pan American Health Organisation. (2003). Recomendações para formuladores de políticas nas Américas (GT Municípios e Comunidades Saudáveis). Mimeo.
  25. Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9, 471–490.
  26. Potvin, L., & Richard, L. (2001). The evaluation of community health promotion programmes. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 213–240). Copenhagen: WHO Regional Publications, European Series, No. 92.
  27. Potvin, L., Haddad, S., & Frohlich, K. L. (2001). Beyond process and outcome evaluation: A comprehensive approach for evaluating health promotion programmes. In I. Rootman, M. Goodstadt, B. Hyndman, D. V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 45–62). Copenhagen: WHO Regional Publications, European Series, No. 92.
  28. Potvin, L. (2005). Why we should be worried about evidence-based practice in health promotion. Revista Brasileira de Saúde Materno Infantil, Suppl. 1, 2–8.
  29. Stufflebeam, D. (1999). Program evaluations metaevaluation checklist. Retrieved November 2007 from www.wmich.edu/evalctr/checklists/program_metaeval.htm
  30. Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of Evaluation, 22, 183–209.
  31. Stufflebeam, D. L. (Ed.). (2001). Evaluation models. New Directions for Evaluation, 89(Spring).
  32. Stufflebeam, D. L. (2004). A note on purposes, development and applicability of the Joint Committee Evaluation Standards. American Journal of Evaluation, 25, 99–102.
  33. Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Evaluation: Alternative approaches and practical guidelines. New York: Longman.
  34. Yarbrough, D. B., Shulha, L. M., & Caruthers, F. (2004). Background and history of the Joint Committee’s Program Evaluation Standards. New Directions for Evaluation, 104(Winter), 15–30.
  35. Yin, R. K. (1994). Discovering the future of the case study method in evaluation research. Evaluation Practice, 15, 283–290.

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Zulmira Hartz (1)
  • Carmelle Goldberg
  • Ana Claudia Figueiro
  • Louise Potvin (1)

  1. National School of Public Health (ENSP/Fiocruz), Rio de Janeiro, Brazil
