There is general agreement in the specialized literature on the need to design and conduct multi-strategy evaluation in health promotion and in the social sciences. “Many community-based health interventions include a complex mixture of many disciplines, varying degrees of measurement difficulty and dynamically changing settings … understanding multivariate fields of action may require a mixture of complex methodologies and considerable time to unravel any causal relationship” (McQueen & Anderson, 2001, p. 77). The meaning of the term multi-strategy, however, varies greatly. For some, multi-strategy corresponds to the use of multiple methods and data sources that allow for the participative evaluation of multiple dimensions, such as outcome, process, and social and political context (Carvalho, Bodstein, Hartz, & Matida, 2004; Pan American Health Organisation, 2003). For others, the support for using multiple methods and strategies is rooted in a desire to deploy multi-paradigm designs (Goodstadt et al., 2001). More generally, however, the term refers to studies mixing qualitative and quantitative methods of enquiry (Gendron, 2001; Greene & Caracelli, 1997). Exceptionally, in the evaluation literature, multi-strategy also refers to the possibility of mixing evaluation approaches or models from diverse categories, such as advocacy, responsive, and theory-driven evaluation (Yin, 1994; Datta, 1997a,b; Stufflebeam, 2001). In all these references, the use of multi-strategy evaluation is justified as the best approach for minimizing validity problems when dealing with the complexity of multi-strategy interventions and with multi-center evaluation research.
Unfortunately, when examining research synthesis studies it is often impossible to estimate the actual utilization and effective contribution of multi-strategy evaluation, despite the fact that such evaluation is widely recommended to improve the knowledge resulting from health promotion intervention evaluations. Meta-analysis and other research synthesis methods are based on a very limited classification system for evaluation study designs, consisting essentially of whether a randomized controlled trial (RCT) was used (Hulscher, Wensing, Grol, Weijden, & Weel, 1999; International Union for Health Promotion and Education, 1999). This impedes the capacity to judge the appropriateness of evaluation approaches, particularly for the multi-strategy interventions that characterize complex community-based actions.
Considering the additional difficulties associated with conceptual definitions of health promotion in community settings (Boutilier, Rajkumar, Poland, Tobin, & Badgley, 2001; Potvin & Richard, 2001) and the absence of a standardized typology of multi-strategy evaluations and their implications for research validity and practical utility, this chapter explores the approaches and multi-strategy models implemented by evaluators in health promotion. To do so, we carried out a systematic review of scientific articles reporting on community health promotion evaluations conducted in countries of the Americas between 2000 and 2005 and available through electronic databases until May 2005. We were further interested in assessing the quality of these evaluation studies using quality indicators derived from international standards of meta-evaluation adequacy and from health promotion principles and values.
Two questions guided our work: (1) What are the characteristics of health promotion intervention evaluation studies? and (2) To what extent do these studies conform to common and specific evaluation standards? The need for using specific standards comes from the fact that, in order to convincingly demonstrate both expected and unintended effects, evaluation must use methodological approaches that are congruent with the principles and values of complex community health promotion interventions.
Methods
The Meta-Evaluation Approach
Meta-evaluation, in an informal sense, has been around for as long as anyone has recognized that evaluators are professionals and that, as in other professional practices, the quality of their products must be assessed. Cooksy and Caracelli (2005) have underlined that meta-evaluations conducted on a set of studies are useful for identifying strengths and weaknesses in evaluation practice. They serve the general goal of capacity development in the field of evaluation.
In short, meta-evaluation is the systematic evaluation of an evaluation study, based mainly on four categories of evaluation standards that have reached consensual agreement within the American Evaluation Association (AEA) for the evaluation of social programs (Stufflebeam, 2001, 2004; Yarbrough, Shulha, & Caruthers, 2004), public health interventions (Centers for Disease Control and Prevention, 1999; Hartz, 2003; Moreira & Natal, 2006), and community programs (Baker, Davis, Gallerani, Sanchez, & Viadro, 2000). These four categories are defined as follows, and the complete list of standards used in this study for each category is provided in Appendix 1.
The first category is labelled utility standards. It is composed of criteria concerned with whether the evaluation is useful; together these criteria answer questions directly relevant to users. Three standards from this category were selected for this study. The second category is composed of feasibility standards, which assess whether the evaluation makes sense. The single criterion selected from this category assesses whether the interests of the various relevant groups were taken into account in the evaluation design. The third category is made up of propriety standards, which concern the ethics of evaluation. The three criteria selected assess whether the evaluation was conducted with respect for the rights and interests of those involved in the intervention. The fourth category is composed of accuracy standards. The ten criteria selected relate to whether the evaluation conveyed technically adequate information regarding the determining features of merit of the evaluated program.
In addition to these four categories of standards, and in answer to concerns regarding international applications, the notion of open standards is now being developed to address the difficulties associated with transferring standard categories across cultures and contexts (Love & Russon, 2004). According to Stufflebeam (2001), the main challenge in a meta-evaluation is one of balancing merit and worth: how do the evaluation studies analyzed meet the requirements of a quality evaluation (merit) while fulfilling the audience’s needs for evaluative information (worth)? Although these standards are recognized by evaluators’ professional associations, these associations also recognize that standards are not recipes. They are useful starting points for developing trade-offs and adaptations to the specific situations faced by meta-evaluators (Worthen, Sanders, & Fitzpatrick, 1997).
Another category of open standards was defined for this meta-evaluation study. This category, called specificity standards, assesses whether the evaluation was theorized in accordance with community-based health promotion principles. Indeed, the complex nature of health promotion community interventions requires innovative and complex evaluative approaches, using a variety of methods that are coherent and consistent with initiatives that target changes at various levels. In addition, evaluation studies should be valid and allow the identification of theories and mechanisms by which actions and programs lead to changes in specific social contexts (Fawcett et al., 2001; Goodstadt et al., 2001; Goldberg, 2005). For this exploratory meta-evaluation, we adopted specific standards and criteria of a quality evaluation that follow three fundamental community-based health promotion principles: community capacity-building and accountability; disclosed theory or mechanisms of change; and multi-strategy evaluation. Multi-strategy evaluation was defined as an evaluation which combines quantitative/qualitative analyses and makes appropriate links between theory and methods, and process and outcome measures.
Based on these criteria, we assessed and scored the selected articles reporting on evaluations of community-based interventions. This scoring was performed in an anonymous meta-evaluation format, in the same spirit as that of professional evaluators’ societies, i.e., to enhance the quality and credibility of knowledge resulting from evaluation studies (Stufflebeam, 2001).
Data Collection and Analysis
The first step was to select the articles to be included in the meta-evaluation. A systematic review of community-based health promotion program evaluations available in major databases, such as CINAHL (Cumulative Index to Nursing & Allied Health Literature) and the Virtual Health Library of the Pan-American Health Organization registry, was undertaken. This registry was chosen for its ability to house English, French, Spanish, and Portuguese studies conducted throughout the Americas by tapping into prominent scientific databases in the field of health promotion. These databases include Lilacs (Latin American and Caribbean Health Sciences), SCIELO (Scientific Electronic Library Online), and Medline (International Database for Medical Literature).
Three search terms were used to identify eligible references, namely: health promotion, program evaluation, and community. Our initial search led to the identification of 58 references from Lilacs-SCIELO (L&S) and 120 references from Medline and CINAHL (M&C), which moved on to the second round of analysis, where abstracts were reviewed for their adherence to the specified definition of community interventions in health promotion. Differences in Medline’s default search settings led to slight modifications of our search specification, while restriction possibilities led to fairly large differences in search results. Medline’s default search settings required the specification of residence characteristics attributed to community as a search term, and allowed both full-text documents and evaluation studies to be used as search restrictions. The former restriction led to the identification of 53 studies, excluding systematic reviews of the literature, commentaries, books, and editorials, while the latter led to the identification of 23 studies rated as evaluation studies by their authors.
In a second step, 29% (17/58) of the abstracts referenced in L&S and 23% (28/120) of those in M&C were selected according to a broad definition of “community health promotion interventions”. The definition we used was based on Potvin & Richard (2001) and on Hills, Carrol, and O’Neill (2004), who restrict the term community interventions to interventions that use complex multiple strategies, focus on various targets of change (individual and environmental changes), and engage communities with a minimum level of participation. Such interventions are generally characterized as community development, community mobilization, community-based intervention, and community-driven initiatives (Boutilier et al., 2001). The third and final step of article selection was based on the agreement of two reviewers who read the complete texts. In cases of disagreement, a third reviewer was called upon. In this final stage, we selected articles designed to answer at least one evaluative question regarding the program under study, based on Potvin, Haddad, and Frolich’s (2001, p. 51) five-category classification of evaluation questions. These are: (1) Relevance questions: How relevant are the program’s objectives to the target of change? (2) Coherence questions: How coherent with the theory of the problem is the theory of treatment linking the program’s activities? (3) Responsiveness questions: How responsive is the program to changes in implementation and environmental conditions? (4) Achievement questions: What do the program’s activities and services achieve? (5) Results questions: With which changes are the program’s activities and services associated?
All 27 articles selected and listed in Appendix 2 (19 of which are from North America) were read by two independent coders. Four dimensions adapted from Goodstadt et al. (2001, p. 530) were used to describe the program that was evaluated. These were: (1) the intervention goals (improve health and well-being, reduce mortality and morbidity, or both); (2) the level of the targeted changes as stated in the intervention objectives (enhance individual capacity, enhance community capacity, or develop supportive institutional and social environments); (3) the health promotion strategies used (health education, health communication, organizational development, policy development, intersectoral collaboration, or research); and (4) the main reported results. According to Goodstadt et al.’s (2001) model, health promotion actions should have goals that extend beyond reducing and preventing ill health to include improving health and well-being, focusing on different levels and determinants of health and adopting strategic and operational activities to reach objectives in the areas given priority by the Ottawa Charter.
Three dimensions were coded to characterize the evaluation approaches used in the studies. The first dimension relates to the question that guided the evaluation study (relevance, coherence, responsiveness, achievements, or results). The second dimension assesses the main focus of the evaluation (process, outcome, or both). The third dimension concerns the methods used (qualitative, quantitative, or mixed).
Finally, each evaluation study was rated using the four American Evaluation Association standard categories listed in Appendix 1 and the five criteria of the specificity standards designed for this study. Because much of the information required to assess the criteria of the American Evaluation Association standards was only available in original reports or in evaluability assessment studies, each standard category was assessed globally. Each standard category and each of the five specificity criteria were given a score ranging from 0 to 10 by two independent reviewers, following Stufflebeam’s (1999) classification: Poor 0–2; Fair 3–4; Good 5–6; Very Good 7–8; and Excellent 9–10. A correlation coefficient of 0.86 between the reviewers’ scores was estimated using three randomly selected articles. All statistical analyses were performed using Epi Info 3.3.2.
Results
Table 14.1 presents the characteristics of the programs evaluated in the selected articles. Two characteristics are in line with health promotion principles. As shown in Table 14.1, only a minority of the programs targeted the reduction of mortality and morbidity as their sole objective. Another positive result is that, in addition to individual-level change objectives, the great majority of programs also targeted middle- and macro-level change objectives, in 70% and 48% of cases respectively. Concerning the health promotion strategies adopted or the activities carried out to achieve these objectives, health education and communication were the two most often implemented, and they appear always to be associated in local practices. Interestingly, all programs were composed of at least two types of action, meeting the minimal requirement for being labeled multi-strategy interventions. More interestingly, 20 of the 27 programs were made up of three or more components. The presence of research activities, as part of 13 of the 27 interventions, also seems to indicate an integration of knowledge development as an intervention strategy. Less encouraging, however, is the fact that only a minority of programs addressed issues of public policy. As for the evaluation results, not surprisingly the majority reported improved awareness, skills, and behaviors. Only a few reported positive effects on public policies and increased equity.
Table 14.2 describes the main characteristics of the evaluation approaches implemented in the selected articles. It is interesting to note that the evaluation studies seem to cover a broad range of evaluation questions, overcoming the traditional dichotomies between process and result evaluation, or between formative and summative evaluation. Indeed, our results clearly illustrate the richness of using a typology of questions to characterize the evaluation focus, compared with categorizations based on the traditional dichotomy. Our results also show that the use of multi-strategy approaches to evaluation is still somewhat limited. Only 40% of the reported studies focused on a mixture of process and outcome, and 36% used a mix of quantitative and qualitative analyses. We will come back to the relevance of this dimension as a quality indicator of health promotion evaluative research in the discussion.
The second issue addressed in this chapter has to do with the extent to which the evaluations meet common and specific evaluation standards. Figure 14.1 presents the ratings given to the 27 selected evaluation studies on the five meta-evaluation standard categories and on the five criteria that form the specificity standard category. In general, the published evaluation studies are of very high quality. Not surprisingly, standards of accuracy are the most commonly met, with almost 80% of studies (21/27) classified as very good or excellent. Conversely, specificity standards, which assess whether the evaluation was theorized in accordance with community-based health promotion principles, are the least often met in our sample. Only 52% (14/27) obtained a very good or excellent rating. An examination of the various dimensions of the specificity standards shows that 30% (8/27) of the reported evaluations had scores lower than 5.0 (fair) on the appropriate use of theory (S1) and on the use of multi-strategy evaluation (S3).
It is also worth noting that there seems to be greater variation in quality when evaluations are assessed with standards specific to health promotion, rather than with common standards. Figure 14.2 shows that although the medians of the rating distributions are similar across standards, the range of ratings is broader for standards specific to health promotion evaluation.
Discussion and Conclusion
Overall, these results show that, unfortunately, there is not yet an appropriate relationship between the level of complexity of interventions and the approaches used to evaluate them. We share McKinlay’s (1996) regret about the deficiency of process evaluation: “Most of disappointing large-scale and costly community interventions reported in recent years had no process evaluation, so it is impossible to know why they failed or whether perhaps they succeeded on some other level” (p. 240). There are, however, examples of studies in which some reconciliation of process and outcome evaluation, supported by an appropriate theory of change for complex community initiatives, has been implemented. The study by Hughes and Traynor (2000), for example, illustrates how such an approach can enable accurate reporting on a program’s results when it is implemented in different contexts.
Overall, evaluation practice needs to be better aligned with the principles of health promotion when community health promotion interventions are evaluated. The interventions’ high degree of complexity is very seldom matched by multi-method approaches to evaluation. With all the limitations associated with an exploratory meta-evaluation of a limited number of evaluation research reports, we think that three main messages can be drawn from this work.
The first message is that a relatively simple way to improve the usefulness and relevance of evaluation research for health promotion is to examine the quality of health promotion evaluations using both common and specific meta-evaluation standards. Systematic meta-evaluation using a broad range of common criteria and of criteria specific to health promotion allows a much better assessment of the field than traditional dichotomous categories such as process versus outcome evaluation or experimental versus non-experimental designs. Our broad, inclusive strategy may have biased our sample toward studies showing positive results (19/25), apparently contradicting Merzel and D’Afflitti’s (2003) comments on the modest impact of community-based programs over the past 20 years. But it is also possible that Merzel and D’Afflitti’s (2003) results were an artifact of inclusion criteria that limited their analysis to experimental-control study designs, thus restricting the expression of intervention effectiveness “precisely because … the phenomena under study do not lend themselves to an application of that methodology” (De Leeuw & Skovgaard, 2005, p. 1338).
The second message is a plea for a better alignment between health promotion and the evaluation of health promotion. If we are really serious about the principle that health promotion interventions are multi-strategy, then we should require multi-strategy evaluations. This is the condition for being able to demonstrate both beneficial and detrimental effects. The development and use of health promotion-specific quality criteria for the meta-evaluation of health promotion evaluations should be encouraged. Our exploratory meta-evaluation shows that there are quality deficiencies on these specific criteria and that the performance of health promotion studies is much less consistent on such specific criteria than on common criteria.
The third message is a reiteration of the hypothesis that an intervention’s demonstrated effectiveness is not independent of the evaluation model implemented to study it. Given that, among the six evaluation studies that showed negative results, five were multiple-strategy interventions evaluated with a single data analysis strategy, it would be interesting to conduct a larger meta-evaluation to test the relationship between the use of multi-strategy evaluation and the conclusions of the evaluation. Conversely, it would be critical to analyze the real meaning of positive evaluation results for studies with a low rating on the health promotion specificity criteria. A meta-evaluation based on a “realist synthesis” review (Pawson, 2003), grouping different programs and contexts under a common theoretical framework and mechanisms, could also increase the ability to highlight the role of multi-strategy evaluation in constructing the case for an effective intervention. As noted, there is “a tendency to underrate and invalidate knowledge derived from a deductive process applied to theoretical knowledge and to overrate the accumulation of empirical observations even if the empirical basis is not sufficient” (Potvin, 2005, p. S97).
These messages, however, are to be taken in light of the inherent limitations of our meta-evaluation study, which restrict the generalization of our observations. The first has to do with the content validity of the ratings on the four common standard categories (utility, feasibility, propriety, and accuracy), which relied on an overall impression rather than a series of detailed criteria. The only source of information regarding the programs evaluated was the published evaluation papers, substantially limiting our ability to nuance the quality assessment. Another problem, particularly important for evaluations carried out in Latin America, was that, given time and resource restrictions, we were not able to include the grey literature, and half of the selected studies were part of graduate theses. The small number of published articles available also limited our ability to contrast the patterns of evaluation in North America and South America.
We would like to conclude this chapter with some final remarks. At this point, we do not have meta-evaluation criteria for the evaluation of complex multi-strategy health promotion interventions. It is therefore quite difficult to assess whether multi-strategy evaluations are best able to provide valid results when evaluating such interventions.
References
Baker, Q. E., Davis, D. A., Gallerani, R., Sanchez, V., & Viadro, C. (2000). An evaluation framework for community health programs. Durham, NC: The Center for Advancement of Community-Based Public Health. Downloaded in November 2007 from: www.cdc.gov/eval/evalcbph.pdf
Boutilier, M. A., Rajkumar, E., Poland, B. D., Tobin, S., & Badgley, R. F. (2001). Community action success in public health: Are we using a ruler to measure a sphere? Canadian Journal of Public Health, 92, 90–94.
Carvalho, A. I., Bodstein, R. C., Hartz, Z. M. A., & Matida, A. H. (2004). Concepts and approaches in the evaluation of health promotion. Ciência & Saúde Coletiva, 9, 521–544.
Cooksy, L. J., & Caracelli, V. J. (2005). Quality, context and use. Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26, 31–42.
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR, 48 (RR-11).
De Leeuw, E., & Skovgaard, T. (2005). Utility-driven evidence for healthy cities: Problems with evidence generation and application. Social Science & Medicine, 61, 1331–1341.
Datta, L. E. (1997a). A pragmatic basis for mixed-method designs. New Directions for Program Evaluations, 74, 33–46.
Datta, L. (1997b). Multimethod evaluations: Using case studies together with other methods. In E. Chelimsky, & W. Shadish (Eds.), Evaluation for the 21st century (pp. 344–359). Thousand Oaks: Sage.
Fawcett, S. B., Paine-Andrews, A., Francisco, V. T., Schultz, J., Richter, K. P., Berkley-Patton, J., et al. (2001). Evaluating community initiatives for health and development. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 241–270). Copenhague: WHO regional publications. European series; No. 92.
Gendron, S. (2001). Transformative alliance between qualitative and quantitative approaches in health promotion research. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 107–122). Copenhague: WHO regional publications. European series; No. 92.
Goldberg, C. (2005). The effectiveness conundrum in health promotion (work in progress).
Goodstadt, M., Hyndman, B., McQueen, D. V., Potvin, L., Rootman, I., & Springett, J. (2001). Evaluation in health promotion: synthesis and recommendations. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 517–533). Copenhague: WHO regional publications. European series; No. 92.
Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed-method evaluation: The challenges and benefits for integrating diverse paradigms. New Directions for Program Evaluations, 74.
Hartz, Z. (2003). Significado, validade e limites do estudo de avaliação da descentralização da saúde na Bahia: uma meta-avaliação. Anais Congresso da Abrasco.
Hills, M. D., Carrol, S., & O’Neill, M. (2004). Vers un modèle d’évaluation de l’efficacité des interventions communautaires en promotion de la santé: compte-rendu de quelques développements Nord-américains récents. Promotion & Education, suppl. 1, 17–21.
Hughes, M., & Traynor, T. (2000). Reconciling process and outcome in evaluating community initiatives. Evaluation, 6, 37–49.
Hulscher, M. E. J. L, Wensing, M., Grol, R. P. T. M., Weijden, T. van der & Weel, C. van (1999). Interventions to improve the delivery of preventive services in primary care. American Journal of Public Health, 89, 737–746.
International Union for Health Promotion and Education (1999). The evidence of health promotion effectiveness. Shaping public health in a new Europe. Bruxelles: ECSC-EC-EAEC.
Love, A., & Russon, C. (2004). Evaluation standards in an international context. New Directions for Evaluation, 104(winter), 5–14.
McQueen, D. V., & Anderson, L. M. (2001). What counts as evidence: issues and debates. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 63–81). Copenhague: WHO regional publications. European series; No. 92.
McKinlay, J. B. (1996). More appropriate methods for community-level health interventions. Evaluation Review, 20, 237–243.
Merzel, C., & D’Afflitti, J. (2003). Reconsidering community-based health promotion. Promise, performance and potential. American Journal of Public Health, 93, 557–574.
Moreira, E., & Natal, S. (Eds.). (2006). Ensinando avaliação, vol. 4. Brasil: Ministério da Saúde, CDC, ENSP/FIOTEC.
Pan American Health Organisation. (2003). Recomendações para formuladores de políticas nas Américas (GT municípios e Comunidades Saudáveis). Mimeo.
Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9, 471–490.
Potvin, L., & Richard, L. (2001). The evaluation of community health promotion programmes. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 213–240). Copenhague: WHO regional publications. European series; No. 92.
Potvin, L., Haddad, S., & Frolich, K.L. (2001). Beyond process and outcome evaluation: a comprehensive approach for evaluating health promotion programmes. In I. Rootman, M. Goodstadt, B. Hyndman, D.V. McQueen, L. Potvin, J. Springett, & E. Ziglio (Eds.), Evaluation in health promotion. Principles and perspectives (pp. 45–62). Copenhague: WHO regional publications. European series, No. 92.
Potvin, L. (2005). Why we should be worried about evidence-based practice in health promotion. Revista Brasileira de Saúde Materno Infantil, Suppl. 1, 2–8.
Stufflebeam, D. (1999). Program evaluations metaevaluation checklist. Downloaded in November 2007 from: www.wmich.edu/evalctr/checklists/program_metaeval.htm
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of Evaluation, 22, 183–209.
Stufflebeam, D. L. (Ed.). (2001). Evaluation models. New Directions for Program Evaluation, 89, Spring.
Stufflebeam, D. L. (2004). A note on purposes, development and applicability of the Joint Committee Evaluation Standards. The American Journal of Evaluation, 25, 99–102.
Worthen, B. R., Jr., Sanders, J. R., & Fitzpatrick, J. L. (1997). Evaluation: Alternative approaches and practical guidelines. New York: Longman.
Yarbrough, D. B., Shulha, L. M., & Caruthers, F. (2004). Background and history of the Joint Committee’s Program Evaluation Standards. New Directions for Evaluation, 104(winter), 15–30.
Yin, R. K. (1994). Discovering the future of the case study method in evaluation research. Evaluation Practice, 15, 283–290.
Acknowledgements
We wish to acknowledge Prof. Luis Claudio S. Thuler, for his valuable collaboration in the management of our database.
Appendices
Appendix 1 – Meta-evaluation Checklist for Health Promotion Programs, Adapted from Centers for Disease Control and Prevention (1999), Stufflebeam (2001), Goodstadt et al. (2001), and Goldberg (2005)
Utility standards: the evaluation will serve the information needs of intended users.
- U1 – Stakeholder identification. The individuals involved in or affected by the evaluation should be identified so that their needs can be addressed.
- U4 – Values identification. The perspectives, procedures, and rationale used to interpret the findings should be carefully described so that the bases for value judgments are clear.
- U7 – Evaluation impact. Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, to increase the likelihood of the evaluation being used.
Feasibility standards: the evaluation will be realistic, prudent, diplomatic, and frugal.
- F2 – Political viability. During planning and conduct of the evaluation, consideration should be given to the varied positions of interest groups so that their cooperation can be obtained and possible attempts by any group to curtail evaluation operations or to bias or misapply the results can be averted or counteracted.
Propriety standards: the evaluation will be conducted legally, ethically, and with regard to the welfare of those involved in the evaluation, as well as those affected by its results.
- P1 – Service orientation. The evaluation should be designed to assist organizations in addressing and serving effectively the needs of the targeted participants.
- P5 – Complete and fair assessment. The evaluation should be complete and fair in its examination and recording of strengths and weaknesses of the program so that strengths can be enhanced and problem areas addressed.
- P6 – Disclosure of findings. The principal parties to an evaluation should ensure that the full evaluation findings with pertinent limitations are made accessible to the persons affected by the evaluation and any others with expressed legal rights to receive the results.
Accuracy standards: the evaluation will convey technically adequate information regarding the determining features of merit of the program.
- A1 – Program documentation. The program being evaluated should be documented clearly and accurately.
- A2 – Context analysis. The context in which the program exists should be examined in enough detail to identify probable influences on the program.
- A3 – Described purposes and procedures. The purposes and procedures of the evaluation should be monitored and described in enough detail to identify and assess them.
- A4 – Defensible information sources. Sources of information used in a program evaluation should be described in enough detail to assess the adequacy of the information.
- A5 – Valid information. Information-gathering procedures should be developed and implemented to ensure a valid interpretation for the intended use.
- A6 – Reliable information. Information-gathering procedures should be developed and implemented to ensure sufficiently reliable information for the intended use.
- A7 – Systematic information. Information collected, processed, and reported in an evaluation should be systematically reviewed and any errors corrected.
- A8 + A9 – Data analysis. Information should be analyzed appropriately and systematically so that evaluation questions are answered effectively.
- A10 – Justified conclusions. Conclusions reached should be explicitly justified for stakeholders’ assessment.
Specificity standards: the evaluation is theorized in accordance with community-based health promotion principles.
- S1 – Theory or mechanisms of change. The evaluation discloses the theory or mechanisms of change in a clear fashion (logic model of the evaluation).
- S2 – Community capacity-building. The evaluation adheres to empowerment and community capacity-building principles (“participatory users”).
- S3 – Multi-strategy evaluation. The evaluation combined quantitative and qualitative analyses that made appropriate links between theory and methods, and between process and outcome measures.
- S4 – Accountability. The evaluation provided information regarding community (stakeholder) accountability.
- S5 – Effective practices. The evaluation helped spread effective practices.
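For readers applying the checklist, the standards above amount to a scoring rubric: each reviewed study is checked against every item, group by group. The sketch below is purely illustrative (the chapter prescribes no software, and the function and study names are hypothetical): it encodes the checklist as a mapping from standard groups to item codes and tallies, for a given study, the fraction of standards met in each group.

```python
# Illustrative only: the meta-evaluation checklist of Appendix 1
# encoded as a rubric, with a helper that tallies standards met.

CHECKLIST = {
    "Utility": ["U1", "U4", "U7"],
    "Feasibility": ["F2"],
    "Propriety": ["P1", "P5", "P6"],
    "Accuracy": ["A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8+A9", "A10"],
    "Specificity": ["S1", "S2", "S3", "S4", "S5"],
}

def score_study(standards_met):
    """Return, per group, the fraction of its standards the study meets."""
    met = set(standards_met)
    return {
        group: sum(item in met for item in items) / len(items)
        for group, items in CHECKLIST.items()
    }

# Hypothetical study meeting all Utility and two Accuracy standards.
scores = score_study(["U1", "U4", "U7", "A1", "A2"])
print(scores["Utility"])             # 1.0 (all three Utility standards met)
print(round(scores["Accuracy"], 2))  # 0.22 (two of the nine Accuracy standards)
```

Per-group fractions, rather than a single total, preserve the checklist's structure: a study can score well on Accuracy while failing the Specificity standards that distinguish community-based health promotion evaluations.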
Appendix 2 – List of References for the Meta-evaluation Study
- Becker, Edmundo, Nunes, Bonatto, D., & Souza, R. (2004). Empowerment e avaliação participativa em um programa de desenvolvimento local e promoção da saúde. Ciência & Saúde Coletiva, 9, 655–667.
- Bodstein, R., Zancan, L., Ramos, C. L., & Marcondes, W. B. (2004). Avaliação da implantação do programa de desenvolvimento integrado em Manguinhos: Impasses na formulação de uma agenda local. Ciência & Saúde Coletiva, 9, 593–604.
- Cabrera-Pivaral, C. E., Mayari, C. L. N., Trueba, J. M. A., Perez, G. J. G., Lopez, M. G. V., Figueroa, I. V., et al. (2002). Evaluación de dos estrategias de educación nutricional vía radio en Guadalajara, México. Cadernos de Saúde Pública, 18, 1289–1294.
- Carrasquilla, G. (2001). An ecosystem approach to malaria control in an urban setting. Cadernos de Saúde Pública, 17(Suppl), 171–179.
- Cheadle, A., Beery, W. L., Greenwald, H. P., Nelson, G. D., Pearson, D., & Senter, S. (2003). Evaluating the California Wellness Foundation’s health improvement initiative: A logic model approach. Health Promotion Practice, 4, 146–156.
- Chiaravalloti, V. B., Morais, M. S., Chiaravalloti Neto, F., Conversani, D. T., Fiorin, A. M., Barbosa, A. A. C., et al. (2002). Avaliação sobre a adesão às práticas preventivas do dengue: O caso de Catanduva, São Paulo, Brasil. Cadernos de Saúde Pública, 18, 1321–1329.
- Chrisman, N. J., Senturia, K., Tang, G., & Gheisar, B. (2002). Qualitative process evaluation of urban community work: A preliminary view. Health Education & Behavior, 29, 232–248.
- Conrey, E. J., Frongillo, E. A., Dollahite, J. S., & Griffin, M. R. (2003). Integrated program enhancements increased utilization of farmers’ market nutrition program. Journal of Nutrition, 133, 1841–1844.
- D’Onofrio, C. N., Moskowitz, J. M., & Braverman, M. T. (2002). Curtailing tobacco use among youth: Evaluation of Project 4-Health. Health Education & Behavior, 29, 656–682.
- Figueiredo, R., & Ayres, J. R. C. M. (2002). Intervenção comunitária e redução da vulnerabilidade de mulheres às DST/Aids em São Paulo, SP. Revista de Saúde Pública, 36(4 Suppl), 96–107.
- Figueroa, I. V., Alfaro, N. A., Guerra, J. F., Rodriguez, G. A., & Roaf, P. M. (2000). Una experiencia de educación popular en salud nutricional en dos comunidades del Estado de Jalisco, México. Cadernos de Saúde Pública, 16, 823–829.
- Hawe, P., Shiell, A., Riley, T., & Gold, L. (2004). Methods for exploring implementation variation and local context within a cluster randomized community intervention trial. Journal of Epidemiology and Community Health, 58, 788–793.
- Kelly, C. M., Baker, E. A., Williams, D., Nanney, M. S., & Haire-Joshu, D. (2004). Organizational capacity’s effects on the delivery and outcomes of health education programs. Journal of Public Health Management Practice, 10, 164–170.
- Kim, S., Koniak-Griffin, D., Flaskerud, J. H., & Guarnero, P. A. (2004). The impact of lay health advisors on cardiovascular health promotion using a community-based participatory approach. Journal of Cardiovascular Nursing, 19, 192–199.
- Lantz, P. M., Viruell-Fuentes, E., Israel, B. A., Softley, D., & Guzman, R. (2001). Can communities and academia work together on public health research? Evaluation results from a community-based participatory research partnership in Detroit. Journal of Urban Health, 78, 495–507.
- Lima-Costa, M. F., Guerra, H. L., Firmo, J. O. A., Pimenta, F., Jr., & Uchoa, E. (2002). Um estudo epidemiológico da efetividade de um programa educativo para o controle da esquistossomose em Minas Gerais. Revista Brasileira de Epidemiologia, 5, 116–128.
- MacLean, D., Farquharson, J., Heath, S., Barkhouse, K., Latter, C., & Joffres, C. (2003). Building capacity for heart health promotion: Results of a 5-year experience in Nova Scotia, Canada. American Journal of Health Promotion, 17, 202–212.
- Markens, S., Fox, S. A., Taub, B., & Gilbert, M. L. (2002). Role of black churches in health promotion programs: Lessons from the Los Angeles mammography promotion in churches program. American Journal of Public Health, 92, 805–810.
- McElmurry, B. J., Park, C. G., & Busch, A. G. (2003). The nurse-community health advocate team for urban immigrant primary health care. Journal of Nursing Scholarship, 35, 275–281.
- Moody, K. A., Janis, C. C., & Sepples, S. B. (2003). Intervening with at-risk youth: Evaluation of the Youth Empowerment and Support Program. Pediatric Nursing, 29, 263–270.
- Naylor, P-J., Wharf-Higgin, J., Blair, L., Green, L. W., & O’Connor, B. (2002). Evaluating the participatory process in a community-based heart health project. Social Science & Medicine, 55, 1173–1187.
- Nuñez, D. E., Armbruster, C., Phillips, W. T., & Gale, B. J. (2003). Community-based senior health promotion program using a collaborative practice model: The Escalante health partnerships. Public Health Nursing, 20, 25–32.
- Quinn, M. T., & McNabb, W. L. (2001). Training lay health educators to conduct a church-based weight loss program for African American women. The Diabetes Educator, 27, 231–238.
- Reininger, B. M., Vincent, M., Griffin, S. F., Valois, R. F., Taylor, D., Parra-Medina, D., et al. (2003). Evaluation of statewide teen pregnancy prevention initiatives: Challenges, methods, and lessons learned. Health Promotion Practice, 4, 323–335.
- Schulz, A. J., Zenk, S., Odoms-Young, A., Hollis-Neely, T., Nwankwo, R., Lockett, M., et al. (2005). Healthy eating and exercising to reduce diabetes: Exploring the potential of social determinants of health frameworks within the context of community-based participatory diabetes prevention. American Journal of Public Health, 95, 645–651.
- Stewart, A. L., Verboncoeur, C. J., McLellan, B. Y., Gillis, D. E., Rush, S., Mills, K. M., et al. (2001). Physical activity outcomes of CHAMPS II: A physical activity promotion program for older adults. Journal of Gerontology: Medical Sciences, 56A, 465–470.
- Williams, J. H., Belle, G. A., Houston, C., Haire-Joshu, D., & Auslander, W. F. (2001). Process evaluation methods of a peer-delivered health promotion program for African American women. Health Promotion Practice, 2, 135–142.
Copyright information
© 2008 Springer Science+Business Media, LLC
Hartz, Z., Goldberg, C., Figueiro, A.C., Potvin, L. (2008). Multi-strategy in the Evaluation of Health Promotion Community Interventions: An Indicator of Quality. In: Potvin, L., McQueen, D.V., Hall, M., de Salazar, L., Anderson, L.M., Hartz, Z.M. (eds) Health Promotion Evaluation Practices in the Americas. Springer, New York, NY. https://doi.org/10.1007/978-0-387-79733-5_14
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-79732-8
Online ISBN: 978-0-387-79733-5