
Figurative Thinking and Models: Tools for Participatory Evaluation

  • Denis Allard
  • Angèle Bilodeau
  • Sylvie Gendron

In sociological terms, an evaluation can be considered a collective decision to step back, take a second look, and formulate a judgement on a public program. This collective decision is usually borne by a limited number of actors, who elaborate their thinking with the advice and support of an evaluator. Over the past two decades, major developments in the field of evaluation have emerged through the practice of “participatory evaluation.” This approach broadens the circle of actors beyond the initial proponents and the evaluator so as to widen the scope of the reflection as much as possible. A public program involves many actors, all of whom have interests at stake, some of which may diverge. When judgements are made without somehow including the diverse stakeholders or their spokespersons, issues concerning the results and their utilization are more likely to surface (Weiss, 1983a). Over the years, evaluators have become increasingly aware of the...

Keywords

Evaluation Project, Program Theory, Public Health Authority, Participatory Evaluation, Partner Notification

References

  1. Abma, T. A. (2006). The practice and politics of responsive evaluation. American Journal of Evaluation, 27, 31–43.
  2. Allard, D., & Adrien, A. (2007). Infection au VIH et personnes ne prenant pas les précautions nécessaires afin d’éviter la transmission du virus – Évaluation d’implantation du Comité d’aide aux intervenants. Montréal: Direction de santé publique.
  3. Allard, D., Audet, C., St-Laurent, D., & Chevalier, S. (2003). Évaluation du programme expérimental québécois de traitement des joueurs pathologiques – Rapport 6 – Monitorage évaluatif – Entrevues initiales auprès des décideurs et des coordonnateurs. Québec: Institut national de santé publique du Québec.
  4. Allard, D., Bilodeau, A., & Lefebvre, C. (2007). Le travail du planificateur public en situation de partenariat. In M.-J. Fleury, M. Tremblay, H. Nguyen & L. Bordeleau (Eds.), Le système sociosanitaire au Québec – Gouvernance, régulation et participation (pp. 479–494). Montréal: Gaëtan Morin.
  5. Allard, D., & Ferron, M. (2000). Évaluation du programme PAD-PRAT – Vie et reproduction d’un programme. Montréal: Institut de recherche en santé et sécurité du travail.
  6. Allard, D., Kimpton, M. A., Papineau, É., & Audet, C. (2006). Évaluation du programme expérimental sur le jeu pathologique – Monitorage évaluatif – Entrevues avec les directions et les coordonnateurs sur l’organisation des services et leur évolution. Québec: Institut national de santé publique du Québec.
  7. Ascher, F. (2005). La métaphore est un transport. Des idées sur le mouvement au mouvement des idées. Cahiers internationaux de Sociologie, CXVIII, 37–54.
  8. Barel, Y. (1989). Le paradoxe et le système – Essai sur le fantastique social. Grenoble: Presses universitaires de Grenoble.
  9. Barnes, M., Matka, E., & Sullivan, H. (2003). Evidence, understanding and complexity – Evaluation in non-linear systems. Evaluation, 9, 265–284.
  10. Barton, A. (1965). Le concept d’espace d’attributs en sociologie. In R. Boudon & P. Lazarsfeld (Eds.), Le vocabulaire des sciences sociales (pp. 140–170). Paris: Mouton.
  11. Bertalanffy, L. von (1968). General system theory. New York: George Braziller.
  12. Bilodeau, A., Allard, D., & Chamberland, C. (1998). L’évaluation participative des priorités régionales de prévention-promotion de la santé et du bien-être. Montréal: Direction de santé publique.
  13. Brandon, P. R. (1999). Involving program stakeholders in reviews of evaluators’ recommendations for program revisions. Evaluation and Program Planning, 22, 363–372.
  14. Case, D. D., Grove, T., & Apted, C. (1990). The community’s toolbox: The idea, methods and tools for participatory assessment, monitoring and evaluation in community forestry. Bangkok: Regional Wood Energy Development Program in Asia (May 8, 2002); http://www.fao.org/docrep/x5307e/x5307e00.htm
  15. Chambers, D. E., Wedel, K. R., & Rodwell, M. K. (1992). Evaluating social programs. Boston: Allyn and Bacon.
  16. Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to outcomes 2004 – Promoting accountability through methods and tools for planning, implementation and evaluation. Santa Monica: Rand Corporation.
  17. De Coster, M. (1978). L’analogie en sciences humaines. Paris: Presses universitaires de France.
  18. Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks: Sage.
  19. Fournier, D. M. (1995). Establishing evaluative conclusions: A distinction between general and working logic. New Directions for Evaluation, 68, 15–31.
  20. Goertzen, J. R., Hampton, M. R., & Jeffery, B. L. (2003). Creating logic models using grounded theory: A case example demonstrating a unique approach to logic model development. Canadian Journal of Program Evaluation, 18, 115–138.
  21. Gottfredson, G. G. (1986). A theory-ridden approach to program evaluation – A method for stimulating researcher-implementer collaboration. Evaluation Studies Review Annual, 11, 522–533.
  22. Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park: Sage.
  23. House, E. R. (1983). How we think about evaluation. New Directions for Program Evaluation, 19, 5–25.
  24. Hummelbrunner, R. (2004). A systems approach to evaluation – Applications of systems theory and systems thinking in evaluations. Lausanne: Fourth European Evaluation Society Conference.
  25. Juan, S. (1999). Méthodes de recherche en sciences sociohumaines – Exploration critique des techniques. Paris: Presses universitaires de France.
  26. Kaminsky, A. (2000). Beyond the literal: Metaphors and why they matter. New Directions for Evaluation, 86, 69–80.
  27. Lapierre, J. W. (1992). L’analyse des systèmes – L’application aux sciences sociales. Paris: Syros.
  28. Léger, J. M., & Florand, M. F. (1985). L’analyse de contenu: deux méthodes, deux résultats? In A. Blanchet & R. Ghiglione (Eds.), L’entretien dans les sciences sociales (pp. 237–273). Paris: Dunod.
  29. Le Moigne, J. L. (1977). La théorie du système général – Théorie de la modélisation. Paris: Presses universitaires de France.
  30. McKie, L. (2003). Rhetorical spaces: Participation and pragmatism in the evaluation of community health work. Evaluation, 9, 307–324.
  31. Mark, M. M., Henry, G. T., & Julnes, G. (2000). Evaluation – An integrated framework for understanding, guiding, and improving policies and programs. San Francisco: Jossey-Bass.
  32. Michel, J. L. (1994). La schématisation de l’avenir – Autour des travaux de Robert Estivals. Revue de bibliologie, schéma et schématisation, 4, 35–58.
  33. Miles, M. B., & Huberman, A. M. (2003). Analyse des données qualitatives. Bruxelles: De Boeck.
  34. Monnier, É. (1992). Évaluations de l’action des pouvoirs publics. Paris: Economica.
  35. Morin, E. (1986). La méthode – 3. La connaissance de la connaissance /1. Paris: Seuil.
  36. Morin, E. (2001). La méthode – 5. L’humanité de l’humanité – L’identité humaine. Paris: Seuil.
  37. Niemi, H., & Kemmis, S. (1999). Communicative evaluation – Evaluation at the crossroads. Lifelong Learning in Europe, 1, 55–64.
  38. Patton, M. Q. (1997). Utilization-focused evaluation – The new century text. Thousand Oaks: Sage.
  39. Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
  40. Radnofsky, M. L. (1996). Qualitative models: Visually representing complex data in an image/text balance. Qualitative Inquiry, 2, 385–410.
  41. Renger, R., & Titcomb, A. (2002). A three-step approach to teaching logic models. American Journal of Evaluation, 23, 493–503.
  42. Ribeill, G. (1974). Tensions et mutations sociales. Paris: Presses universitaires de France.
  43. Ryan, K. (2004). Serving public interests in educational accountability: Alternative approaches to democratic evaluation. American Journal of Evaluation, 25, 443–460.
  44. Ryan, K., & De Stefano, L. (2001). Dialogue as a democratizing evaluation method. Evaluation, 7, 188–203.
  45. Scriven, M. (1998). Minimalist theory: The least theory that practice requires. American Journal of Evaluation, 19, 57–70.
  46. Simons, H., & McCormack, B. (2007). Integrating arts-based inquiry in evaluation methodology: Opportunities and challenges. Qualitative Inquiry, 13, 292–311.
  47. Smith, M. F. (1989). Evaluability assessment – A practical approach. Boston: Kluwer.
  48. Themessl-Huber, M. T., & Grutsch, M. A. (2003). The shifting locus of control in participatory evaluations. Evaluation, 9, 92–111.
  49. Touraine, A. (1973). Production de la société. Paris: Seuil.
  50. Van der Meer, F. B., & Edelenbos, J. (2006). Evaluation in multi-actor policy processes. Evaluation, 12, 201–218.
  51. Walliser, B. (1977). Systèmes et modèles – Introduction critique à l’analyse des systèmes. Paris: Seuil.
  52. Weiss, C. H. (1983a). The stakeholder approach in evaluation: Origins and promise. New Directions for Program Evaluation, 17, 3–12.
  53. Weiss, C. H. (1983b). Toward the future of stakeholder approach in evaluation. New Directions for Program Evaluation, 17, 83–96.
  54. W. K. Kellogg Foundation. (2001). Using logic models to bring together planning, evaluation, & action – Logic model development guide. Michigan.

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Denis Allard (1)
  • Angèle Bilodeau
  • Sylvie Gendron

  1. Université de Montréal, Montreal, Canada
