CUTOS: A Framework for Contextualizing Evidence
Researchers often emphasize the need for evidence from prior research to establish what is already known about a specific program or intervention. Yet even when high-quality evidence exists, consumers of research often find it difficult to interpret the findings and apply them to their own context. Many fields have adopted evidence-based practices and systematic reviews to determine what the evidence supports; however, the literature offers little guidance on how to interpret findings in light of the intended application, despite stressing the importance of doing so. This chapter introduces a framework that provides a set of tools for disentangling what is known and a space for considering whether, and which, findings apply to a specific context. The framework begins by identifying the known evidence about what is to be implemented (i.e., what the evidence supports) and then examines how the components of that evidence map onto a specific context (population, organization, resources, etc.). In addition, this active implementation framework emphasizes collaborative decision-making between researchers and stakeholders.
Keywords: Implementation · Generalization · Systematic review · Meta-analysis