
The Journal of Primary Prevention, Volume 40, Issue 1, pp 89–109

Classifying Changes to Preventive Interventions: Applying Adaptation Taxonomies

  • Joseph N. Roscoe
  • Valerie B. Shapiro
  • Kelly Whitaker
  • B. K. Elizabeth Kim
Original Paper

Abstract

High-quality implementation is important for preventive intervention effectiveness. Although this implies fidelity to a practice model, some adaptation may be inevitable or even advantageous in routine practice settings. In order to organize the study of adaptation and its effect on intervention outcomes, scholars have proposed various adaptation taxonomies. This paper examines how four published taxonomies retrospectively classify adaptations: the Ecological Validity Framework (EVF; Bernal et al. in J Abnorm Child Psychol 23(1):67–82, 1995), the Hybrid Prevention Program Model (HPPM; Castro et al. in Prev Sci 5(1):41–45, 2004.  https://doi.org/10.1023/B:PREV.0000013980.12412.cd), the Moore et al. (J Prim Prev 34(3):147–161, 2013.  https://doi.org/10.1007/s10935-013-0303-6) taxonomy, and the Stirman et al. (Implement Sci 8:65, 2013.  https://doi.org/10.1186/1748-5908-8-65) taxonomy. We used these taxonomies to classify teacher-reported adaptations made during the implementation of TOOLBOX™, a social emotional learning program implemented in 11 elementary schools during the 2014–2015 academic year. Post-implementation, 271 teachers and staff responded to an online survey that included questions about adaptation, yielding 98 adaptation descriptions provided by 42 respondents. Four raters used each taxonomy to try to classify these descriptions. We assessed the extent to which raters agreed they could classify the descriptions using each taxonomy (coverage), as well as the extent to which raters agreed on the subcategory they assigned (clarity). Results indicated variance among taxonomies, and tensions between the ideals of coverage and clarity emerged. Further studies of adaptation taxonomies as coding instruments may improve their performance, helping scholars more consistently assess adaptations and their effects on preventive intervention outcomes.
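The coverage and clarity analyses described above rest on inter-rater agreement statistics; the reference list cites Cohen's (1960) kappa, which corrects observed agreement for agreement expected by chance. As a minimal sketch of how subcategory agreement between two raters could be computed, the following uses entirely hypothetical adaptation subcategory labels (`tailoring`, `drift`, `adding`), not codes from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's (1960) kappa for two raters assigning nominal labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each label's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical subcategory codes from two raters for ten descriptions.
a = ["tailoring", "drift", "tailoring", "adding", "drift",
     "tailoring", "adding", "drift", "tailoring", "adding"]
b = ["tailoring", "drift", "adding", "adding", "drift",
     "tailoring", "adding", "tailoring", "tailoring", "adding"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

Benchmarks such as Landis and Koch (1977), also cited below, are commonly used to interpret the resulting kappa values (e.g., 0.61–0.80 as "substantial" agreement).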

Keywords

Adaptation · Implementation · Measurement · Prevention · Social and emotional learning

Notes

Acknowledgements

This research was funded by the Stuart Foundation and a Hellman Foundation Graduate Fellow Award. We thank Mark Collin, Dr. Chuck Fisher, Pamela McVeagh-Lally, and our colleagues at the UC Berkeley Center for Prevention Research in Social Welfare (especially Dr. Sarah Accomazzo and Kimberly Knodel) for their contributions to this work. We also thank Dr. Stacey Alexeeff for statistical consultation. Finally, we thank the administrators, teachers, and staff who participated in this research, and Catherine Rodecker and Dr. Kathryn Mapps for their implementation and evaluation leadership. Aspects of this paper were previously presented at the 2016 Society for Prevention Research conference in San Francisco and the 2017 Society for Social Work and Research conference in New Orleans. All research protocols were approved by the Committee for the Protection of Human Subjects (CPHS) at the University of California, Berkeley.

Compliance With Ethical Standards

Conflict of Interest

The authors declare they have no conflicts of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

References

  1. Aarons, G. A., Green, A. E., Palinkas, L. A., Self-Brown, S., Whitaker, D. J., Lutzker, J. R., et al. (2012). Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implementation Science, 7(1), 32. https://doi.org/10.1186/1748-5908-7-32.
  2. Baumann, A. A., Powell, B. J., Kohl, P. L., Tabak, R. G., Penalba, V., Proctor, E. K., et al. (2015). Cultural adaptation and implementation of evidence-based parent-training: A systematic review and critique of guiding evidence. Children and Youth Services Review, 53, 113–120. https://doi.org/10.1016/j.childyouth.2015.03.025.
  3. Bernal, G., Bonilla, J., & Bellido, C. (1995). Ecological validity and cultural sensitivity for outcome research: Issues for the cultural adaptation and development of psychosocial treatments with Hispanics. Journal of Abnormal Child Psychology, 23(1), 67–82.
  4. Bernal, G., & Domenech Rodríguez, M. M. (2012). Cultural adaptations: Tools for evidence-based practice with diverse populations. Washington, DC: American Psychological Association.
  5. Castro, F. G., Barrera, M., Jr., & Martinez, C. R., Jr. (2004). The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science, 5(1), 41–45. https://doi.org/10.1023/B:PREV.0000013980.12412.cd.
  6. Center for Prevention Implementation Methodology. (n.d.). Retrieved July 13, 2018, from http://cepim.northwestern.edu/.
  7. Chambers, D. A., & Norton, W. E. (2016). The adaptome. American Journal of Preventive Medicine, 51(4), S124–S131. https://doi.org/10.1016/j.amepre.2016.05.011.
  8. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
  9. Colby, M., Hecht, M. L., Miller-Day, M., Krieger, J. L., Syvertsen, A. K., Graham, J. W., et al. (2013). Adapting school-based substance use prevention curriculum through cultural grounding: A review and exemplar of adaptation processes for rural schools. American Journal of Community Psychology, 51, 190–205. https://doi.org/10.1007/s10464-012-9524-8.
  10. Collin, M. A. (2015). TOOLBOX™ Primer. Sebastopol, CA: Dovetail Learning Inc.
  11. Cooper, B. R., Shrestha, G., Hyman, L., & Hill, L. (2016). Adaptations in a community-based family intervention: Replication of two coding schemes. The Journal of Primary Prevention, 37(1), 33–52. https://doi.org/10.1007/s10935-015-0413-4.
  12. Dusenbury, L., Brannigan, R., Hansen, W. B., Walsh, J., & Falco, M. (2005). Quality of implementation: Developing measures crucial to understanding the diffusion of preventive interventions. Health Education Research, 20(3), 308–313. https://doi.org/10.1093/her/cyg134.
  13. Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science. https://doi.org/10.1023/b:prev.0000013981.28071.52.
  14. Ferrer-Wreder, L., Sundell, K., & Mansoory, S. (2012). Tinkering with perfection: Theory development in the intervention cultural adaptation field. Child and Youth Care Forum, 41(2), 149–171. https://doi.org/10.1007/s10566-011-9162-6.
  15. Forgatch, M. S., Patterson, G. R., & DeGarmo, D. S. (2005). Evaluating fidelity: Predictive validity for a measure of competent adherence to the Oregon model of parent management training. Behavior Therapy, 36(1), 3–13.
  16. Glasgow, R. E., & Emmons, K. M. (2007). How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health, 28(1), 413–433. https://doi.org/10.1146/annurev.publhealth.28.021406.144145.
  17. Goncy, E. A., Sutherland, K. S., Farrell, A. D., Sullivan, T. N., & Doyle, S. T. (2015). Measuring teacher implementation in delivery of a bullying prevention program: The impact of instructional and procedural adherence and competence on student responsiveness. Prevention Science, 16(3), 440–450. https://doi.org/10.1007/s11121-014-0508-9.
  18. Gould, S. J. (2011). Full house. Cambridge: Harvard University Press.
  19. Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23.
  20. Hansen, W. B., Pankratz, M. M., Dusenbury, L., Giles, S. M., Bishop, D. C., Albritton, J., et al. (2013). Styles of adaptation: The impact of frequency and valence of adaptation on preventing substance use. Health Education, 113(4), 345–363.
  21. Keith, R. E., Hopp, F. P., Subramanian, U., Wiitala, W., & Lowery, J. C. (2010). Fidelity of implementation: Development and testing of a measure. Implementation Science, 5(1), 99. https://doi.org/10.1186/1748-5908-5-99.
  22. Kemp, L. (2016). Adaptation and fidelity: A recipe analogy for achieving both in population scale implementation. Prevention Science, 17(4), 429–438. https://doi.org/10.1007/s11121-016-0642-7.
  23. Landis, J. R., & Koch, G. G. (1977). An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33(2), 363–374.
  24. Leong, F. T., & Lee, S.-H. (2006). A cultural accommodation model for cross-cultural psychotherapy: Illustrated with the case of Asian Americans. Psychotherapy: Theory, Research, Practice, Training, 43(4), 410.
  25. Lewis, C. C., Stanick, C. F., Martinez, R. G., Weiner, B. J., Kim, M., Barwick, M., et al. (2015). The Society for Implementation Research Collaboration instrument review project: A methodology to promote rigorous evaluation. Implementation Science, 10(1), 2. https://doi.org/10.1186/s13012-014-0193-x.
  26. Lillehoj, C. J., Griffin, K. W., & Spoth, R. (2004). Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Education and Behavior, 31(2), 242–257. https://doi.org/10.1177/1090198103260514.
  27. Moore, J. E., Bumbarger, B. K., & Cooper, B. R. (2013). Examining adaptations of evidence-based programs in natural contexts. The Journal of Primary Prevention, 34(3), 147–161. https://doi.org/10.1007/s10935-013-0303-6.
  28. Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24(3), 315–340. https://doi.org/10.1177/109821400302400303.
  29. O'Connell, M. E. (2009). Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. Washington, DC: National Academies Press.
  30. Ogden, T., Bjørnebekk, G., Kjøbli, J., Patras, J., Christiansen, T., Taraldsen, K., et al. (2012). Measurement of implementation components ten years after a nationwide introduction of empirically supported programs: A pilot study. Implementation Science, 7(1), 49. https://doi.org/10.1186/1748-5908-7-49.
  31. Ozer, E. J., Wanis, M. G., & Bazell, N. (2010). Diffusion of school-based prevention programs in two urban districts: Adaptations, rationales, and suggestions for change. Prevention Science, 11(1), 42–55. https://doi.org/10.1007/s11121-009-0148-7.
  32. Rabin, B. A., Purcell, P., Naveed, S., Moser, R. P., Henton, M. D., Proctor, E. K., et al. (2012). Advancing the application, quality and harmonization of implementation science measures. Implementation Science, 7, 119. https://doi.org/10.1186/1748-5908-7-119.
  33. Resnicow, K., Davis, M., Smith, M., Lazarus-Yaroch, A., Baranowski, T., Baranowski, J., et al. (1998). How best to measure implementation of school health curricula: A comparison of three measures. Health Education Research, 13(2), 239–250.
  34. Rotheram-Borus, M. J., Swendeman, D., & Chorpita, B. F. (2012). Disruptive innovations for designing and diffusing evidence-based interventions. American Psychologist, 67(6), 463–476. https://doi.org/10.1037/a0028180.
  35. Schober, I., Sharpe, H., & Schmidt, U. (2013). The reporting of fidelity measures in primary prevention programmes for eating disorders in schools. European Eating Disorders Review. https://doi.org/10.1002/erv.2243.
  36. Shapiro, V. B., Kim, B. K. E., Accomazzo, S., & Roscoe, J. N. (2016). Predictors of rater bias in the assessment of social-emotional competence. International Journal of Emotional Education, 8(2), 25.
  37. Spoth, R., Rohrbach, L. A., Greenberg, M., Leaf, P., Brown, C. H., Fagan, A., et al. (2013). Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci Impact) framework. Prevention Science, 14(4), 319–351. https://doi.org/10.1007/s11121-012-0362-6.
  38. SPR MAPS II Task Force. (2008). Type 2 translational research: Overview and definitions. Retrieved June 13, 2017, from http://www.preventionresearch.org/SPR_Type2TranslationResearch_OverviewandDefinition.pdf.
  39. Stirman, S. W., Miller, C. J., Toder, K., & Calloway, A. (2013). Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science, 8, 65. https://doi.org/10.1186/1748-5908-8-65.
  40. Wiltsey Stirman, S., Gutner, C. A., Crits-Christoph, P., Edmunds, J., Evans, A. C., & Beidas, R. S. (2015). Relationships between clinician-level attributes and fidelity-consistent and fidelity-inconsistent modifications to an evidence-based psychotherapy. Implementation Science, 10(1), 115. https://doi.org/10.1186/s13012-015-0308-z.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Center for Prevention Research in Social Welfare, University of California Berkeley School of Social Welfare, Berkeley, USA
  2. School Mental Health Assessment, Research, and Training (SMART) Center, Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, USA
  3. USC Suzanne Dworak-Peck School of Social Work, University of Southern California, Los Angeles, USA