Prevention Science, Volume 20, Issue 8, pp 1178–1188

Commentary on Scaling-Up Evidence-Based Interventions in Public Systems

  • Cynthia Weaver
  • Melissa E. DeRosier

Abstract

This commentary reviews the findings and recommendations from the Society for Prevention Research (SPR) Mapping Advances in Prevention Science (MAPS) IV Task Force as detailed in the article by Fagan et al. (Prevention Science, 2019). In addition to highlighting similarities and differences with the prior MAPS II Task Force findings, discussion focuses on four extension recommendations: the importance of attending to equitable implementation as a pathway to evidence-based intervention (EBI) uptake by communities, the value of broadening conceptualization of data monitoring and evaluation capacity beyond EBI-specific data, the opportunity to more precisely guide EBI referrals from public systems, and the importance of EBI developers and purveyors proactively and deliberately operationalizing fidelity measurement using actionable constructs.

Keywords

Scaling-up · Prevention · Implementation · Evidence-based programs · Public systems · Child welfare · Juvenile justice

Effective and sustained scale-up of evidence-based interventions (EBIs) to improve population-level well-being continues to elude too many public systems in the USA (Catalano et al. 2012; Hawkins et al. 2015). While research evidence supports the efficacy of numerous EBIs at all three levels of prevention (universal, selective, and indicated), broad-scale everyday use in US public systems lags severely behind. To address this gap, the Society for Prevention Research (SPR) convened the Mapping Advances in Prevention Science (MAPS) IV Translation Research Task Force in 2016 to consider ways to scale-up EBIs in five public systems (behavioral health, child welfare, education, juvenile justice, and public health). The MAPS IV Task Force (Fagan et al. 2019) recommended three action steps to “move the needle” on population-level well-being through EBI scale-up, specifically:
  1. Increase public policies and funding to support creation, testing, and scaling-up of EBIs
  2. Develop and evaluate specific frameworks to foster EBI scale-up and address systems-level barriers
  3. Promote public support and community capacity for EBIs at scale, including partnerships between community stakeholders, policy makers, practitioners, and scientists

Common Factors That Influence Scale-Up

To inform these three action steps, the MAPS IV Task Force identified a common set of seven factors across five public systems that affect scale-up: (1) the degree to which systems enact public policies that require, recommend, and fund EBIs; (2) the degree to which EBIs are ready for scale-up; (3) public awareness and support for EBIs; (4) local engagement and capacity to implement EBIs; (5) the degree to which public system leadership supports EBIs; (6) availability of a skilled workforce to implement EBIs; and (7) data monitoring and evaluation capacity. In their paper, the MAPS IV authors used an ecological model to visualize the influence of these factors, where public policies and funding represent the most influential macro factor that envelopes all other scale-up factors (see Fagan et al. 2019). Alternatively, the MAPS IV Task Force authors note that this ecological model can also be interpreted from the micro level outward, with an initial focus on a specific EBI.

Infrastructure as a Common Theme Between MAPS IV and MAPS II Task Forces

The MAPS IV paper (Fagan et al. 2019) builds on the previous findings of the SPR MAPS II Translational Research Task Force, which operated from 2008 to 2014 and recommended a two-pronged research agenda to address barriers to adopting, implementing, and sustaining EBIs. The first MAPS II recommendation focused on building local infrastructure to support EBIs. The second recommendation focused on developing research questions relevant to real-world implementation of EBIs and scientific methods to answer those questions.

It is noteworthy that the need to build infrastructure to support EBI implementation and scale-up is a theme shared by the MAPS II and MAPS IV Task Force recommendations. MAPS II characterized this need at the micro level, as local infrastructure support, whereas the scale-up factors in the recent MAPS IV report are more outward facing, moving beyond the MAPS II micro layer of local infrastructure to gaps at the mezzo and macro levels. Several of the MAPS IV factors that affect scale-up cluster around the earlier MAPS II call for infrastructure development: public policies and funding that prioritize use of EBIs, the need for EBIs to be ready for scale-up, the need for leadership support and capacity to implement EBIs (including a skilled frontline provider workforce), and the need for data monitoring and evaluation to be integrated into implementation. In this commentary, we review the MAPS IV scale-up factors exclusively from the perspective of infrastructure development needs.

MAPS IV Common Set of Scale-up Factors Across Public Systems

Policy and Funding as a Scale-Up Factor

The degree to which systems enact public policies that require, recommend, or provide funding for EBIs in the USA can be conceptualized as a macro federal- or state-level infrastructure factor affecting scale-up (Fagan et al. 2019) and, perhaps more importantly, sustainability. Statutory language at the federal level requiring use of EBIs is the most difficult to change and, as a result, is the scale-up factor with the greatest potential to foster sustainability. The MAPS IV Task Force authors provide a helpful overview of various funding mechanisms that can support EBIs at the macro level, from federal mandates like the recent Title IV-E Family First Prevention Services Act (FFPSA) to federal or state discretionary grants and even Pay for Success models. We note that this landmark FFPSA legislation provides crucial opportunities for practice-based research in several contexts, at least three of which are (1) implementation studies of FFPSA-endorsed EBIs replicated in real-world child welfare settings, (2) building evidence for programs reviewed for FFPSA endorsement but lacking evidence at the required standard, and (3) building evidence for prevention programs selected by state or tribal child welfare systems that are not, as yet, endorsed by FFPSA.

The MAPS IV authors also highlight funding opportunities that now require community representation, such as the Patient-Centered Outcome Research Institute or SAMHSA’s State/Tribal Planning grants. We would add that recent calls to action in the field of implementation science stress the need for equitable implementation (DuMont et al. 2019). Inclusion of an authentic community voice is an important aspect of that emerging movement. To this end, we encourage funders to incentivize community participation by authorizing funding for community members to participate in advisory councils, policy councils, planning groups, implementation teams, etc.

EBI Readiness for Scale as a Factor: EBI Developers, Purveyors, and Intermediaries

Discussions of readiness for scale too often focus at the micro level on community readiness. Little consideration has been given to the readiness of EBI developers and purveyors to support implementation in communities, particularly their readiness to consider the culture and values of a community as important pathways to uptake and scale (DuMont et al. 2019). The MAPS IV authors provide helpful distinctions among three sources of EBI-specific readiness support: EBI developers, EBI purveyors, and EBI intermediaries. We consider each below, providing a few examples from our work that we consider promising.

EBI developers tend to be university-based and are involved in early development, testing, and larger-scale randomized controlled trials (RCTs) for their EBIs. This early work is generally funded by federal research grants. In the early days of prevention science field-building, these EBI developers rarely considered scale-up as part of the early development and testing process, and many EBIs incubated in university settings still struggle to develop business models that can accommodate scale-up in real-world settings (Neuhoff et al. 2017). The MAPS IV authors are optimistic that the next generation of research will include consideration of scale-up in the early development and testing of EBIs. We suspect that changes in research funding priorities, along with policy changes within academia, will be necessary to incentivize a focus on community participatory scale-up studies by the next generation of EBI developers, many of whom are likely to be based in university settings. In particular, research funding that prioritizes community-grounded co-design or adaptation of EBIs will benefit from multiple and diverse perspectives to deepen our understanding of participant-centered mechanisms for change. Policy changes within academia are also likely to be necessary because these studies take time and require researchers to respect the processes and timelines of community stakeholders.

We provide a current example of a “building-evidence pipeline” that began with a partnership between philanthropy and academia and ultimately resulted in a National Institutes of Health (NIH) R01 randomized controlled trial. The Annie E. Casey Foundation funded the Johns Hopkins University (JHU) School of Nursing to develop a culturally tailored prevention program for indigenous young parents and their 3- to 5-year-old Head Start children. Principal Investigator and indigenous Lakota Sioux researcher Dr. Teresa Brockie worked closely with the Fort Peck Assiniboine and Sioux community to co-design “Wa’ Kan ye’ zah” (Little Holy One), an intergenerational prevention intervention that culturally adapts, enhances, and combines evidence-based content with indigenous cultural empowerment components to enhance uptake and relevance to the lived experience of participants. The program combines components of the Common Elements Treatment Approach (CETA) and the JHU Family Spirit indigenous maternal home visiting program with tribal identity, communal mastery, smudging, and historical trauma. This example of Type III scaling-out (Aarons et al. 2017) secured NIH funding in 2019 to build out the infrastructure to test a frugal innovation to prevent Native American youth self-harm behaviors, starting in early life.

In contrast to university-based developers, EBI purveyors and intermediaries include a broad range of nonprofit and for-profit organizations that typically support training and coaching for specific EBIs, often at a national level. Multisystemic Therapy (MST) Services is an early example of a for-profit purveyor responsible for the national and international dissemination of one EBI beyond the original university developer setting. EBI intermediaries, on the other hand, typically support more than one EBI across a state or region. Some intermediaries are university-based “centers for excellence,” and some are stand-alone nonprofits or for-profits. An important consideration for intermediaries, as they coordinate implementation technical assistance for the EBIs they support, is the degree to which the developer deliberately readied that EBI for broad-scale use.

Level of readiness for scale is particularly relevant when state-based intermediaries support community-based service providers who work under contract with a public system to implement specific EBIs selected and funded by that system. EBI developer or purveyor readiness for scale may vary widely. Nimble and innovative intermediaries can adjust to developer or purveyor readiness gaps and develop tools and coaching strategies that accommodate scale within their place-based context. Intermediaries are often uniquely poised to identify these readiness gaps because they are typically closer to the front lines of EBI implementation than developers or purveyors.

The EPISCenter at Penn State University (episcenter.psu.edu) is a good example of a university-based center for excellence that supports specific EBIs across a state. We chose the EPISCenter as our example because of its unique scope: it supports implementation of nearly 250 replications of seventeen different EBIs across Pennsylvania, guided by the Communities that Care© (CTC) operating system for prevention planning (sdrg.org/CTCInterventions.asp). Funded jointly by the Pennsylvania Commission on Crime and Delinquency (PCCD) and the Pennsylvania Departments of Human Services and Drug and Alcohol Programs, the EPISCenter is an intermediary for a diverse portfolio of EBIs endorsed on the Blueprints for Healthy Youth Development registry. The portfolio of supported programs reflects specific state- and community-identified needs and is balanced in terms of ages, settings, and types of prevention (universal, selective, indicated). EBIs are chosen by multi-sector community coalitions based on the results of the Pennsylvania Youth Survey (PAYS), a risk and protective factor assessment based on the CTC© Youth Survey and planning process. The outcomes targeted by supported EBIs align with the priority youth outcomes, risks, and protective factors unique to each community. This type of community engagement and data-informed EBI selection is an important pathway to uptake. Not every EBI in Pennsylvania is selected through the CTC process; PCCD also allows other, similar strategic planning processes, such as PROSPER, Drug-Free Communities, and the SAMHSA Strategic Prevention Framework, but the majority use CTC and the PAYS survey.

In addition, Invest in Kids in Colorado (iik.org/our-programs) is a good example of a nimble state-based nonprofit intermediary using innovative strategies to scale-up multiple EBIs. Invest in Kids developed and delivers scalable implementation support tools and strategies for The Incredible Years and Nurse-Family Partnership based on the Active Implementation Framework (Fixsen et al. 2013). These tools align with state and local contexts in Colorado. For example, to serve the whole state of Colorado effectively, remote coaching is used in some cases. As part of this commentary, we encourage EBI developers and purveyors to reflect upon their readiness or appetite to support scale-up in real-world contexts. When EBI purveyors acknowledge gaps, nimble intermediaries—particularly those with community-grounded relationships and sensitivity to local context—can be valuable mechanisms for scale-up.

At the beginning of the preceding section on EBI readiness for scale, we agreed with DuMont et al. (2019) that implementation science too often perpetuates the power differential between communities receiving services and the EBI developer/university/research/funding pipeline. We would be remiss in failing to note that not all prevention programs are developed in university settings. For programs developed outside academia, however, the road is more difficult than it needs to be because the early development and testing supports that accompany university-based EBI development and research are typically not available. Nonetheless, there are promising examples of community- or culturally grounded prevention programs developed outside academia. The Annie E. Casey Foundation funds a group of grantees to encourage and support this pipeline. The funding portfolio, called Expanding Evidence, supports and builds evidence for promising community- and culturally grounded prevention programs developed by people of color for people of color. One culturally grounded prevention program that the Casey Foundation supports, Latinos in Action, offers an asset-based approach to bridging the graduation and opportunity gap for Latino students, working from within the educational system to create positive change (https://latinosinaction.org). The program operates as a year-long elective course taught by a highly qualified teacher at the middle school, junior high, and high school levels. The program has expanded to eight states and demonstrated results: participants complete post-secondary education and earn a livable wage at higher rates than the national average. Although the call to action for equitable implementation is still in the early days of consensus building, we are hopeful that one aspect of this new movement will include practice-research partnerships to build evidence for the development and testing of community- or culturally grounded prevention programs.

Public Awareness and Support for EBIs as a Scale-Up Factor

The MAPS IV authors also considered online registries and clearinghouse lists of evidence-based programs as possible pathways to raise public awareness of EBIs. These searchable databases also provide essential resources for public system leaders in their efforts to identify EBIs that address behavioral health outcomes relevant to their consumer populations. Searching and deciphering the peer-reviewed literature to discern which interventions qualify as “evidence-based” for which target outcomes is a time-consuming and daunting endeavor, and it can be particularly overwhelming for non-researchers, community members, and public system leaders. As such, registries and clearinghouses provide a significant community service. When presented in visually engaging, easy-to-access formats, these registries streamline the search and filtering process to facilitate identification of EBIs that meet specific public system or community needs.

The California Evidence-based Clearinghouse (CEBC) is a good example of an EBI clearinghouse that focuses exclusively on one public system service delivery area—child welfare (cebc4cw.org). Although the CEBC is likely the “go to” searchable database for child welfare professionals, the site also offers information guides for diverse audiences interested in EBIs specific to child welfare settings. Each program on CEBC receives a Scientific Rating of 1–5 (with 1 being Well-Supported and 5 being a Concerning Practice) on outcomes relative to the topic area criteria. Programs also receive a Child Welfare Relevance level of high, medium, or low. It is noteworthy that a program on CEBC can receive a high level of Child Welfare Relevance, even if that program is not yet able to receive any category of Scientific Rating based on the peer-reviewed research literature. Trauma Systems Therapy (TST) is such an example. TST was developed for and implemented with child welfare populations and received a CEBC Child Welfare Relevance rating of High; however, TST does not yet have an evaluation with evidence rigorous enough for a CEBC Scientific Rating (Bartlett and Rushovich 2018; Redd et al. 2017).

Importantly, CEBC also makes a Cultural Resources Reference List available with citations and abstracts from articles published in the peer-reviewed literature about culture as it relates to evidence-based practice. This includes articles covering issues related to cultural adaptation, effectiveness of EBPs in cultural minority groups, and the engagement and retention of cultural minority groups in EBPs. The list is divided into two sections: the first examines culture and evidence-based practice in general, and the second focuses on culture and specific evidence-based interventions. Perhaps researchers and funders interested in building the supply of well-supported, culturally grounded EBIs available to child welfare-involved families will take note of evaluation gaps for these programs. This opportunity seems particularly relevant given the reach and influence of FFPSA in preventive child welfare services.

In addition, the Blueprints for Healthy Youth Development clearinghouse includes EBIs across a broad range of outcomes, including violence, delinquency, drug use, improved mental or physical health, and educational achievement. Based on a rigorous review of the peer-reviewed research literature, Blueprints certifies programs at three levels: Model Plus, Model, and Promising. A unique feature of the Blueprints searchable database is EBI selection based on youth outcomes, risks, and protective factors that can be measured at the population-level through administration of youth surveys. Two youth surveys that align with the Blueprints searchable database filters are the CTC© Youth Survey and the Youth Experience Survey (YES). As part of the larger CTC operating system, school-based administration and analysis of the CTC youth survey provides a population-level diagnostic to guide community-based decision-making about EBIs that align with priority youth outcomes, risks, and protective factors in a community. The Social Development Research Group (SDRG) at the University of Washington provides training and implementation support for communities to install the CTC operating system.

Similarly, the Youth Experience Survey (YES) is part of the Casey Foundation’s Evidence2Success® initiative. Similar to the CTC survey, the YES provides a population-level diagnostic of youth outcomes, risk, and protective factors for youth in a community. Aggregate youth outcomes from the survey are reported in five categories of lived experience: school, community, family, peers, and individual. These categories directly map onto the search filters for locating EBIs on the Blueprints registry. The Evidence2Success prevention planning process brings together public systems, community-based service provider organizations and community stakeholders to partner in data-driven decision-making to choose EBIs that align with their own community-based priority youth outcomes, risks, and protective factors. One objective of Evidence2Success is to lift up community voice and lived experience as an equal part of the prevention planning process, in partnership with public systems. The YES is made available as an open source “stand-alone” tool by the Foundation.

One important EBI registry currently under development is the Title IV-E Prevention Services Clearinghouse, supported by the Administration for Children and Families and coordinated by Abt Associates. EBIs endorsed by the clearinghouse will qualify for partial reimbursement with Family First Prevention Services Act (FFPSA) funds when they are part of a child welfare system’s official FFPSA plan. The Title IV-E Prevention Services Clearinghouse (PSC) is likely an “EBI awareness tool” for child welfare system leaders rather than a general public awareness tool. As such, it touches on at least three scale-up factors identified in the MAPS IV work: policy and funding, public awareness, and public system leadership.

CEBC and Blueprints include some information about dissemination readiness, such as the availability of implementation supports and fidelity tools and contact information for EBI-specific training, and we anticipate the same will be available on the Title IV-E Prevention Services Clearinghouse. We encourage these EBI registries to consider “at a glance” readiness graphics to highlight each EBI’s level of dissemination readiness, although at this time the availability of dissemination readiness materials is spotty: a recent review by the CEBC found that over 30% of programs rated on its site did not have fidelity measures and about 40% did not offer any implementation support beyond the program manual. Regardless, there is an important opportunity for these registries to move beyond operationalizing implementation readiness in terms of traditional manuals and on-site training toward assessing readiness for scale. For example, including the availability of EBI-specific training innovations, such as web-based operating systems and precision coaching tools made available to end users at low cost, would add an important dimension to public system leader and community coalition decision-making about adopting an EBI based on its readiness for scale-up. Updating these levels of dissemination readiness at least annually, based on new EBI-specific information, would also be an important public service. The MAPS IV Task Force recommended development of a new framework for scale-up; perhaps consideration of these innovative readiness tools can be included in that framework.

Community Engagement and Capacity as a Scale-Up Factor

As we mentioned in the previous section, Communities that Care© and Evidence2Success are two approaches that use population-level youth survey data as part of a larger community engagement strategy for data-driven decision-making in choosing EBIs that align with a population-level diagnostic of youth in the community. Theoretically, increasing the availability of EBIs is embedded within the logic models of Evidence2Success, PROSPER, and CTC (Chilenski et al. 2019a; Welsh et al. 2016; Brown et al. 2011). Studies of the CTC and PROSPER models indicated decreased youth problem behaviors as an outcome of structured community engagement in planning for and implementing EBIs (Hawkins et al. 2012; Spoth et al. 2015). Furthermore, some research showed stronger positive effects of CTC when CTC coalitions implemented EBIs (Chilenski et al. 2019b). These types of community partnerships build a sense of ownership and commitment to success within communities.

We note that EBI-specific implementation teams are another meaningful pathway to community engagement and scale-up. EBI implementation teams can be part of a larger structured decision-making process like CTC, Evidence2Success, or PROSPER, or they can stand alone. In either instance, diverse membership on EBI implementation teams of eight to ten members is key to leveraging multiple perspectives on EBI implementation as well as opportunities to enhance contextual fit through adaptation. The MAPS IV authors also discussed the utility of co-designing new EBIs and adapting existing EBIs as pathways to greater community engagement.

The work of the Children’s Youth Cabinet (CYC) (cycprovidence.org) in Rhode Island is a good example of an intermediary engaged in multiple pathways to community engagement: (1) a structured decision-making process for community engagement, (2) prioritization of culturally grounded EBI fit with local context during EBI selection, (3) use of EBI-specific implementation teams that include members with relevant lived experience, (4) an EBI workforce that reflects the racial/ethnic demographics of program participants, and (5) a youth-guided adaptation of an existing EBI based on local culture and context. The CYC is an intermediary supporting EBIs in Rhode Island selected as part of the Casey Foundation’s Evidence2Success initiative. The CYC delivered “backbone” convening and advocacy during 3 years of the Evidence2Success initiative and continues to provide EBI-specific technical assistance for EBIs chosen as part of that prevention planning process, including CBITS (Cognitive Behavioral Intervention for Trauma in Schools), Strong African American Families, Familias Unidas, and Positive Action.

For example, the CYC co-designed and supports implementation of a youth-guided adaptation of the original CBITS model. The adaptation includes theater and performance artists of color (from the same neighborhoods as the students in group) who support the clinical facilitation of experiential “telling our story” content. The CYC exclusively engages clinicians of color with relevant lived experience as CBITS practitioners. This adaptation responded to pre/post youth engagement survey data from initial CBITS cohorts in Providence middle schools, which indicated that more experiential activities and less didactic content were needed to engage and retain students and, ultimately, achieve desired outcomes. The CYC adaptation of CBITS included conversations with the intervention developers at RAND and the purveyor, and it secured a SAMHSA National Child Traumatic Stress Network grant to scale-up the adapted version of CBITS across all middle schools in Providence as well as several new counties in Rhode Island.

Public System Leadership and Support for EBIs as a Scale-Up Factor

Building the capacity of public system leaders to advocate for and support scale-up is another important pathway to broad-scale implementation of EBIs. The Active Implementation Framework calls attention to capacity drivers at this level that are particularly relevant to initiating and sustaining EBI implementation (https://nirn.fpg.unc.edu/module-1/implementation-drivers; Fixsen et al. 2013). Given the brief average tenure of public system leaders nationally (Beitsch et al. 2006), it seems practical to focus implementation capacity-building on supervisors and senior leadership within a public system to support and sustain EBI scale-up. Developing EBI champions within a public system who are not appointees is a crucial “inside” strategy for continuity and sustainability of EBI scale-up over time.

Many public systems, such as child welfare, contract with community-based providers to implement specific EBIs. As a result, we note that attention must also be paid to building the broader capacity of public system supervisors to coordinate coaching for their staff to engage in the practices required to support community-based EBI implementation. These “inside” public system practices move beyond EBI-specific training and coaching. A webinar series hosted by the Annie E. Casey Foundation and the W.T. Grant Foundation provides a good example relevant to broader evidence-informed practices within child welfare that can support EBI sustainability. Leading with Evidence: Informing Practice with Research explores tools and resources for building effective partnerships between researchers and child welfare practitioners to increase the effective uptake and sustainability of evidence-based programs in child welfare. Hundreds of child welfare practitioners, supervisors, leaders, and researchers have participated in this 2018/2019 series (aecf.org/search?title=LeadingWithEvidence&fq[]=report_series_id:218).

A Skilled EBI Workforce as a Scale-Up Factor

Extending the preceding section, public system leaders are responsible for coordinating training to ensure staff and external providers are prepared to support and/or deliver selected EBIs. Public systems and community-based service providers face significant challenges in achieving and maintaining a sufficiently well-trained frontline provider workforce. The MAPS IV authors refer to SAMHSA data indicating “96% of state mental health agencies rated staff readiness as ‘sometimes’ or ‘always’ a barrier to EBI implementation” (Fagan et al. 2019, p. 33). Pre-implementation EBI training is typically delivered by EBI developers or purveyors in person, over one or more days, at the implementation site(s) or at the EBI purveyor site. While training is important to prepare providers for implementation of an EBI, the high level of turnover among public system frontline staff and community-based providers undermines the effect of the traditional in-person pre-implementation training model. As a result, a public system’s ability to maintain competency in EBI implementation across the workforce over time is compromised. Further, while initial implementation often begins with attention to fidelity and adherence to protocol, providers may inadvertently modify or omit EBI procedures over time. This “drift” in service delivery is a significant threat to producing desired outcomes (McHugh and Barlow 2010).

There are multiple implementation support platforms that can develop, co-design, and/or host EBI workforce readiness and coaching tools. We encourage developers, purveyors, intermediaries, and prevention science researchers to investigate the efficacy of these tools in maintaining a skilled provider workforce and decreasing the likelihood of service drift. At their most basic, web-based e-learning systems can provide training for communities of practice, EBI-specific certification courses, and access to ongoing implementation supports (e.g., video- and text-based tips) for continuous quality improvement (CQI). E-training can take three forms of support for EBI service providers. First, the fully independent e-learning model allows users to engage in training at their own pace without any in-person training components; this model would be selected if, for example, the goal is to replace in-person training workshops with an online version (e.g., to address staff turnover). Second, online systems can be used in a preparation for future learning model (Belenky and Nokes-Malach 2012), whereby users complete e-training modules prior to participating in in-person training. The online modules provide background didactic information and foundational understanding that set the stage for more intensive in-person training and practice. This model can maximize the impact of the in-person training experience, as trainees arrive with a solid foundation and fully ready to learn and participate, while also decreasing time and expenses for trainees. And third, online systems can be used in a post-training e-learning model, whereby learners attend a pre-implementation in-person training event and online resources are then used to reinforce and extend learning (e.g., booster lessons), provide ongoing support and practice in learned techniques (e.g., virtual simulation exercises), and facilitate access to training materials (e.g., a searchable resource center). This model is particularly useful for sustaining learning gains from the in-person training event and promoting effective real-world application of learned concepts.

The research on web-based training models is growing, including findings supporting equivalency in EBI uptake and knowledge gains resulting from online-only compared to face-to-face training (DeRosier et al. 2011; Dimeff et al. 2009; Stein et al. 2015) as well as the added value of combining in-person training with post-training online resources for enhanced professional development (DeRosier et al. 2011). Further, recent innovative e-learning platforms have the capacity to extend beyond passive online resources (where users must initiate use) to precision coaching tools whereby the system initiates learning opportunities to support high-quality implementation. For example, when EBI fidelity is operationalized in actionable ways, web-based systems can deploy alerts to the end-user practitioner as fidelity challenges emerge and provide links to specific implementation support resources (e.g., video demonstration) to address that challenge. Online decision-support tools can also be triggered by the system based on emergent client or family needs—as reported by clients and families—to support EBI practitioners in next session planning.

An example of this type of decision-support system is a collaboration between the Johns Hopkins Center for American Indian Health (JHCAIH) and Care4. JHCAIH and Care4 have partnered to customize the Care4 (http://care4soft.net) web-based implementation support platform for JHCAIH’s Family Spirit evidence-based indigenous maternal home visiting program (Barlow et al. 2015; Barlow et al. 2013; Mullany et al. 2012; Sexton et al. 2012), with funding support from the Casey Foundation. Rapid cycle assessment for emergent behavioral health needs in the family will be administered and monitored through the Care4 platform. Curriculum algorithms developed by Family Spirit on the basis of evidence and stakeholder feedback (Haroz et al. 2019) allow for guided next-session planning for home visitors based on real-time data. With continued support from the foundation, the Family Spirit Care4 platform is being piloted in an ongoing implementation trial in four Michigan tribal communities.

Data Monitoring and Evaluation Capacity as a Scale-Up Factor

Real-world data monitoring capacity was identified as a separate scale-up factor by the MAPS IV Task Force. As we have previously discussed, mechanisms for ongoing data collection, analysis, and reporting are certainly an integral part of EBI-specific readiness for scale. This capacity can be supported by EBI developers, purveyors, intermediaries, or a partnership among the three. As noted by the MAPS IV authors, better data systems are needed for both implementation and outcome tracking. Certainly, user-friendly methods that streamline EBI data collection, analysis, and real-time data use would support practitioner delivery and program management as well as a public system’s capacity to monitor short- and long-term outcomes at the EBI and population levels.

An example of such a real-world data system is 3C Institute’s IMPACT implementation support platform which has been iteratively developed, tested, and refined with diverse end users in school and community service settings over the past decade with funding from NIMH and NIDA (3cisd.com/implementation-support). The overarching goal of these federal funds is to lower the financial and time costs for creating a software data system for an EBI. Rather than the extremely expensive and time-consuming process needed to develop software from scratch, IMPACT is built in a modular fashion with a base technology scaffold for data collection, analysis, and reporting functions and built-in configuration tools for customization to the specific data needs of a given EBI. 3C’s experienced EBI translators work closely with EBI developers, purveyors, and intermediaries to translate existing measures and procedures into the IMPACT data system for assessing and tracking fidelity, progress, and outcomes data. To date, IMPACT has been applied to and customized for 10 different EBIs, including CBITS, Strong African American Families, and Achievement Mentoring. Funding of these translations has come from multiple sources (depending on the EBI), including federal grant funds, support from the Casey Foundation, and self-funding by the EBI developer/purveyor.

Initial pilot testing indicates that use of IMPACT results in significant improvement over time in EBI providers’ skilled use of data, including their ability to use data to make decisions regarding participants and their ability to interpret scores from assessments (DeRosier 2019a). Use of IMPACT was also rated as significantly more feasible, usable, and valuable than current methods (e.g., paper-and-pencil data collection, Excel spreadsheets) and resulted in significant gains in providers’ commitment to data collection and tracking tasks as part of their everyday service delivery (DeRosier 2019a). Further, among the many lessons learned during the course of translating multiple EBIs’ measures and methods into the IMPACT system, we would stress the need to (a) utilize data collection methods that respect the EBI provider’s time and workflow needs, (b) ensure data entry results in timely, meaningful, and actionable information for providers, and (c) use data to drive provision of contextually relevant and timely CQI to effectively support providers in their delivery of the EBI (DeRosier 2019b).

We also stress that interoperability with existing data systems is an important feature for customizable EBI web-based implementation support platforms to avoid duplicative data entry for end users. We encourage prevention scientists to investigate the use of these customized implementation support platforms to better understand mechanisms for end-user uptake to avoid the “if we build it they will come” conundrum. And we note that ideally data could also be combined across EBIs and across time to enable analysis of outcomes for youth and families across a community and across systems as part of larger coordinated prevention initiatives.

Leveraging Emerging Opportunities and Intersectionalities

Recent developments in customizable web-based platforms hold great promise for embedding data collection into the daily practice of service delivery for EBIs as well as building evidence for promising programs. Unfortunately, there are too many promising and even evidence-based programs that do not currently benefit from these technological advances. Although some national “model plus” EBIs make their proprietary data support platforms available to end-user service providers, this type of data collection and monitoring tool is not widely available for most EBIs. For the majority of EBI developers, purveyors, and intermediaries, building such a custom software application is cost- and time-prohibitive. As a result, EBIs too often lack the data monitoring, CQI, and evaluation capacities to support scale-up, much less scale-out (Aarons et al. 2017). This lack of equitable access and data sovereignty is particularly troubling for culturally grounded and promising indigenous programs looking to build evidence for their programs in order to access federal funding, such as the Family First Prevention Services Act (FFPSA). In these instances, lack of data sovereignty for culturally grounded EBIs prevents credible messengers from telling their own story.

Data system development work is particularly catalytic when constructs such as EBI fidelity are operationalized in meaningful and actionable ways. User-friendly, feasible data entry for EBI practitioners translates into data-informed real-time implementation support. Actionable implementation data can stimulate CQI by EBI developers, purveyors, intermediaries, or agency supervisors. As we noted in the Workforce Capacity section, at their most innovative, implementation support platforms leverage multiple perspectives to activate web-based precision coaching tools matched to the needs of the local EBI practitioner as well as the service recipient. These data generate participant-centered evidence for pathways to EBI scale-up. When participants enter their data via an online platform, accessibility is maximized. Much of this real-time data can also inform outcome evaluations, moving the field beyond administrative data as the only outcome indicator. Certainly, tracking administrative outcomes provides accountability for achieving target outcomes and developing cost-benefit analyses. Tracking correlated variables like youth engagement, caregiver engagement, sense of self-efficacy, and trauma and healing, to name just a few, provides important information about mechanisms for change.

It is noteworthy that these implementation support systems can be further extended to support scale-up at the macro level, such as state-wide initiatives, by customizing and standardizing outcome evaluation methods across supported EBIs. For example, Children’s Trust of South Carolina has adopted IMPACT, 3C Institute’s data collection and reporting platform, to support implementation of its affiliate programs, including Triple P (Positive Parenting Program) and the Maternal, Infant, and Early Childhood Home Visiting Program (MIECHV). Children’s Trust will also be using IMPACT to improve the quality of the workforce training it provides to organizations and agencies that work with children and families.

We suggest that support for the customization of multiple web-based implementation support systems for EBIs and promising prevention programs can be accomplished through various configurations of public-private partnerships between EBI purveyors, public systems at the federal and state levels, researchers, and philanthropy. Development and testing of these platforms and tools have the potential to build infrastructure to address several of the MAPS IV scale-up factors, including EBI readiness for scale-up, a skilled workforce, and data monitoring and evaluation capacity. We encourage development of “at a glance” visually engaging outward-facing data dashboards that can be generated by these data systems as a pathway to increased community engagement and public support for EBIs.

The Quest for EBI Scale-Up and Improvements in Population-Level Well-Being

The MAPS IV Task Force authors cited Patrick McCarthy, former President of the Annie E. Casey Foundation, who noted that “a bad system will trump a good program every time” (2015, p. 13). We have seen this play out time and time again in multiple public systems. The MAPS IV authors considered lessons learned from more than a decade of public system efforts to implement EBIs. The results provide a closer look at the “bad system.” The MAPS IV Task Force highlighted forces both within and external to public systems that can create an inhospitable ecosystem for scaling-up EBIs. These forces are bigger than any one public system. The common set of factors identified by the MAPS IV Task Force that facilitate EBI scale-up and the recommended action steps provide insights into how to move the field forward. These common elements and actions have the potential to build out the infrastructure scaffolding required to support the sustainable scale-up of EBIs across public systems. Realizing that potential, however, will require prevention and implementation scientists to embrace an unwavering commitment to community-practice-research partnerships that investigate real-world implementation and the practical usability of tools that contribute to EBI scale-up.

We conclude with four extension recommendations in support of ongoing efforts to successfully scale EBIs in public systems and improve behavioral health for youth and families at a population level. First, we recommend a commitment to equitable implementation as a pathway to successful scale-up. Equitable implementation includes “explicit attention to the culture, history, values and needs of the community... integrated into the principles and tools of... effective programs for a specific community or group of communities” (DuMont et al. 2019). Equitable implementation partnerships between community stakeholders and residents, policy makers, practitioners, and scientists are integral to promoting public support for EBIs and sustainability of EBIs within public systems. The MAPS IV authors recommended the development and evaluation of new frameworks that foster EBI scale-up (Fagan et al. 2019). Although we are hesitant to endorse development of even more implementation frameworks than already exist, we simply request that any new frameworks for EBI scale-up include capacity drivers relevant to the five equitable implementation actions recommended by a group of researchers, practitioners, and funders at Academy Health’s recent conference (DuMont et al. 2019). The same applies to reconsidering existing implementation frameworks relative to EBI scale-up.

As a part of this recommendation, we also encourage building evidence for community- or culturally grounded promising programs as pathways to health equity. An important part of this process will likely include creation of youth- and/or community-guided design of well-being and engagement measures that are relevant to the lived experience of people of color and indigenous communities. We have been measuring the wrong outcomes for too long. When success of a program is measured only by what the system does to children, youth, and families, we miss important mechanisms for change that can persist over time, circumstances, and even multiple EBIs.

Second, the ecological model presented by the MAPS IV authors represents EBI Data Monitoring and Evaluation Capacity as a separate factor on a par with the other five factors at the mezzo level. We recommend broadening this conceptualization beyond EBI-specific data. Data collection and analysis would be instrumental for supporting and furthering each of the mezzo factors. For example, a needs assessment process at the community and system levels would facilitate selection of proven-effective EBIs and practices that align with the needs of youth and families in the community. In addition, collecting and reporting data regarding the impact of broader public system initiatives on youth and family well-being has the potential to increase public awareness and support for broader uptake of EBIs. Such evidence could be leveraged to support efforts to increase statutory endorsement and funding for EBIs on a broad scale in public systems.

Third, we ask the prevention science community to consider the value of developing innovative decision-support tools to guide case planning and EBI-referrals to community-based providers by public systems. This would be particularly helpful if the decision-support tools could be used in multiple public systems. To be clear, we are not suggesting a new risk or safety assessment; rather, we suggest a strengths-based assessment approach guided by youth voice of lived experience to inform case planning and frontline worker EBI-referrals. Perhaps there is merit to adapting or building out from the ecological approach of the Youth Experience Survey or other similar population-level surveys to generate an individual snapshot of underlying risk and protective factors experienced by a youth in school, with peers, with family, and in the community. When reported from the perspective of youth, these underlying factors can serve as guideposts to inform case planning and EBI-referral. Such decision-support tools may have the potential to move public systems beyond a pathology-only approach toward more healing-centered engagement of youth (Ginwright 2018). We further suggest that innovative tools and technologies have the potential to facilitate administration and real-time availability of assessment results to inform timely case planning. For example, online and app-based animations and voiced technologies can be tailored to developmentally appropriate versions of the strengths-based assessment and also accommodate various levels of literacy and multiple languages.

Finally, we would like to encourage EBI developers and purveyors to operationalize fidelity using constructs actionable in real-time, real-world practice contexts. Too often fidelity measures are difficult to quantify and yield poorly articulated findings. We say this, respectively, as a foundation program officer who supports the customization of a variety of these platforms and a developer with experience constructing and customizing a platform for multiple EBIs. We have found the process of operationalizing fidelity measurement for a software platform often provides opportunities to consider the use of meaningful data not previously considered obtainable by EBI developers. This can move the field beyond dosage counts and practitioner self-report to operationalize actionable fidelity measures. For example, Strong African American Families (SAAF) recently completed customization of 3C Institute’s IMPACT implementation support platform with funding provided by the Casey Foundation. When SAAF developers realized that youth- and caregiver-centered evidence could more easily be captured through the IMPACT platform, they added youth and caregiver engagement measures to triangulate the evidence for session fidelity. We also encourage the use of EBI-specific apps to maximize participant access to and response rates for these measures. Valuing participant-centered evidence has the potential to enrich the feedback loop between implementation practice and implementation research as we consider pathways to effective implementation and scale-up. We also note that emerging speech recognition and machine learning technology developed by Lyssn (https://lyssn.io) is currently being used for automated fidelity feedback with Motivational Interviewing (Atkins et al. 2014; Tanana et al. 2016) in several sites nationally.
This technology holds great promise for enriching our understanding of fidelity and supporting timely sustainable responses to improving fidelity through precision coaching.

In conclusion, we applaud the leaders of the MAPS IV Task Force for conducting a thorough review of the literature and prioritizing factors that facilitate or impede the quality implementation and scale-up of EBIs within or across public systems. We encourage the broader group of prevention scientists, including EBI developers, purveyors, and intermediaries, to reflect on current readiness gaps in their dissemination strategies for scale-up and develop innovative tools to bridge those gaps. Similarly, funders might further consider how they can support the development and testing of such innovations. It is only through partnerships committed to a shared vision for behavioral health equity in the USA that we will begin to move the needle when it comes to population-level well-being for children, youth, and families.

Footnotes

  1.

    We note that “drift” is not the same as adaptation, particularly when the latter is based on community or cultural values. Adaptation is intentional, while “drift” occurs from lack of attention to real-time data that informs continuous quality improvement by EBI developers, purveyors, or intermediaries to ensure consistent and competent EBI delivery.

Notes

Funding Information

No external funding supported this work. The Annie E. Casey Foundation supports the customization of multiple web-based implementation platforms for programs because of our long commitment to scaling-up programs and practices that improve population-level well-being for children, youth and families. The Foundation does not endorse any one product.

Compliance and Ethical Standards

Disclaimer

The views and opinions expressed in this report are those of the authors and should not be construed to represent the views of the Annie E. Casey Foundation or any of the sponsoring organizations, agencies, or the U.S. government.

Conflict of Interest

The authors report no conflicts of interest. Dr. DeRosier is CEO of 3C Institute, which developed and disseminates the IMPACT implementation support platform.

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent

Because this article is a commentary, informed consent is not applicable.

References

  1. Aarons, G. A., Sklar, M., Mustanski, B., Benbow, N., & Brown, C. H. (2017). “Scaling-out” evidence-based interventions to new populations or new health care delivery systems. Implementation Science, 12, 111. https://implementationscience.biomedcentral.com/articles/10.1186/s13012-017-0640-6.
  2. Atkins, D. C., Steyvers, M., Imel, Z. E., & Smyth, P. (2014). Scaling up the evaluation of psychotherapy: Evaluating motivational interviewing fidelity via statistical text classification. Implementation Science, 9, 49.
  3. Barlow, A., Mullany, B., Neault, N., et al. (2013). Effect of a paraprofessional home-visiting intervention on American Indian teen mothers’ and infants’ behavioral risks: A randomized controlled trial. American Journal of Psychiatry, 170, 83–93.
  4. Barlow, A., Mullany, B., Neault, N., et al. (2015). Paraprofessional-delivered home-visiting intervention for American Indian teen mothers and children: Three-year outcomes from a randomized controlled trial. American Journal of Psychiatry, 172, 154–162.
  5. Bartlett, J. D., & Rushovich, B. (2018). Implementation of trauma systems therapy-foster care in child welfare. Children and Youth Services Review, 91, 30–38. https://doi.org/10.1016/j.childyouth.2018.05.021.
  6. Beitsch, L. M., Brooks, R. G., Grigg, M., & Menachemi, N. (2006). Structure and functions of state public health agencies. American Journal of Public Health, 96, 167–172.
  7. Belenky, D. M., & Nokes-Malach, T. J. (2012). Motivation and transfer: The role of mastery-approach goals in preparation for future learning. Journal of the Learning Sciences, 21, 399–432.
  8. Brown, E. C., Hawkins, J. D., Arthur, M. W., Briney, J. S., & Fagan, A. A. (2011). Prevention service system transformation using Communities That Care. Journal of Community Psychology, 39, 183–201. https://doi.org/10.1002/jcop.20426.
  9. Catalano, R. F., Fagan, A. A., Gavin, L. E., Greenberg, M. T., Irwin, J., Charles, E., Ross, D. A., & Shek, D. T. L. (2012). Worldwide application of prevention science in adolescent health. Lancet, 379, 1653–1664.
  10. Chilenski, S. M., Gayles, J. G., Luneke, A., Schmidt, D., Penilla, M. L., Lin, S., & Lew, D. (2019a). Evidence2Success: 2018 cross-site project progress report. Submitted to Annie E. Casey Foundation’s Evidence2Success project team.
  11. Chilenski, S. M., Frank, J., Summers, N., & Lew, D. (2019b). Public health benefits 16 years after a statewide policy change: Communities That Care in Pennsylvania. Prevention Science, 20, 947–958. https://doi.org/10.1007/s11121-019-01028-y.
  12. DeRosier, M. E. (2019a). Getting the results we want—how to choose the right data supports within a complexity-informed approach. Master Class presented at the Global Implementation Conference, Glasgow, Scotland, September.
  13. DeRosier, M. E. (2019b). Three critical elements for real-time monitoring of implementation and adaptation of prevention programs. Journal of Primary Prevention, 40, 129–135.
  14. DeRosier, M. E., McMillen, J., Ornstein-Davis, N., Kameny, R., & Hoffend, C. (2011). Tools to support career advancement of diverse social, behavioral, and mental health researchers: Comparison of in-person and online training delivery modes. Journal of Online Learning and Teaching, 7, 43–56.
  15. Dimeff, L. A., Koerner, K., Woodcock, E. A., Beadnell, B., Brown, M. Z., Skutch, J. M., Paves, A. P., Bazinet, A., & Harned, M. S. (2009). Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy, 47, 921–930.
  16. DuMont, K., Metz, A., & Woo, B. (2019). Five recommendations for how implementation science can better advance equity [Blog post]. Retrieved from academyhealth.org/blog/2019-04/five-recommendations-how-implementation-science-can-better-advance-equity.
  17. Fagan, A. A., Bumbarger, B. K., Barth, R. P., Bradshaw, C. P., Cooper, B. R., Supplee, L. H., & Walker, D. K. (2019). Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: Challenges and opportunities. Prevention Science.
  18. Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013). Statewide implementation of evidence-based programs. Exceptional Children, 79, 213–239.
  19. Ginwright, S. (2018). The future of healing: Shifting from trauma-informed care to healing centered engagement. Occasional Paper #25. Retrieved from http://kinshipcarersvictoria.org/wp-content/uploads/2018/08/OP-Ginwright-S-2018-Future-of-healing-care.pdf.
  20. Haroz, E. E., Ingalls, A., Kee, C., Goklish, N., Neault, N., Begay, M., & Barlow, A. (2019). Informing precision home-visiting: Identifying meaningful subgroups of families who benefit most from Family Spirit. Prevention Science. https://doi.org/10.1007/s11121-019-01039-9.
  21. Hawkins, J. D., Jenson, J. M., Catalano, R. F., Fraser, M. W., Botvin, G. J., Shapiro, V., & Stone, S. (2015). Unleashing the power of prevention. Discussion paper. Washington, D.C.: Institute of Medicine and National Research Council.
  22. Hawkins, J. D., Oesterle, S., Brown, E. C., Monahan, K. C., Abbott, R. D., Arthur, M. W., & Catalano, R. F. (2012). Sustained decreases in risk exposure and youth problem behaviors after installation of the Communities That Care prevention system in a randomized trial. Archives of Pediatrics and Adolescent Medicine, 166, 141–148.
  23. McCarthy, P. (2015). The road to scale runs through public systems. Stanford Social Innovation Review, 12, 12–13.
  24. McHugh, R. K., & Barlow, D. H. (2010). The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist, 65, 73–84.
  25. Mullany, B., Barlow, A., Neault, N., et al. (2012). The Family Spirit trial for American Indian teen mothers and their children: CBPR rationale, design, methods and baseline characteristics. Prevention Science, 13, 504–518.
  26. Neuhoff, A., Loomis, E., & Ahmed, F. (2017). What’s standing in the way of the spread of evidence-based programs? The Bridgespan Group.
  27. Redd, Z., Malm, K., Moore, K., Murphy, K., & Beltz, M. (2017). KVC’s Bridging the Way Home: An innovative approach to the application of Trauma Systems Therapy in child welfare. Children and Youth Services Review, 76, 170–180. https://doi.org/10.1016/j.childyouth.2017.02.013.
  28. Sexton, T. L., Patterson, T., & Datchi, C. D. (2012). Technological innovations of systematic measurement and clinical feedback: A virtual leap into the future of couple and family psychology. Couple and Family Psychology: Research & Practice, 1, 285–293.
  29. Spoth, R. L., Trudeau, L. S., Redmond, C., Shin, C., Greenberg, M. T., Feinberg, M. E., & Hyun, G.-H. (2015). PROSPER partnership delivery system: Effects on adolescent conduct problem behavior outcomes through 6.5 years past baseline. Journal of Adolescence, 45, 44–55.
  30. Stein, B. D., Celedonia, K. L., Swartz, H. A., DeRosier, M. E., Sobero, M., Brindley, R. A., Burns, R. M., Dick, A. W., & Frank, E. (2015). Can community clinicians learn an evidence-based psychotherapy on the web? Pilot study of an e-learning training and implementation intervention. Psychiatric Services, 998–991.
  31. Tanana, M., Hallgren, K. A., Imel, Z. E., Atkins, D. C., & Srikumar, V. (2016). A comparison of natural language processing methods for automated coding of motivational interviewing. Journal of Substance Abuse Treatment, 65, 43–50.
  32. Welsh, J. A., Chilenski, S. M., Johnson, L., Greenberg, M. T., & Spoth, R. L. (2016). Pathways to sustainability: 8-year follow-up from the PROSPER Project. Journal of Primary Prevention, 37, 263–286. https://doi.org/10.1007/s10935-016-0421-z.

Copyright information

© Society for Prevention Research 2019

Authors and Affiliations

  1. The Annie E. Casey Foundation, Baltimore, USA
  2. 3C Institute, Cary, USA