In “Scaling-up evidence-based interventions in US public systems to prevent behavioral health problems: Challenges and opportunities,” Fagan et al. (2019) addressed an important public health question: how can public systems widely adopt and implement evidence-based interventions (EBIs) to produce population-level impacts on mental, emotional, and behavioral health outcomes? The Mapping Advances in Prevention Science IV (MAPS IV) workgroup, and subsequently the authors, tackled this issue within five public systems: behavioral health, child welfare, education, juvenile justice, and public health. System-specific workgroups were formed to assess the potential for scale-up of EBIs within each distinct system. This process elucidated variations in capacity for scale-up within the public systems and led to the identification of overlapping issues that can impede or facilitate scale-up, providing an opportunity to understand common factors across the systems.

The authors’ use of an ecological model to organize the relationships between the systems and their common factors provides a helpful framework for considering research advances, the need for intervention development, prevention services research, and other strategies to understand and address factors linked to uptake of EBIs. For example, they address the need for strategies to build public awareness of EBIs and to secure buy-in for scale-up from the public and from those working in the public systems. In addition to public awareness and buy-in, the authors identify several other factors needed to support scale-up of EBIs: a skilled EBI workforce, EBI data monitoring and evaluation capacity, public system leadership and support for EBIs, community engagement and capacity, developer and funder capacity to scale EBIs, and statutory endorsement and funding for EBIs. This last area is identified as a cross-cutting factor that influences each of the others.

Major strengths of the article include the level of detail provided for each common factor, as well as the compilation, in one place for the field of prevention science, of information on the structure of the five public systems and the primary outcomes each system targets. Additionally, their discussion of how funding influences adoption of EBIs, through the provision of resources needed for scale-up, gives prevention scientists a roadmap for how current funding sources can be used to promote and potentially sustain the use of EBIs in public systems. The authors also outline a set of recommendations for scaling up EBIs in US public systems focused on public policy and funding, research and evaluation, and community support and partnerships. It is here that we offer a few additional points for consideration, focused on issues of sustainability, ways of thinking about knowledge creation, and possibilities for using systems science/modeling to address scale-up in public systems and to address health disparities.

Sustainability

The authors do an excellent job of (1) making the case for why scaling up EBIs can have a dramatic and positive impact on public health, (2) describing the factors that influence the feasibility of scaling up EBIs, and (3) suggesting that the five public systems they identify are appropriate contexts or infrastructures in which to scale up EBIs for population-level impact. Although it was not the focus of their paper, it is important to consider, in addition to the factors that influence the feasibility of scaling up EBIs, the factors that influence the sustainability of EBIs within systems. The authors mention sustainability in their “call for increased efforts to provide access to high-quality, sustained delivery (emphasis ours) of EBIs at the level necessary to produce sustained, population-wide improvements in public health and well-being” (Fagan et al. 2019). Clearly, the authors recognize the need not only to scale up and implement an EBI in the first place, but also to maintain that intervention and its effects over time within a particular system. We appreciate this nod to sustainability and offer a few thoughts on ways to improve and ensure the sustainability of EBIs.

As we push ourselves to consider the factors, often referred to as the implementation context (e.g., funding sources), that may improve the sustainability of an EBI within a system, it is important to distinguish what we mean when we talk about the sustainability of EBIs. Factors that influence the sustainability of the intervention itself are those that increase the likelihood that the intervention will continue to be implemented and perhaps become institutionalized within the public system. These might include funding, the ease of incorporating the intervention into the existing infrastructure of the system, continued leadership support, continued public support, user or participant experience with the system, and the degree to which the intervention becomes embedded within the core operations of the system. Another factor to consider is whether a previously used program or intervention was eliminated to “make room” for implementation of the new EBI intended to benefit the community. If so, what successes or achievements of the new EBI must be documented and demonstrated to justify maintaining its implementation over time?

In addition to factors important to the sustainability of the intervention itself, it is also important to think about factors that influence the sustainability of the intervention’s impact. EBIs are evidence-based because they have been shown through rigorous research and evaluation to have a positive impact on health. However, when EBIs are scaled up, attenuation of their effects is highly probable (Welsh et al. 2010) when little or no attention is given to the factors that sustain those effects. Continuous evaluation, improvement, and implementation research are needed to ensure that the scaled-up version of an EBI continues to show impact on the intended outcomes in the system in which it is implemented. Factors that might influence the sustainability of impact include whether adaptations were made to the necessary or required components, content, or delivery method of the intervention; whether the population experiencing the intervention is similar to the population with whom the intervention was originally tested (Glasgow et al. 1999); and, perhaps, whether previous success of the intervention changes or diminishes its capacity for further sustained impact (Scheirer and Dearing 2011; Glasgow et al. 2012). Certainly, we might all hope for an intervention powerful enough to eventually eliminate the health problem it is targeted to address. Continuing to monitor and evaluate the impacts of a scaled-up EBI is critical to monitoring the sustainability of the intervention’s impact.

Clearly, Miller and Shinn’s (2005) critique of the prevention science model for scaling up EBIs, published over a decade ago, is still relevant today. The first threat they identified was failing to consider the community’s or organization’s capacity to implement the intervention. While the field has done more to consider community, organization, and system capacity to implement EBIs, as the authors note, incongruence in values, pro-innovation bias, and a simplistic view of how EBIs are adopted remain threats to sustainability. As described by Miller and Shinn (2005), an EBI can be incongruent with the values that shape the culture, beliefs, and practices of a system (e.g., an EBI focused on health equity in a system that does not view health equity as part of its mission). Some research supports EBIs as more beneficial than existing, routine practices in a system (Lipsey and Landenberger 2006). However, the assumption that EBIs are always more valuable or impactful than existing practices can inadvertently undervalue the expertise and experience of system stakeholders, creating a barrier to sustainability. Finally, it is unrealistic to assume that systems adopt EBIs in a simple fashion and without adaptations. The implementation context matters, and it may be more realistic to expect systems to adopt the core elements of an EBI than to adopt an EBI “whole cloth” (Fixsen et al. 2009). Successful scaling up of EBIs is likely conditioned on messaging that communicates alignment between the system’s values and the new EBI, promotes evaluation of the EBI’s relative advantage over existing practice, and supports ongoing evaluation to build knowledge of an EBI’s core elements across systems.

Knowledge Creation

Whether continuing evaluation is intended to monitor the sustainability of impact, compare an EBI to existing practices, identify the core elements of an EBI, or document implementation best practices, ongoing knowledge creation is essential for closing the research-to-practice loop and sustaining the impact of EBIs. For example, the Centers for Disease Control and Prevention’s (CDC) Workgroup on Translation (Wilson et al. 2011) developed the Knowledge to Action Framework, along with resources and tools to support the public health community’s understanding and use of the Framework and to facilitate widespread adoption of science-based programs, policies, and practices. The Framework and the Planning Tool (CDC 2014) respectively model and describe the pathways through which practice-based evidence feeds back into the production of knowledge via efficacy, effectiveness, implementation, or hybrid studies. For example, in CDC’s intimate partner violence (IPV) prevention program, DELTA FOCUS (Armstead et al. 2017), CDC implemented a data-to-action process to inform its support to funding recipients and required the recipients to implement similar processes to build practice-based evidence (Armstead et al. 2018). A data-to-action process involves documenting the use of performance monitoring and program evaluation data to make ongoing, data-informed improvements to program implementation. Recipients used this process to inform programs and policies with wide-scale applicability, including those focused on social determinants of health (Estefan et al. 2019). For example, one coalition created IPV- and teen dating violence-related fact sheets that informed policy recommendations on inclusion and support for LGBTQ students in K-12 schools, addressing multiple educational risk and protective factors. Another coalition created and disseminated messaging on evidence-based IPV prevention that led the state’s governor to include funds in the budget for community-based IPV prevention specifically addressing social determinants of health.

Currently, CDC, in partnership with the funding recipients and the National Resource Center on Domestic Violence, is releasing a series of stories describing what worked and what did not in the implementation of IPV prevention approaches, some of which were EBIs. The stories are designed to inform implementation through practice-based discoveries that help close the research-to-practice loop. Federal agencies and other funders could similarly close this loop by requiring recipients to institute a data-to-action process through which they use ongoing data collection to monitor implementation and evaluation and make data-informed improvements as needed; a minimal sketch of such a check appears below. Ongoing data collection to evaluate interventions generates practice-based lessons about important facilitators of and barriers to implementation, which helps to sustain the impact of EBIs.
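To illustrate the basic logic of a data-to-action check, the following minimal Python sketch compares hypothetical monitoring metrics against pre-set benchmarks and flags where implementation improvements are needed. The metric names and thresholds are our own illustrative assumptions, not measures from DELTA FOCUS or any CDC tool.

```python
# Illustrative data-to-action check: compare ongoing monitoring data against
# benchmarks and flag metrics that call for implementation improvements.
# All metric names and thresholds are hypothetical.

monitoring_data = {
    "sessions_delivered_pct": 0.72,  # share of planned sessions delivered
    "participant_retention": 0.91,   # share of enrollees still participating
    "fidelity_score": 0.64,          # observer-rated adherence to core elements
}

benchmarks = {
    "sessions_delivered_pct": 0.80,
    "participant_retention": 0.85,
    "fidelity_score": 0.75,
}

for metric, observed in monitoring_data.items():
    target = benchmarks[metric]
    status = "on track" if observed >= target else "action needed"
    print(f"{metric}: {observed:.2f} (target {target:.2f}) -> {status}")
```

In practice, an “action needed” flag would trigger the kind of data-informed improvement cycle described above, with the resulting decisions documented as practice-based evidence.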

The co-created knowledge produced by these ongoing evaluation loops between research and practice has the potential not only to improve the translation of research to practice but also to improve the quality of the research itself. This is evident in the successful design, implementation, and interpretation of the American Indian/Alaska Native (AI/AN) Family and Child Experiences Survey (FACES) Study, the first nationally representative study of Region XI Head Start programs run by tribal communities (Barofsky et al. 2018). A collaborative workgroup that included Region XI Head Start Directors, Tribal Early Childhood Researchers, Administration for Children and Families (ACF) federal staff, and the contracted research organization designed the study around jointly identified goals. For example, a priority for AI/AN communities was the creation of a measure of children’s participation in community and cultural activities, such as listening to elders tell stories; participating in traditional ways, such as carving, harvesting, collecting, hunting, and fishing; dancing, singing, or drumming; working on traditional arts and crafts; participating in traditional ceremonies; and playing American Indian or Alaska Native games (Barofsky et al. 2018), which the workgroup developed. This collaborative approach contributed to high rates of family participation in the study (e.g., 81% of parents completed the parent survey; 95% of children were directly assessed) (Bernstein et al. 2018), which, in turn, provided quality information on the psychometric properties of child outcome measures with AI/AN children (Malone et al. 2018). Moreover, when prevention scientists recognize shared ownership of knowledge creation with practice and community partners, those partners report becoming increasingly adept at using and consuming research. Such a process recognizes the expertise and contributions of partners in ways that reflect their actual roles and directly addresses some of the ethical challenges that can arise when scaling up evidence-based prevention practices and programs.

Systems Science/Modeling Approaches

In addition to the numerous valuable research recommendations made by the authors, an additional one to consider is modeling the impact of EBIs scaled up in these public systems, to examine whether scale-up would address the factors that foster it and ultimately improve the behavioral health outcomes targeted by the systems. Systems science involves analytic modeling of complex interconnections to understand the potential impact of introducing change agents or other parameters into a system, problem, or outcome. There are multiple analytic strategies for conducting systems science research, depending on the problem under study, such as system dynamics modeling, agent-based modeling, and network analysis (Mabry and Kaplan 2013). The National Institutes of Health (NIH), CDC, and other federal agencies have supported systems science research and the use of modeling approaches to address a broad range of public health problems. The use of systems science methods to advance questions of dissemination and implementation of policies, programs, and practices has also been recommended (Burke et al. 2015; Northridge and Metcalf 2016).
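To give a concrete flavor of the system dynamics strategy, the Python sketch below integrates a two-stock model in which EBI coverage reduces the rate at which an at-risk population develops a behavioral health problem. It is a minimal illustration under stated assumptions: the starting stocks, flow rates, and the 40% relative risk reduction at full coverage are hypothetical values, not estimates from any study cited here.

```python
# Minimal system dynamics sketch: two stocks (at-risk and affected) linked by
# incidence and recovery flows, with EBI coverage reducing incidence.
# All parameter values are illustrative assumptions.

def simulate(ebi_coverage, years=25, dt=0.25):
    """Euler-integrate the stock-and-flow model; return the final affected stock."""
    at_risk, affected = 90_000.0, 10_000.0  # assumed starting populations
    base_incidence = 0.05   # annual rate of moving from at-risk to affected
    ebi_effect = 0.40       # assumed relative risk reduction at full coverage
    recovery = 0.10         # annual rate of moving from affected to at-risk

    for _ in range(int(years / dt)):
        incidence = base_incidence * (1 - ebi_effect * ebi_coverage)
        new_cases = incidence * at_risk * dt
        recovered = recovery * affected * dt
        at_risk += recovered - new_cases
        affected += new_cases - recovered
    return affected

# Compare long-run prevalence under no, partial, and wide scale-up.
for coverage in (0.0, 0.25, 0.75):
    print(f"EBI coverage {coverage:.0%}: "
          f"{simulate(coverage):,.0f} affected after 25 years")
```

Even a toy model like this makes visible the long-horizon, population-level questions that fuller models such as PRISM (discussed below) are built to answer, and it shows where stakeholder input is needed to set plausible parameter values.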

Application of these methods to the scalability of EBIs may help to further specify the factors identified in this paper as important for scale-up in the five public systems and to address the complexity across the multiple levels of the proposed scale-up model. Intervention and implementation theories are available to guide the development of systems science models, as are data from studies of risk and protective factors and from studies of scaling up EBIs at the community level; for example, the authors discuss Communities That Care (Hawkins et al. 2008) and PROSPER (Spoth et al. 2004). Systems science and modeling approaches draw on stakeholder input for model specification as well as interpretation. For example, the Prevention Impacts Simulation Model (PRISM), developed by CDC, was designed to address chronic health conditions (e.g., cardiovascular disease, diabetes, and smoking) from a community perspective, estimating what it would take to implement evidence-based strategies while accounting for local contextual issues and policies related to funding health services (Burke et al. 2015). A simulation of the long-term impacts of implementing clinical and community interventions in communities receiving Community Transformation Grants (CTGs) found that, assuming sustained funding of CTGs, up to 109,000 premature deaths could be averted and up to $8.1 billion in discounted medical costs could be saved over a 25-year period (Yarnoff et al. 2019). Modeling approaches can also address questions about how decisions might affect key populations, such as health disparity populations (McNulty et al. 2019). For example, use of PRISM in the Mississippi Delta identified that increasing access to health care among disadvantaged populations could have an unintended negative effect on the quality of care, because the increased demand could exceed the availability of providers (Burke et al. 2015). The simulation revealed that capacity to deliver care would also need to increase to meet this community need. Thus, application of systems science/modeling methods may help to produce the information needed to inform policies and statutes for EBIs in the five public systems, engage stakeholders, and address the additional factors needed for scale-up within and across the public systems.
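The Mississippi Delta finding reflects a simple balancing feedback that can be sketched in a few lines. In the hypothetical Python sketch below, expanding access grows demand for care; unless provider capacity grows in step, the share of demand that can be adequately met falls. The baselines and growth rates are illustrative assumptions, not values from PRISM.

```python
# Sketch of the access/capacity feedback: more access raises demand for care,
# and quality suffers when provider capacity does not keep pace.
# Baselines and growth rates are hypothetical.

def share_of_demand_met(access, capacity_growth, years=10):
    demand = 1_000.0    # assumed baseline annual visits demanded
    capacity = 1_000.0  # assumed baseline visits the system can supply
    for _ in range(years):
        demand *= 1 + 0.08 * access      # expanded access drives up demand
        capacity *= 1 + capacity_growth  # investment in provider capacity
    return min(1.0, capacity / demand)   # share of demand adequately met

# Expanding access alone degrades the share of demand met; pairing access
# with capacity expansion preserves it.
print(f"access only:          {share_of_demand_met(1.0, 0.00):.2f}")
print(f"access plus capacity: {share_of_demand_met(1.0, 0.08):.2f}")
```

Sketches like this are where stakeholder engagement matters most: community partners and system leaders are the ones who can judge whether a feedback loop and its parameters ring true for their context.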

Considering Social Determinants of Health

We applaud our prevention science colleagues for thinking strategically across public systems toward the goal of scaling up EBIs to improve health and well-being. This type of inquiry, and the understanding it generates, can inform research, practice, and policy, moving the field away from silos and discrete programs toward integrated approaches that could increase impact on mental, emotional, and behavioral health outcomes. One last consideration is the role of social determinants of health in the implementation and sustainment of EBIs in public systems. Successful implementation and sustainability of EBIs at scale in public systems has the potential to increase health equity and decrease health disparities. For example, focusing on outcomes relevant across the five public systems examined by the authors, rather than on specific programs, is consistent with Lewin’s original action research model for addressing social problems (Lewin 1946) and would encourage knowledge building about the type of flexibility and adaptation needed for prevention to have impact across a range of contexts and over time, as society changes. We look forward to future discussion and collaboration between the MAPS IV Task Force and the Society for Prevention Research Disparities-Equity Task Force and hope that such a collaboration will move prevention science toward an increased focus on social determinants of health that is relevant across public systems.