Leadbeater and colleagues (2018) provide a vital service to prevention science in reporting on a methodical effort to identify and describe several integrity challenges that may arise in the implementation of evidence-based prevention programs (EBPs). The paper outlines key conflicts, emphasizes related potential ethical dilemmas, suggests important considerations, and in some cases provides action guidance. The authors note that “this article serves as an educational resource for students, new and experienced investigators and Human Subjects Review Board members…” (p. 7a). As a thoughtful and carefully developed rendering of prominent challenges encountered as prevention scientists work on implementation, the paper locates those challenges within a professional responsibility framework. In particular, the authors emphasize how this type of work may have different requirements and uncertainties than descriptive and efficacy experimental prevention studies. The article is cast primarily from the perspective of encounters that may arise when an intervention developer begins working to spread the use of his/her program, with additional consideration of conflicts that might arise for those engaged to consult on the use of EBPs.

Leadbeater and colleagues provide a good first iteration for prevention science on important considerations in implementation, touching on moral, methodological, and sound-practice challenges. Akin to the typical process in factor-analytic scaling, this first step is based on surveying the domain for key examples and abstracting some likely dimensions. As often occurs in a first iteration of factor scaling, what emerges here seems best suited to suggesting the eventual shape and content organization of ethical guidelines, serving as a valuable direction for revisions that will eventuate in the full “best fit” rendering. My comments are offered in hopes of helping to advance the conversation begun here toward further iterations and the ultimate goal of the work group. Specifically, I highlight three considerations: (1) critical contemplation of the purpose of formulating ethical guidelines and attention to how they are derived, including the extent of need for guidelines specific to prevention (and to implementation within prevention); (2) differentiating ethical dilemmas from professional practice choices and from what may be methodological and technical issues; and (3) some emerging or coinciding issues related to implementation (and perhaps prevention science in general) that might be valuable to incorporate into further discussions.

Need for Specific Ethical Guidelines for Prevention

As noted in the paper, this effort grew out of the success of prevention science. Over the past three decades, we have moved from almost no evidence-based programming to over a hundred documented empirically supported programs that address a variety of problems and include applications for a broad range of populations (National Research Council and Institute of Medicine 2009). As part of that progress, several developers of programs advanced from excellent efficacy trials to systematic translation for implementation in communities, accompanied by growing interest among others in utilizing EBPs. In the past two decades, coordinated efforts to advance from demonstrated efficacy to implementation with fidelity and at scale have been launched by several groups (e.g., Blueprints for Healthy Youth Development 2017), and research testing systems of implementation (e.g., Communities That Care; Hawkins et al. 2009) has provided some guidance. Contributing to this move to implementation has been the growing acceptance of the importance of methodological standards for judging a program as having empirical evidence, even if consensus about those standards still eludes us (Tolan 2014). Because of these advances, prevention programs are now being implemented at a scale that brings new and complex roles and responsibilities, which include potential ethical issues for those engaged in such implementation efforts.

Thus, not surprisingly, the article is organized around challenges encountered by scientists in implementation specifically, rather than in prevention science in general. That focus is explained in part by the assertion that ethical issues arising in descriptive and efficacy prevention research will be addressed by Institutional Review Board standards and the ethical guidelines of the various professional organizations to which prevention scientists belong. Perhaps, though, there is need to delve more into the assertion that available guidance fully addresses descriptive and efficacy studies, and similarly to consider whether the issues Leadbeater et al. (2018) present as primary examples of ethical issues in implementation are specific to that type of work. For example, the dilemmas regarding use of public data sets and linkage to data obtained with informed consent that did not extend explicitly to those linkages apply to many kinds of research beyond implementation, and probably well beyond prevention science.

This focus, and the basis for such a specific focus, also raises, at least for me, the need to critically consider the purpose of ethical guidelines and how they are best constructed: should they arise from specific problems in this particular area of prevention science or, as is more typical, emanate from a profession’s core general values about public responsibility? Leadbeater et al. (2018, p. 10) explain their focus as arising from the contention that “Core ethical principles are interdependent and transcend disciplines.” Focusing on the second part of that contention, it may well not be correct. Ethical principles depend on the basic purpose and values of the organization. For example, the American Medical Association states, “As a member of this profession, a physician must recognize responsibility to patients first and foremost, as well as to society, to other health professionals, and to self.” Nine principles follow from this core value and become the guidance for determining when an ethical dilemma arises and what the appropriate actions are for a physician facing that dilemma (Riddick 2003, p. 9). Similarly, the American Public Health Association (2002) Code of Ethics begins with 12 principles that guide ethical public health practice and are meant to inform decisions about specific ethical conflicts. Our core principles and public responsibilities probably differ from those of other professional groups in some important ways.

In reviewing the collection of examples provided here and the generalizations offered throughout, I certainly saw indicators of a specific set of principles that guide prevention science, many of which overlap with those of adjacent and overlapping organizations and professions. However, the underlying coherence that would tie the specific examples together and back to basic principles seemed still to await articulation. In that lack of specification, a tendency toward situational ethics (i.e., judgments dependent on and primarily applicable to the specific example) and uncertainty about which principles should prevail could be found. Perhaps, not as a fault of this work but in appreciation of it as a first iteration, there is need to step back from these examples and this specific focus on implementation to contemplate and formulate principles that express the core mission of our society and the profession it represents. As stated in the 2016 Society for Prevention Research mission statement (https://www.preventionresearch.org/about-spr/mission-statement/):

The Society for Prevention Research is an organization dedicated to advancing scientific investigation on the etiology and prevention of social, physical and mental health, and academic problems and on the translation of that information to promote health and well-being. The multi-disciplinary membership of SPR is international and includes scientists, practitioners, advocates, administrators, and policy makers who value the conduct and dissemination of prevention science worldwide.

From this statement, it seems that ethical behavior will be that which advances scientific investigation, prevention as a health approach, and translation of empirically proven programs to promote health and well-being. What principles emanate from that intent and purpose? How might those principles affect the cases raised in this report? For example, is there a principle that prevention scientists fully disclose the extent of knowledge about the applicability of a given program to a given population? Or might the principle be that one does not offer prevention programming as an EBP unless there is sound research showing benefits for the population under consideration? These would lead to different stances being ethical when one is asked by a given community for help in implementing prevention programming. Similarly, might there be principles about attention to the conflict of interest that arises when the developer of a program is engaged to speak about preferable EBPs? If the principle is that one must recuse oneself if one is the developer of one of the programs being offered, ethical practice differs from what it would be if the principle is that one must simply disclose all potential conflicts of interest and be guided by objectivity in consultation, rather than explain why one cannot engage.

Differentiating Ethical Dilemmas from Professional Choices and Operational Challenges

This article identifies several issues, each presented as an ethical dilemma, that may be better understood as (1) a matter of professional choice—what one might want to do depending on one’s interests and preferences for practice—or (2) a methodological or operational challenge that arises in implementation research and may or may not occur otherwise. This distinction is important not only to help guide practice but also to identify the best actions in the face of choices or challenges that may be difficult but do not carry the potential onus of ethical transgression. Professional choices are those dilemmas that are, at their core, not a conflict between the public interest and the personal interest of the prevention scientist but a question of what activities and circumstances a professional may want to require for his/her work. For example, I was asked to consult to a juvenile court on which of several EBPs related to delinquency prevention should be implemented. The administrative judge, well studied in regard to the programs meeting EBP standards, expressed concern about the applicability of the programs to the population of his court: whether the studies comprising the evidence base focused on youth of the ethnic backgrounds, economic hardship levels, and community types his court served. The judge raised the option of contracting my colleagues and me to organize a locally relevant alternative if none of the existing EBPs was a proper fit. While the opportunity for this contract was enticing, the public interest and the primary engagement were to help the court identify the most useful program. As at least two of the programs had been tested with populations like those served by the court, I was ethically bound to recommend those; the conflict was between public interest and personal interest. If the request had instead been whether I would like to develop a program for the court or help them implement an EBP, it would no longer have been an ethical dilemma but a question of how I preferred to contribute to the court and spend my professional time and effort.

Similarly, the example of whether a prevention scientist provides consultation without payment (Example 2 under Activity 2; Leadbeater et al. 2018) seems to be more a matter of what this scientist would like to do and can personally afford to do than an ethical dilemma. Likewise, Example 3, concerning engagement to reapply for funding for collaboration with a community that modified the EBP but will not agree to evaluation of that modification, seems to fall more within the realm of professional practice decisions than ethical dilemmas. There does not seem to be a clear conflict between personal and public interests, but rather a question of what activities one wants to undertake professionally. Perhaps, if the Society for Prevention Research were to articulate a principle that prevention scientists do not engage in collaborations that lack a sound evaluation (defined by some set of standards) when the program under consideration has not already been adequately tested with a given population, then, with such a stated professional principle, the example would graduate to an ethical dilemma.

This and a few of the other examples Leadbeater et al. (2018) present, about linking public data sets to data previously obtained with informed consent, may fall more within methodological or operational challenges than ethical dilemmas per se. As noted in that rendering, there are human subjects protections and legal privacy rights that govern much of what can be done, and both are evolving as technical capabilities and case law accumulate. More specifically, my reading of both examples is that there may well be challenges in meeting requirements for de-identification to link data sets (and in meeting the benefit/risk balance guiding consent requirements) that are new and potentially daunting technical challenges but are not ethically cloudy. The examples do touch on a common issue in prevention studies: their longitudinal nature can extend the eventual data analyses to timespans, and to newly adopted methods of analysis, that could not be foreseen at the outset of the study. What might be done without re-securing consent, and how new linkage capabilities and analytic methods can change the consent/benefit balance, are important issues for prevention science (and other fields) and certainly deserve much attention. How much of that is an ethical rather than an operational challenge seems worth differentiating.

Expanding the Implementation Issues (and Models)

A third reflection arising from reviewing this valuable report is the need to expand the focus beyond a model that depends on linear movement from efficacy to effectiveness to implementation without major modification, and the common accompanying condition of the developer being central in implementation. For example, recent discussions have suggested alternatives to the model-promulgation approach, identifying commonalities among proven programs or benchmarks of implementation and program qualities related to larger effects (e.g., Aos et al. 2011; Lipsey 2018; Valentine et al. 2011). It is noteworthy that many of the examples extracted by Leadbeater et al. (2018) center on how EBP implementation should proceed when community users are not able or willing to follow prescribed procedures to the full extent that characterized the tested program. A second, related issue is application where the population of interest is not clearly part of the prior research. It has been well documented that the penetration of EBPs into prevention practice, particularly implementation as designed, is quite limited. This means that many if not most implementation dilemmas are going to arise in circumstances other than a developer advancing his/her program or a community with a clear commitment to full implementation as designed. How those situations are to be navigated, while explored to an admirable degree in Leadbeater et al. (2018), may need deliberations that include more varied avenues to, and methods of, implementation of prevention. Starting from core values and derived principles can provide ethical guidance for the many varieties of situation, when all the potential variations cannot be captured well by case examples.

While touched on by Leadbeater et al. (2018), there is need for more delving into, and perhaps official positions by the organization on, what is considered evidence for EBPs and how important considerations like generalization, inconsistent findings, meta-analytic results, and replications by those other than developers should be valued in conferring the label “evidence-based.” While there is more consensus among Society for Prevention Research members than may be found in the larger field of intervention evaluation and implementation, there are still many places of important disagreement about what is required methodologically for a study to be used as evidence, what the benchmarks are for labeling an intervention as effective and/or ready for full-scale implementation, and how patterns across studies ought to inform action (Tolan 2014). Fortunately, there have been careful and sophisticated analyses under the auspices of SPR to suggest standards for evidence that address many of the pertinent topics (Gottfredson et al. 2015). If, as the mission statement emphasizes, the Society for Prevention Research and prevention science are fundamentally concerned with advancing use of EBPs for public health, it seems that this report should inform the guiding principles for ethical guidelines.

Another suggestion for the next stage of work, based on reading Leadbeater et al. (2018), is to incorporate a differentiation of ethical issues into those related to responsibilities that must be fulfilled, those that should be followed but are not required, and those that may be allowed or taken on but are neither “shoulds” nor “musts” (Thomas et al. 2002). The rendering by Leadbeater et al. (2018) could provide important input for a larger and more involved discussion about which public responsibilities of prevention science are requirements, which are admirable and recommended but not absolute requirements, which are allowable, and which are explicitly unethical.

Moving Forward on Ethical Guidelines for Prevention Science

Leadbeater and colleagues (2018) have provided a great service to the field by capturing and organizing a disparate set of challenges that arise for those interested in applying research findings through implementation of EBPs. In doing so, they have tapped a larger need: for prevention science to have broader guidance on ethical issues. This very stimulating rendering prompted for me a reminder to go back to basics: to recognize that ethical decisions grow out of basic values, that professional ethics grow out of values of public responsibility, and that from those values guidelines can and should be articulated that help us know what we must do, should do, and/or are free to do. I look forward to the opportunity to participate in this vital conversation and hope these comments might help with the ensuing iterations toward articulation of coherent and useful ethical guidance for prevention science.