Key Points

Greater transparency in economic models can help improve the accuracy and relevance of modeling efforts.

Transparency requires effort from modelers and a balance between protecting intellectual property and assessing model validity and reliability.

The Institute for Clinical and Economic Review has piloted structured mechanisms that allow model validation while protecting the work product of the modeling groups involved; these can serve as a springboard for future innovations to increase model transparency.

1 Introduction

Transparency in decision modeling remains a topic of vigorous debate among healthcare stakeholders, given tensions between the potential benefits of external access during model development and the need to protect intellectual property and reward research investments [1,2,3]. Recognizing that decision modeling is conducted by various organizations, this article focuses on issues in transparency from the perspective of university-based researchers and academic institutions and on recent experience in conducting collaborative research with the Institute for Clinical and Economic Review (ICER), a US-based health technology assessment (HTA) organization that is actively engaged in model development for multiple audiences in the USA. Importantly, some level of transparency already exists in terms of methods and other technical specifications for published models; ICER public reports also include a technical appendix with details on model structure, parameter estimates, risk equations, and syntheses of clinical data that inform the model. Therefore, increased transparency in decision modeling is taken here to mean mechanisms to allow direct external access to a model's structure, source code, and data. This is similar to the definition provided by Eddy et al. [4] in an International Society for Pharmacoeconomics and Outcomes Research (ISPOR) good practices report, in which it is stated that transparency "… refers to the extent to which interested parties can review a model's structure, equations, parameter values, and assumptions." As such, transparency is treated as distinct from the process of model building, although transparency activities may interact with, and themselves form a step in, that process.

The current debate on transparency has arisen in the USA at a time when the use of decision models is becoming more prominent in healthcare decision making through the emergence of value-based formularies and the efforts of groups such as ICER [5]. Strategies to allow direct external access to models can take many forms but are bounded between the status quo (e.g., detailed methods reporting) and free, publicly available open-source models. Balancing transparency with practicality and the interests of all involved parties is a daunting task with many ethical, legal, and infrastructure-related hurdles and is the subject of intense ongoing debate [2, 6,7,8]. Establishing a suitable level of transparency requires balancing the pursuit of model validity and reliability with the protection of intellectual property rights and rewards for research investment. As in many such situations, the ultimate aim is a practical approach that meets the goals of the activity and balances the incentives and constraints of the interested parties.

A variety of stakeholders are engaged in this topic, including model developers (both the developers of a specific model and the larger modeling community), model commissioners or funders, model users (i.e., healthcare decision makers, including healthcare payers and clinical guidelines groups), developers of modeling methods and supportive software, pharmaceutical and medical device manufacturers, and patients and other healthcare consumers. Although each group will have its own perspective, there is typically a shared goal of producing timely, accurate, valid, and reliable evidence about the comparative clinical and economic impact of healthcare interventions. The primary audience for most decision models is healthcare payers and, perhaps to a lesser extent, clinical guidelines groups. The process of developing and validating decision models should ensure that these audiences can trust the model results. We note that transparency in the development process is a separate consideration from the subsequent activities of accessing the model to produce custom results, update model inputs, train future modelers, and repurpose models for a different research question. Although these activities have merit, they are not the primary purpose of developing a de novo decision model to inform decision makers about a specific research question. The ISPOR–SMDM (Society for Medical Decision Making) guidance appears to promote model transparency for the purpose of public validation rather than a broader set of purposes [4]. This encapsulates a central tension in the debate: opponents of broader release worry about model appropriation or misuse for nefarious purposes as well as a detrimental effect on future funding prospects, whereas proponents argue that limited release undercuts the potential for further innovation and improvement through experimentation by others [1, 3, 9, 10].

The purpose of this paper is to provide an overview of the issues surrounding transparency from the perspective of university-based researchers and academic institutions, informed by recent joint experiences with ICER, and to offer key considerations for the field of health economics and outcomes research in the future. We believe these issues apply regardless of whether the work is being funded by governments, payers, HTA bodies, or industry. Our focus is on strategies for providing "direct model access", that is, allowing one or more interested parties to obtain and review the source code, data, and technical documentation of a model developed by another party.

Acknowledging existing standards for the description of methods and inputs used in academic models, this paper seeks to delineate potential strategies to help move toward increased transparency, as well as the challenges of balancing the needs and interests of academic model developers and of other parties with interests in model outputs. Finally, the paper details an initiative being developed to increase model transparency in support of ICER reviews of new and emerging technologies. In the spirit of the ISPOR–SMDM guidance [4], this initiative has a narrow scope, focusing on model development, with the goal of providing direct access to a working version of the model for the stakeholders most qualified to review the model structure, key assumptions, parameter estimates, and other features.

2 Potential Benefits of Direct Model Access

A key potential benefit of increased model transparency via direct access is, put simply, higher-quality models. Direct access during model development could encourage review by interested parties of the model's structure, underlying assumptions, and key inputs and could facilitate attempts to replicate model findings. This could enable the identification of errors and/or suboptimal choices in the sources of information and data collection used to inform model inputs. There would also be the opportunity to assess assumptions and methodological decisions related to how the model is designed and populated. In addition, direct access could help identify and characterize sources and levels of uncertainty in model estimates, informing the construction of sensitivity analyses and important scenario analyses to consider. At its best, providing external groups with direct access can serve as a thorough, high-quality peer review of the structure, source code, and data used in the model, maximizing validity, reliability, and credibility and yielding the best possible understanding of the model's current robustness and potential for error.

Beyond the primary goals of model development, some have suggested that open-access models could increase efficiency in the modeling community by reducing duplication of work [2, 9]. Currently, several parties may simultaneously be designing models without knowledge of one another's efforts, leading to redundancy. Future modelers may also benefit from free models, as they would only need to update and repurpose an existing model rather than develop one de novo. However, it is also useful to have more than one modeling effort for a decision problem, as this allows for the assessment of structural uncertainty, that is, how modeling assumptions and underlying structural choices can affect the outcomes. If the purpose is for the field in general to arrive at a gold standard model in a particular therapeutic area, the question becomes whether a single publicly released model should become the starting point or whether the field is better served by striving for some form of convergent validity through the production of multiple models, academic discourse, and frank conversation, which has been the general approach to date. The Mt. Hood Diabetes Challenge Network [20], which brought together experts from around the world to promote an open exchange of ideas on economic simulation modeling in diabetes, is an example of a formalized version of this latter approach. Hence, an important consideration for transparency efforts is to balance the gains from competition and diversity of ideas against efficiency and improved oversight of any one model [3, 6].

A further potential benefit of open-source models could be an improved ability to cite model developers/authors in derivative works [2]. This can serve to increase awareness and to foster collaborative relationships between investigators and other leaders interested in the general modeling process. For academics, this can lead to more citations in published works related to the model or to the treatment area. This form of collaboration, should it manifest, could also lead to the diffusion of best practices and educational spillovers across individuals and groups that, over time, could produce more rapid and innovative advances in modeling. While outside the scope of this paper, similar potential benefits have been described in relation to sharing algorithms and results from basic science studies and clinical trials [11]. All this said, these public-good benefits will not necessarily accrue to the original model developers and thus may require a system that balances and incentivizes the distribution of benefits in line with the time, effort, and resources invested in the process. Ultimately, valid, well-publicized models will be more relevant to decision makers. Transparency is therefore essential to a robust and valid approach to model development and central to the continued growth of organizations actively using decision modeling to improve healthcare decision making.

3 Challenges to Direct Model Access

With the above-mentioned potential benefits of direct model access come several challenges. A fundamental barrier stems from the currently low levels of non-industry funding for model development in the USA, which typically covers only the time related to specific project objectives, leaving little room for other activities, even those deemed meritorious or considered a public good. In this context, the potentially unfunded time and effort required to share models with external groups, who often require a detailed technical guide and/or a simplified user interface, may not be viewed as a priority. The extra effort required increases when models are shared with stakeholders who lack modeling expertise. Even models funded by the National Institutes of Health, which are required to be made public, seldom if ever go through transparency processes in a decision-relevant timeframe [12], an issue that also persists with publicly funded clinical trial data [13]. Models funded by industry come with a unique set of issues, as contractual arrangements with academic researchers may limit the ability to share the model with external audiences.

Another concern is that allowing direct access can lead to delays, or even bias, created by input on model assumptions and parameters from groups with different sets of incentives. Allowing external groups to access a model during the development process opens that process to external influence. Much of the input received would likely consist of valid critiques and suggestions for improvement. However, groups such as manufacturers of the products in question often prefer modeling techniques and interpretations of data that favor their products and may attempt to steer the model development process toward a set of assumptions or inputs that match their interests. Reviews of the cost-effectiveness literature support this concern [14,15,16]. In the extreme, given the potential profits at stake for the developers and manufacturers of the interventions in question, especially when model results could affect subsequent approval and pricing of the technology, some may view transparency initiatives as an opportunity to undermine the model development process altogether. This potential for delays and the introduction of bias is directly at odds with the goal of providing timely and valid results and increases the resources required to develop models. Finally, a specific challenge when models are developed to support HTA decisions relates to the acceptability of prepublication or otherwise confidential data from manufacturers, which are often redacted in public documents. The presence of these data will naturally limit the ability to replicate or fully interrogate models, even if they are released publicly.

A general reluctance within academia about calls for free, open-source models stems from several contextual considerations. Model-building activities in academic settings are considered research, and the standard for reporting research findings is to describe the process sufficiently to allow replication. This allows research findings to be confirmed when another research group sets about answering the same research question using its own processes and resources, guided by sufficient technical detail regarding the methodology. Healthcare research dollars are a limited resource, and researchers and their institutions compete for research funding. Therefore, academic researchers and their institutions consider model development to be a research investment that can yield short- and long-term returns in the form of future research funding, collaborations, and publications. Career prospects are directly linked to the ability to attain research funding and produce scholarly works. As a result, especially as a voluntary effort, there is little enthusiasm for options such as making models completely open to the public if doing so could decrease funding and/or publication opportunities. For example, other research groups competing for the same pool of research funds could submit a more competitive proposal because fewer resources would be required if a robust, validated model were freely available. An analogy can be made to laboratory-based research: if a research group spent time and effort developing a microbial strain, giving the strain away for free may allow other groups far greater opportunity to leverage the innovation and potentially outcompete the innovator group for future research dollars. Scientific progress is made through incremental improvements on previous researchers' work but also through competition and incentives for innovators. The key to producing short- and long-term scientific gains is to find the appropriate balance.

4 Potential Solutions and Way Forward

To alleviate some of the concerns and risks of open access to models, innovative data-sharing agreements between research groups can be used to establish acceptable parameters for model sharing. Through contracts and financial incentives, interested parties can be granted access to the model for understanding and technical review, but with restrictions on future use of the model or the intellectual property contained therein. In fact, the ISPOR good practices for outcomes research report on model transparency and validity suggests the use of formal data use agreements to facilitate model transparency efforts [4]. In addition to improved rules and agreements via contracts, portals for sharing models could be better designed to foster transparency while protecting valuable aspects of the model from being copied.

The most feasible type of model-sharing agreement will depend on multiple factors, such as incentives for scholarly activities, future funding, ownership of the model, and the jurisdiction in which the model is developed. A key component of the discussion will naturally be how the work involved in model sharing will be funded. As with most aspects of transparency initiatives, the possibilities span a broad spectrum, from simple cost recovery to the six-figure contracts typical of model development for commercial clients.

Model-sharing agreements can range from relatively simple arrangements, such as confidentiality agreements or Creative Commons licenses, to more advanced licensing or data use agreements. The benefits of confidentiality agreements include relatively quick execution between parties along with a modest but still meaningful level of intellectual property protection. Creative Commons licenses provide free, open access while also allowing model developers to further specify the use of their model by outside parties [21]. With Creative Commons licensing, modelers can allow or deny commercial use of their model and specify whether new users can adapt the model for other applications. However, Creative Commons licenses may be difficult to enforce [17]. More detailed and robust licensing agreements through universities or other entities can allow access under more specific rules that provide stronger protection, consequences for licensing infractions, and, through licensing fees, the means to support the extra effort required to share models. Each of these arrangements requires a different level of effort to design and execute, so their use should be aligned with the model-sharing goals, the parties involved, and the decision-making context.

5 Real-World Example: Model-Sharing Initiative by the Institute for Clinical and Economic Review

A prominent real-world example of providing direct model access to interested stakeholders is a transparency project launched by ICER in conjunction with the academic collaborators who develop and specify the economic models informing the cost-effectiveness and budget impact evaluations contained in ICER appraisals of both new and established technologies. The initial pilots were associated with the migraine [18] and endometriosis [19] reviews that were ongoing at the time. Rather than focusing on broad public release for its own sake, this transparency effort was intended to answer whether the models were fit for purpose to inform the policy decisions of interest for the specific ICER review.

The pilots involved direct contracting between the academic groups developing the models and the manufacturers of the products under review, given that the intellectual property being shared resided with the collaborators. Under these agreements, manufacturers paid a small fee to the relevant academic institution to cover the added costs of preparing the models for review, including the development of user documentation. Manufacturers were also asked to sign confidentiality and/or licensing agreements that prevented copying and/or distribution of the models. Access was time limited and targeted to fall within the 4-week public comment period following ICER's posting of its draft reports.

Results from the pilots were disparate. The endometriosis pilot involved a single manufacturer whose general engagement during the review was limited and who declined the invitation to participate in the pilot. The migraine pilot involved three manufacturers and featured an Excel-based model that was released on the Box [22] platform to allow controlled access from any location. All calculations and formulas were available to reviewers. Overall, this pilot was deemed a success. The manufacturers considered the contracting and model release process to be relatively smooth and described communication about the release as clear and consistent. In addition, stakeholders identified minor but relevant errors in the model that could be corrected in time for the final report. Still, several logistical challenges are worth mentioning. First, company firewalls created problems with access to Box in some cases, requiring individuals to work outside of their preferred and secured information technology environment. In addition, the migraine model was populated in part by data submitted in confidence by the manufacturers, which necessitated data redaction along with a concurrent worry that back calculation of the redacted data, while prohibited, was nonetheless potentially feasible. Manufacturers requested customized versions of the model with their own data unredacted, but the nominal fee charged would not have covered the additional effort this change required. Finally, the manufacturers were interested in more detail in the technical documentation and greater opportunity for interaction with the modelers. However, they also reported that the model structure, estimation, and documentation were reasonably straightforward; the benefit that additional interaction could provide was therefore unclear.

Subsequent reviews have produced similar outcomes. ICER's review of treatments for hereditary angioedema involved two manufacturers, both of whom declined to participate in a model release. However, the organization developed an internal model of medication-assisted treatments for opioid use disorder, which was shared with three participating manufacturers as a "live" web app on heRo3 [23], an online modeling tool that works with a cloud-based, open-source health economics modeling package written in the programming language R. In this case, authorized users were sent a secure weblink to access the model in the heRo3 environment. Users could modify certain parameters to assess changes in model results but could not make any permanent changes to the model. One manufacturer submitted confidential data and had exclusive access to a separate version of the model populated with that confidential information. As with the migraine pilot, feedback from the manufacturers focused primarily on logistical issues. First and foremost, while the use of R-based modeling is increasing, the participating manufacturers had limited exposure to R, making technical evaluation and understanding of the available code challenging. Some manufacturers also reported difficulty in identifying certain parameters and reviewing sensitivity analyses, citing a lack of familiarity with the platform. While the heRo3 vendor did offer a tutorial session for reviewers, this did not appear to mitigate all concerns.
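
For readers less familiar with R-based health economic models of the kind shared in this pilot, the sketch below illustrates, in plain R, the general form such code can take: a simple two-state cohort model whose exposed parameters a reviewer could vary to see how costs, quality-adjusted life-years (QALYs), and the resulting cost-effectiveness ratio change. This is a hypothetical, minimal illustration only; the function, parameter names, and values are invented for this purpose and are not taken from the ICER opioid use disorder model or from the heRo3 platform.

# Illustrative sketch only: a minimal two-state cohort model in base R.
# All names and values are hypothetical and do not come from any ICER model
# or from the heRo3 platform.
run_model <- function(p_event = 0.10,        # annual probability of an adverse event
                      annual_drug_cost = 0,  # annual cost of the treatment being evaluated
                      cost_stable = 1000,    # annual care cost while stable
                      cost_event = 8000,     # annual care cost after an event
                      utility_stable = 0.85, # quality-of-life weight while stable
                      utility_event = 0.60,  # quality-of-life weight after an event
                      horizon = 10,          # time horizon in years
                      disc = 0.03) {         # annual discount rate
  state <- c(stable = 1, event = 0)          # whole cohort starts in the stable state
  tm <- matrix(c(1 - p_event, p_event,       # one-year transition probabilities;
                 0,           1),            # the event state is absorbing in this toy example
               nrow = 2, byrow = TRUE)
  total_cost <- 0
  total_qaly <- 0
  for (t in seq_len(horizon)) {
    state <- as.numeric(state %*% tm)        # advance the cohort one cycle
    df <- 1 / (1 + disc)^t                   # discount factor for cycle t
    total_cost <- total_cost +
      df * (sum(state * c(cost_stable, cost_event)) + annual_drug_cost)
    total_qaly <- total_qaly + df * sum(state * c(utility_stable, utility_event))
  }
  c(cost = total_cost, qaly = total_qaly)
}

# A reviewer might compare a hypothetical new therapy against standard care
# by changing only the exposed parameters:
soc <- run_model()                                         # standard of care
new <- run_model(p_event = 0.06, annual_drug_cost = 2500)  # hypothetical new therapy
icer <- as.numeric((new["cost"] - soc["cost"]) / (new["qaly"] - soc["qaly"]))
print(icer)  # incremental cost per QALY gained

Even a model this small illustrates why reviewers with limited R experience found technical evaluation challenging: the structure, parameters, and results are all expressed in code rather than in spreadsheet cells, so reviewing the model means reading and, ideally, running that code.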

Moving forward, ICER intends to work with its collaborators to routinely offer manufacturers of the products under review the opportunity to examine the model for every future topic. The hope is that, given the nominal fee and the fact that manufacturers already devote substantial resources to reviewing ICER models, this will become a more predictable and consistent exercise. ICER is also willing to extend invitations for model review to patient advocacy groups, payers, and other stakeholders relevant to the model review process; to date, such interest has been limited.

6 Conclusions

Overall, model access during development, if viewed primarily in the contexts of efficiency and validation that have driven the ICER model transparency initiative, seems feasible and could be quite beneficial to all involved parties. Indeed, the importance of this discussion is international in scope, given that nearly all mature HTA bodies develop or critique manufacturer-submitted models to assess the value of new health interventions. Direct, openly public access after model development may be a more difficult issue to resolve because of the lack of funding and incentives. Certainly, achieving model transparency will require improved stakeholder engagement, increased funding by interested parties, and further development of legal assurances to protect intellectual property. The ongoing debate about model transparency is important as we collectively work to improve the development and use of economic evidence to support healthcare decision making. As this process evolves, there is a strong impetus to work together, with the interests and constraints of all stakeholders considered. As all health economists (formal and amateur alike) know, incentives matter. Hence, the key to moving forward is to develop a sustainable approach to reaping the benefits of transparency, one that is robust, objective, and responsive to the various needs of the involved stakeholders.