Exploring Uncertainty in Cost-Effectiveness Analysis
This paper describes the key principles of why an assessment of uncertainty and its consequences is critical for the types of decisions that a body such as the UK National Institute for Health and Clinical Excellence (NICE) has to make. In doing so, it poses the question of whether formal methods may be useful to NICE and its advisory committees in making such assessments. Broadly, these questions include the following: (i) should probabilistic sensitivity analysis continue to be recommended as a means to characterize parameter uncertainty; (ii) which methods should be used to represent other sources of uncertainty; (iii) when can computationally expensive models be justified and is computational expense a sufficient justification for failing to express uncertainty; (iv) which summary measures of uncertainty should be used to present the results to decision makers; and (v) should formal methods be recommended to inform the assessment of the need for evidence and the consequences of an uncertain decision for the UK NHS?
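To make questions (i) and (iv) concrete, the sketch below illustrates a minimal probabilistic sensitivity analysis for a two-strategy decision and two common summaries of decision uncertainty: the probability that each strategy is cost-effective at a given threshold and the per-decision expected value of perfect information (EVPI). All parameter distributions, costs and the willingness-to-pay threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000
threshold = 20_000  # assumed willingness-to-pay per QALY (GBP), for illustration only

# Illustrative (assumed) parameter distributions for each strategy
effect_new = rng.beta(10, 3, n_sims)        # QALY gain, new treatment
effect_std = rng.beta(8, 3, n_sims)         # QALY gain, standard care
cost_new = rng.gamma(100, 80, n_sims)       # cost, new treatment
cost_std = rng.gamma(100, 70, n_sims)       # cost, standard care

# Net monetary benefit of each strategy in each simulation
nmb = np.column_stack([
    threshold * effect_std - cost_std,   # column 0: standard care
    threshold * effect_new - cost_new,   # column 1: new treatment
])

# Probability each strategy is cost-effective at this threshold
prob_ce = np.bincount(nmb.argmax(axis=1), minlength=2) / n_sims

# Per-decision EVPI: expected gain from resolving all parameter uncertainty,
# i.e. E[max NMB] minus max of expected NMB under the current evidence
evpi = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()

print(f"P(new treatment cost-effective) = {prob_ce[1]:.2f}")
print(f"EVPI per decision = GBP {evpi:,.0f}")
```

Repeating the probability calculation over a range of thresholds would trace out a cost-effectiveness acceptability curve, and scaling the per-decision EVPI by the population affected gives one input to the assessment of whether further research is worthwhile.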
Keywords: Probabilistic Sensitivity Analysis · Decision Uncertainty · Positive Guidance · Uncertain Decision · Expensive Model
This paper was initially prepared as a briefing paper for NICE as part of the process of updating the Institute’s 2004 Guide to the Methods of Technology Appraisal. The work was funded by NICE through its Decision Support Unit (DSU), which is based at the Universities of Sheffield, Leicester, York and Leeds, and at the London School of Hygiene and Tropical Medicine.
The author has no conflicts of interest that are directly related to the contents of this article.
The author thanks members of the DSU who commented on the briefing document that forms the basis of this paper as well as Iain Chalmers, Alex Sutton, Alan Brennan, Louise Longworth and Carole Longson, who provided helpful comments on earlier drafts of this paper. All errors and omissions are the responsibility of the author.