1 Introduction

Resilience has been defined as the ability of a society facing an extreme event to react, adapt to its new environment and recover from the damage incurred. While economics is not a priori well suited to discussing the short-term reactions or decision-making processes of people facing extreme events, it nonetheless offers tools to describe how the risks of nuclear accidents can be anticipated, prepared for and, to some extent, mitigated should an accident occur. This paper therefore focuses on the economic assessment of the risks of nuclear accidents. We emphasize its major drawbacks and the policy guidelines this assessment entails.

Risks of accidents are often quantified by their expected costs [1], that is, the product of the damage and its probability of occurrence. In the case of nuclear power, this definition is particularly inconvenient: estimating the probability of a nuclear disaster is subject to high uncertainties, and so is the assessment of its damage. Yet this assessment serves two purposes. First, ex ante cost assessments provide policy-makers with guidelines regarding hazardous activities, such as choosing among various technologies for electricity production or setting efficient safety standards. Ex ante assessments are thus economically driven. Second, ex post cost assessments allow victims to be compensated according to their losses. This assessment takes place after the accident and focuses on the evaluation of the different damage to individuals, to society and to nature. Such ex post assessments are usually legally driven (see the BP Deepwater Horizon or TEPCO Fukushima-Daiichi payouts).

One major difference between the ex ante and ex post approaches is that the former is confronted with far more uncertainty. The future is less known than the past, and the accident is only one possible outcome; the cost to consider is thus an expected cost. Generally speaking, the probability of an event is derived from its observed frequency. Some authors also multiply this product by a coefficient to account for risk aversion. This canonical method in the economics of accidents is not easy to apply to nuclear catastrophes: probabilities cannot be derived from observed frequencies, and psychological biases regarding dreadful events go beyond simple risk aversion. These issues are addressed in the first part of this paper, which is divided into two subsections. The first deals with the limitations of observed frequencies and of the so-called Probabilistic Safety Assessment (hereafter, PSA) as methods to estimate the overall probability of nuclear accidents. It argues that knowledge from the observed occurrence of accidents and knowledge from safety engineers and experts have to be combined. The second focuses on the gap between the probabilities of nuclear accidents as calculated by experts and as perceived by individuals. It shows how psychological biases in estimating probabilities amplify the perceived risk of nuclear accidents.

Numerous assessments of the cost of a nuclear accident have been performed over the years. This paper reviews some of these assessments and presents their very different results. We argue that such differences are not unusual from an economic standpoint: similar differences are observed for other hazardous activities, such as car accidents, oil spills or climate change. We then try to identify the origins of the discrepancies. The uncertainty characterizing these results stems from three sources: existing studies have very different scopes; they rely either on past data or on probabilistic safety assessments (PSAs); and they assess the consequences of the accident and their monetary costs with different methodologies. We also highlight the fact that most studies focus on damage and try to produce assessments that account for as many consequences of an accident as possible. Even if this is necessary for the aforementioned goals, it fails to provide insights into how the aftermath of nuclear accidents might be mitigated. Indeed, numerous countermeasures are available and can help reduce the impact of a nuclear accident on the economy. Yet few economic studies try to assess the impact of mitigation policies on the cost of nuclear accidents. The few studies that do exist restrict their scope to a specific countermeasure, such as land decontamination, and often yield less uncertain results and clearer insights for policy-makers. We therefore stress the need for future research in the economics of nuclear countermeasures, which could provide guidelines for mitigation policies.

2 Evaluating the Expected Cost of Nuclear Power

2.1 Limitations in Estimating the Probabilities of Nuclear Accidents

  • The expected costs of nuclear accidents seen as car crashes

How can one compare different technologies for producing electricity? What is the optimal mix of power generation? To answer these questions, economists use the levelized cost of electricity (hereafter, LCOE), defined as the price of electricity required to balance discounted costs and benefits throughout a power plant's service life. From a welfare perspective, the costs and benefits must include both the private costs the operators will incur (e.g., fuel costs) and the external costs society will have to pay for (e.g., polluting emissions). For instance, the UK Department of Energy and Climate Change estimates the LCOE of a combined cycle gas turbine and of a nuclear power plant (hereafter, NPP) at 80 and 90 £/MWh respectively, for a project starting in 2013 with a 10% discount rate.
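The LCOE definition above can be sketched numerically. The plant below is purely hypothetical (its capital cost, O&M cost, lifetime and output are our assumptions, not the DECC inputs), but the computation follows the definition in the text: discounted costs divided by discounted output.

```python
# Sketch of the LCOE definition: the constant price per MWh that balances
# discounted costs and output over a plant's service life.
# All input figures below are illustrative assumptions.

def lcoe(annual_costs, annual_output_mwh, discount_rate):
    """Levelized cost: discounted costs divided by discounted output."""
    disc_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(annual_costs))
    disc_output = sum(e / (1 + discount_rate) ** t for t, e in enumerate(annual_output_mwh))
    return disc_costs / disc_output

# Hypothetical plant: heavy up-front capital cost, then 40 years of O&M and output.
costs = [3_000_000_000] + [150_000_000] * 40   # year 0 build, then yearly O&M (EUR)
output = [0] + [8_000_000] * 40                # MWh per year once running
print(round(lcoe(costs, output, 0.10), 1))     # → 57.1 EUR/MWh
```

With a 10% discount rate, most of the levelized cost comes from the up-front capital expenditure, which is why the discount rate matters so much for capital-intensive technologies such as nuclear.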

As far as nuclear generation is concerned, accidents have to be included in the LCOE. However, this inclusion has a negligible impact, because the huge amount of damage is multiplied by an infinitesimal probability. Consider a simple back-of-the-envelope calculation: 1,000 billion euros of damage and a large early release frequency of 10−5 per reactor-year. This leads to about 1 €/MWh, that is, about one hundredth of the total cost of producing one MWh from a nuclear plant. According to a recent study on nuclear costs based on a comprehensive literature survey [1], the order of magnitude of the external cost due to nuclear accidents is between 0.3 and …
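The back-of-the-envelope figure of roughly 1 €/MWh can be reproduced as follows. Damage of the order of €1,000 billion is needed for the arithmetic to work, and the ~8 TWh produced per reactor-year is our assumption (roughly a 1 GW reactor at a high load factor):

```python
# Expected accident externality per MWh = damage x frequency / annual output.
damage_eur = 1e12          # ~1,000 billion EUR of damage (illustrative)
frequency = 1e-5           # large early release frequency per reactor-year
output_mwh = 8_000_000     # ~8 TWh per reactor-year (our assumption)

expected_cost_per_mwh = damage_eur * frequency / output_mwh
print(round(expected_cost_per_mwh, 2))  # → 1.25 EUR/MWh, i.e. roughly 1 EUR/MWh
```

Even with trillion-euro damage, the expected cost is of the order of one hundredth of a ~90 €/MWh LCOE, which is the point made in the text.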

Such an approach to computing the social external cost of nuclear power generation is far from satisfactory: it treats nuclear accidents like transport crashes. It is reasonable to compare the costs of different means of transportation because their probabilities can be derived from observed frequencies. For instance, there were worldwide between 90 and 118 airplane accidents per year from 2009 to 2013 (including 9–13 annual fatal accidents, with 173–655 fatalities per year). Based on the frequencies observed during this period, the probability for a passenger boarding at the airport to have an airplane accident is approximately 3 × 10−6. Road traffic accounts for more than 1 million deaths per year worldwide, mainly pedestrians and motorcyclists. In a small country like New Zealand, there were 254 fatalities due to car crashes in 2013, corresponding to a frequency of 0.8 deaths per 10,000 vehicles. On the basis of 2011–2013 statistics, the probability for a driver to be killed is about 3 × 10−9 per km. In 2013, the social cost of a fatal car accident was estimated at NZ$4.5 million, and the expected cost of fatal accidents per km can be estimated at NZ$0.03, that is, about one tenth of the fuel price. Given the observed frequencies of transport accidents (and assuming the value of lost life is the same and neglecting other damage), one can easily compare the social accident costs of rail, air, maritime and road transportation per km or per trip. Moreover, data on car accidents can generally be broken down by local area, car model, type of road, age category of the driver, etc. As a result, precise probabilities can be estimated for different situations.
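The New Zealand numbers can be connected as follows. One route from the fleet-level fatality frequency to a per-km expected cost goes through the average distance driven per vehicle per year; the 12,000 km figure below is our assumption, not a figure from the text.

```python
# Translating a fleet-level fatality frequency into an expected cost per km,
# using the New Zealand figures cited in the text. The average annual distance
# per vehicle (12,000 km) is an assumption introduced here.

deaths_per_vehicle_year = 0.8 / 10_000     # observed frequency (2013)
social_cost_per_death = 4.5e6              # NZ$, 2013 estimate
km_per_vehicle_year = 12_000               # assumed average annual distance

cost_per_vehicle_year = deaths_per_vehicle_year * social_cost_per_death  # NZ$360
cost_per_km = cost_per_vehicle_year / km_per_vehicle_year
print(round(cost_per_km, 2))  # → 0.03 NZ$ per km, as in the text
```

This is the kind of routine expected-cost arithmetic that observed frequencies make possible for transport, and that, as the next subsection argues, is unavailable for nuclear accidents.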

2.2 Nuclear Accidents Are No Car Crashes

Estimating the probabilities of nuclear accidents from frequencies makes no sense. Since the first grid connection of a nuclear power plant in 1956, there have been 12 core meltdowns of reactors, including very limited ones [2]. According to the INES classification, there have been 2 major (level 7) accidents (Chernobyl and Fukushima-Daiichi) and 21 accidents of level 4 or higher. Knowing that 14,500 reactor-years have accumulated worldwide since the end of the 1950s, the observed frequencies are 1.6 × 10−3 per reactor-year for INES > 3; 8.3 × 10−4 per reactor-year for core meltdown; and 2.7 × 10−4 per reactor-year for INES = 7. Is it sound to infer probabilities from these values? For instance, using a Poisson distribution and knowing that the worldwide nuclear fleet amounts to 435 reactors, is it relevant to say that the probability of an INES 7 accident somewhere on the planet in 2015 is 0.11 (i.e., 1 − (1 − 2.7 × 10−4)^435)?
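The fleet-level calculation questioned above is simply the probability of at least one event when 435 reactors are treated as independent draws at the observed per-reactor-year frequency:

```python
# Probability of at least one INES 7 accident in a year across the fleet,
# under the (contested) assumptions of independence and a stable frequency.

freq_ines7 = 2.7e-4   # observed frequency per reactor-year
fleet = 435           # reactors worldwide

p_at_least_one = 1 - (1 - freq_ines7) ** fleet
print(round(p_at_least_one, 2))  # → 0.11
```

The arithmetic is trivial; the point of the paragraphs that follow is that both assumptions behind it, representativeness and independence, are untenable for nuclear accidents.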

No! The reasons are twofold. The obvious one is that the number of observations is too small. The observed events cannot be assumed to be representative: reactors are neither identical nor exposed to the same locational risks (e.g., earthquake, flooding). In addition, the observations are not independent; the current nuclear fleet is close to the 1980s fleet, as more than three quarters of reactors are over 25 years old. Moreover, the evolution of safety performance and standards makes it heroic to assume that safety is time-invariant.

A second reason is that assessing the risk of nuclear accidents exclusively on data from past observations implicitly assumes that no other knowledge is available on nuclear safety. It ignores all the work on safety carried out over the past 50 years by thousands of nuclear scientists and engineers. This knowledge has partly crystallized in PSAs. The first large-scale probabilistic assessment was carried out in the US in the 1970s. It was led by Norman Rasmussen, then head of the nuclear engineering department at MIT. PSAs have now been carried out on all NPPs in the US and on many others worldwide. Similarly, reactor vendors carry out such studies for each reactor model while it is still at the design stage. For instance, the calculated core meltdown frequency for the UK EPR is 10−6 per year, and the frequency of core damage with early containment failure is estimated at 3.9 × 10−8 per year.

As with observed frequencies, assessing the risk of nuclear accidents exclusively on the basis of PSAs would be unsound. The use of PSAs has strong limitations, too. Firstly, they are not primarily designed to provide a final single number; they are designed to detect what may go wrong, to identify the weakest links in the process and to understand the failures that contribute most to the risk of an accident. Secondly, PSAs have a limited scope. They study known initiating events, such as earthquakes or loss of coolant, but not all possible states of the world, because the list of all causes and failures is unknown. Thirdly, PSAs assume perfect compliance with safety standards and regulatory requirements; an implicit assumption is that safety standards are enforced thanks to an independent, competent and powerful safety regulatory authority. All these limitations partly explain why PSA figures are much lower than observed frequencies.

If we want to make progress in estimating the probabilities of nuclear accidents, we have to use all the quantitative knowledge currently available and therefore combine information from PSAs and observed accidents. Escobar-Rangel and Lévêque have made such an attempt [4]. The issue addressed in their paper is to compute the post-Fukushima-Daiichi global probability of a core meltdown. Different models are used, including a Poisson Exponentially Weighted Moving Average model, to capture the idea that recent accidents are more informative than past ones and to introduce some inertia in the safety performance of the fleet. This model shows that the Fukushima-Daiichi accident results in a huge increase in the probability of an accident: the arrival rate in 2011 is similar to the arrival rate computed in the 1980s. To put it another way, this catastrophe has increased the probability of an accident for the near future to the same extent that it had decreased over the previous 30 years owing to safety improvements. This huge effect of Fukushima-Daiichi in revising the global estimate of a core meltdown can be interpreted as evidence that, besides the design, location and operation of reactors, the probability of an accident also depends on institutional factors, such as the strength and ability of nuclear safety authorities, a factor that is not taken into account in probabilistic assessments. In fact, as in Japan, there are many countries in which nuclear safety authorities are captured by operators and fail to enforce safety standards.
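The full model in [4] is considerably more involved, but the core intuition (recent observations weigh more than old ones) can be sketched with a simple exponentially weighted estimate of the arrival rate. The yearly series of events and reactor-years below is hypothetical, chosen only to mimic a quiet period followed by a multi-meltdown year:

```python
# A much simplified sketch of the idea behind exponentially weighted arrival
# rates: discount each older year of observation by a factor omega, so that
# recent accidents dominate the estimate. Data below are hypothetical.

def ewma_rate(events_per_year, reactor_years_per_year, omega=0.9):
    """Exponentially weighted events per reactor-year (most recent year first in weight)."""
    weights = [omega ** k for k in range(len(events_per_year) - 1, -1, -1)]
    num = sum(w * e for w, e in zip(weights, events_per_year))
    den = sum(w * ry for w, ry in zip(weights, reactor_years_per_year))
    return num / den

events = [0] * 24 + [3]        # 24 quiet years, then 3 meltdowns in the last year
exposure = [400] * 25          # ~400 reactor-years of operation each year

print(round(ewma_rate(events, exposure), 4))       # → 0.0008 per reactor-year
print(round(ewma_rate(events, exposure, 1.0), 4))  # → 0.0003 with no weighting
```

The weighted estimate is roughly three times the naive frequency: a single recent cluster of accidents dominates decades of accident-free operation, which is the mechanism by which Fukushima-Daiichi pushes the estimated arrival rate back toward its 1980s level.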

In conclusion, uncertainty prevails. There is no overarching probability of a nuclear accident that can be used to make a rational decision for society to invest in or phase out nuclear power generation, or to determine the right level of nuclear safety expenditure, let alone to identify the economically optimal level of nuclear safety. Unlike for transport crashes, no mean can be inferred from observed frequencies. Moreover, the probability of a nuclear accident differs according to the design and location of reactors, but also according to institutional characteristics (independent regulator, liability rules, experience of operators, etc.). We do not know the probability distribution of nuclear accidents, even for a given reactor design and location. Last but not least, one must always keep in mind that probabilistic analysis requires knowing all the states of the world. A probability cannot be assigned to an unknown event or, to put it another way, to black swans and unknown unknowns.

2.3 Perception of Probabilities

  • Utility function and human behavior

It is well known that many people are risk-averse: they prefer, for instance, a certain gain of 100 to an expected gain of 110. Since Bernoulli [5], this psychological trait has been represented by a concave utility function. The Swiss mathematician opened the way for progress in decision theory through a back-and-forth between economic modeling and psychological experimentation. The latter would, for instance, pick up an anomaly (in a particular instance, people's behavior did not conform to what theory predicted) and the former would repair it, altering the mathematical properties of the utility function or the weighting of probabilities. The works of Allais and Ellsberg were two key moments in this achievement. Following an experiment showing that people with a good knowledge of probability theory were violating an axiom of expected utility theory, Allais [7] proposed to weight probabilities depending on their value, with a high coefficient for low probabilities and vice versa. To put it another way, the weights assigned to probabilities are not linear. This is more than just a technical fix. It makes allowance for a psychological trait confirmed by a large body of experimental studies: people overestimate low probabilities and underestimate high probabilities.
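The nonlinear weighting of probabilities can be illustrated with the one-parameter form later popularized in prospect theory by Tversky and Kahneman (1992); the parameter value gamma = 0.61 is their estimate and is used here purely for illustration, not as part of the argument in the text:

```python
# Inverse-S probability weighting (Tversky & Kahneman, 1992 parameterization):
# small probabilities are overweighted, large ones underweighted.

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(round(weight(0.001), 3))  # → 0.014: a rare event is overweighted ~14x
print(round(weight(0.9), 2))    # → 0.71: a likely event is underweighted
```

A probability of one in a thousand is treated, in decisions, as if it were more than ten times larger, which is exactly the distortion relevant to rare catastrophes such as nuclear accidents.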

Another anomaly well known to economists is ambiguity aversion. This characteristic was suggested by Keynes and later demonstrated by Ellsberg [8] in the form of a paradox. In his treatise on probability, Keynes [9] posited that greater weight is given to a probability that is certain than to one that is imprecise. Ellsberg showed that, just as there is a premium for taking risks, some compensation must be awarded to individuals for them to be indifferent between gaining (or losing) with a one-in-two probability and with an unknown probability whose expected value is one in two. Recent developments in economic theory offer several solutions to this problem, in particular by specifying, yet again, new types of utility functions [10]. What is important to keep in mind here is that individuals usually prefer exposure to a hazard associated with a clearly defined probability (because experts are in agreement) to exposure to a hazard characterized by uncertain or fuzzy probabilities (because experts may disagree). To put it another way, in the second instance people side with the expert predicting the worst-case scenario.

More recently, Kahneman's work followed on from that of Bernoulli, Allais and Ellsberg. He and his co-author, Tversky, introduced loss aversion: individuals are more affected by losses than by gains [11]. Kahneman also diverged from his predecessors in adopting a more positive approach: observing the distortion of probabilities is a way to understand how our brain works, rather than to build a theory in which the decision-maker optimizes or maximizes an outcome. Kahneman's line of research is comparable to subjecting participants to optical illusions to gain a better understanding of how our brain functions. For example, a 0.0001 probability of loss will be perceived as lower than a 1/10,000 probability. Our brain seems to be misled by the presentation of figures, much as our eyes are confused by an optical effect that distorts an object's size or perspective. This bias suggests that our brain takes a shortcut and disregards the denominator, focusing only on the numerator.

2.4 The Effects of Perception Biases on Nuclear Accidents

The overall biases in our perception of probabilities, briefly discussed above, amplify the risk of a nuclear accident in our minds. A nuclear accident is a rare event, so its probability is overestimated. The risk of a nuclear accident is ambiguous: as expert appraisals diverge, people are inclined to opt for the worst-case scenario, and the highest probability of an accident prevails. Along with plane crashes and terrorist attacks targeting markets, hotels or buses, a nuclear accident is a dreadful event. Rather than acknowledging the probability of the accident, attention focuses exclusively on the accident itself, disregarding the denominator. Moreover, several other common routines or heuristics identified by experimental psychologists distort the probability of a nuclear accident and increase our aversion to such a disaster.

As a consequence, public decisions based exclusively on perceived risk entail a series of drawbacks. Firstly, they tend toward over-investment in nuclear safety: since the perceived risk of a nuclear accident is amplified, the benefits of decreasing it seem higher and efforts to reduce it seem more worth undertaking. Secondly, the choice of technology is distorted in favor of ways of generating electricity that are not less hazardous. Coal is perceived as less dangerous whereas, according to data on fatalities, it is more so [12]. Thirdly, public decisions based exclusively on perceived probabilities can lead to costly premature phase-outs. After Fukushima-Daiichi, the German government decided to accelerate the decommissioning of NPPs. This entails an economic loss estimated at 100 billion euros in comparison with the more progressive nuclear exit enacted in the Atomic Law passed a few months before the accident [13].

However, public decisions that ignore perception biases can also result in wasting a lot of money. It can be costly to treat the attitude of the general public as the expression of fleeting fears that can quickly be allayed through calls to reason or the reassuring communication of the 'true' facts and figures. The reality test, in the form of hostile demonstrations or electoral reversals, may substantially add to the cost for society of going back on past decisions that ignored public perception. Nuclear power history is full of projects abandoned after several years of construction. In France, for instance, about 10 billion euros were spent building the commercial fast breeder reactor Superphénix for nearly nothing: it had produced only a modest quantity of electricity when it was shut down.

In short, public decision-making must avoid two pitfalls: ignoring how the probabilities of a nuclear accident are perceived, and taking only those perceptions into account.

3 Economic Assessments of Nuclear Damage and Their Insights into Mitigation Policies

3.1 Existing Assessments of the Cost of a Nuclear Accident

  • A review of existing studies

Assessments of the cost of nuclear accidents have been carried out since the mid-seventies and the beginning of probabilistic safety assessment. Since then, numerous studies have been published, and several reviews of these studies exist. In 2000, the Nuclear Energy Agency published a methodological review in which several cost assessments were described [14]. In 2011, after the Fukushima-Daiichi accident, the German Renewable Energy Federation published a calculation of the insurance premium that the nuclear industry would need to pay to fully cover the accident risk; this study also reviewed some existing assessments [15]. The D'Haeseleer report for the European Commission also provides a comprehensive review of studies that assess the external cost of nuclear accidents [1]. Finally, in 2013 the IRSN published an assessment of the cost of severe and major accidents in which other studies were reviewed [16]. These four reviews reference numerous studies and give a thorough overview of the state of the art in the evaluation of the costs of nuclear accidents. As we do not wish to tackle here the question of the probability of nuclear accidents, Table 1 below only presents the studies that assess the cost of nuclear accidents before probability weighting.

Table 1 A review of existing assessments of the cost of nuclear accidents

Table 1 shows large discrepancies. How can one study assess the cost of a nuclear accident at approximately €10 billion [17], while others announce a cost of more than a trillion euros [21]? First, not all studies assess the same cost: some focus only on the damage to the population (health and food costs), while others try to assess the total impact of the accident on the economy. Yet this cannot be the only cause of the differences. Indeed, even within cost categories (health, food, etc.), there is little consensus as to which category represents the highest share of the total cost. The comparison between the “IRSN-major” assessment [16] and the assessment from the German Renewable Energy Federation [15] illustrates this observation: even though it assesses only health, food and production costs, the German study calculates a total cost ten times higher than the IRSN figure, which accounts for a larger set of consequences.

3.2 The Assessments of the Costs of Other Hazards Exhibit Similar Discrepancies

This is not specific to nuclear power: other hazardous activities exhibit the same kind of discrepancies. In 1995, Elvik studied the assessments of the cost of car accidents in twenty countries. This work was motivated by the observation of large disparities in the evaluation of this cost: while the Netherlands evaluated the total cost of a car accident at US$0.12 million, Switzerland estimated it at US$2.5 million [25]. The study argued that the deviation was caused by the lack of a common assessment methodology and by the fact that only a limited number of countries considered the value of lost quality of life.

Estimates of the damage caused by oil spills are also prone to large disparities. In 1995, Cohen assessed the damage of the Exxon Valdez oil spill, claiming that the upper bound of the damage caused by the spill in the first two years following the disaster was US$155 million [26]. In 2003, another study, by Carson et al., assessed the cost of this oil spill at approximately US$2.8 billion [27]. Cohen limited her assessment to the costs incurred by southcentral Alaska's fisheries, while Carson assessed the population's willingness to pay to restore the lost passive use of the damaged environment.

Finally, the climate change literature also exhibits large discrepancies. In a review published in 2009, Tol shows that there is little agreement on the long-term effects of climate change. While some authors [28] predict a small positive overall effect due to the warming of cold regions, others predict dramatic consequences [29]. This overview highlights the fact that the uncertainty pertaining to cost estimates can originate from several sources.

4 Uncertainties and Mitigation Policies

Although these discrepancies are not unusual from an economic standpoint, it is nonetheless interesting to try to understand why they occur. In basic economic theory, a cost is often defined as anything that causes a loss of welfare [30]. The cost of a nuclear accident can thus be defined as the gap between the welfare levels obtained with and without its occurrence. This theoretical definition induces divergence in the assessments: the consequences of an accident, whether direct or induced by mitigation countermeasures, are so numerous and intricate that it is impossible to be sure that all of them have been accounted for properly. Studies differ first in their assessment of the consequences of the accident, and then in the monetary valuation of these consequences.

4.1 Cost Assessments Do not Speak the Same Language

First, it seems obvious that results will differ if the type and location of the accidents assessed differ. Nuclear plants are highly sophisticated, so there is a wide range of possible accidents, which do not have the same consequences. Likewise, nuclear plants are located in areas that are not equally densely populated [31]. As an example from Table 1, Hohmeyer's study calculates the external cost of a hypothetical Chernobyl-like accident at the Biblis nuclear plant (Germany) in 1990, whereas the IRSN study calculates the social cost of a hypothetical DCH nuclear accident in France in 2025. The scopes of these two studies are radically different. More generally, comparing the studies presented in Table 1 is impossible because they do not rest on common definitions.

Similarly, the boundaries of a cost assessment also need to be clearly defined. In 2006, two reports on the consequences of the Chernobyl accident were published whose assessments of the number of radio-induced cancers differed by a factor of ten: the IAEA/WHO report [32] focused on the consequences of the accident in Belarus, Ukraine and Russia, while the TORCH report accounted for all consequences across Western Europe [33]. Nuclear accidents can have cross-border consequences, so it is paramount to define clearly the boundaries of cost assessment studies in order to fully understand their implications for public policies. As an example, Hayashi and Hughes have shown that the Fukushima-Daiichi accident had an impact on the electricity bills of households in gas-intensive countries such as the United Kingdom or South Korea [34]. How and by whom should these impacts be accounted for?

Finally, the statistical choices made in presenting the cost are also crucial for comparison and for use in policy-making. There is, for example, no consensus as to whether the cost of a nuclear accident should be presented as a distribution function or as a single number. The IRSN decided to produce a median cost, so that decision-makers know there is a 50% chance that the cost of an accident will be above or below the result. Conversely, the 2011 study from the German Renewable Energy Federation provided an average maximum value in order to calculate an “adequate” insurance premium.
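The choice between a median and an average is not innocuous for accident costs, because the underlying distribution is heavily right-skewed. A small simulation with an illustrative lognormal cost distribution (the parameters below are hypothetical, not drawn from any of the cited studies) makes the point:

```python
# For a heavy-tailed cost distribution, the mean sits well above the median,
# so the IRSN's median and the German study's average-type figure need not
# agree even on identical data. Parameters are purely illustrative.

import math
import random

random.seed(0)
mu, sigma = math.log(120), 1.2   # hypothetical: median ~120 bn EUR, fat right tail
costs = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))

median = costs[len(costs) // 2]
mean = sum(costs) / len(costs)
print(median < mean)  # → True: the average is pulled up by extreme scenarios
```

For a lognormal, the theoretical mean exceeds the median by a factor exp(sigma²/2), here roughly 2, so two studies summarizing the same distribution with different statistics can legitimately publish figures that differ by a factor of two or more.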

4.2 The Aftermath of a Nuclear Disaster: PSA or Past Events?

The consequences of a nuclear accident are numerous and intricate. An accident has on-site consequences, such as casualties, highly irradiated workers or material losses in adjacent reactors. It also causes off-site consequences, such as the release of radioactive materials into the atmosphere, the collective absorbed dose, the area of contaminated land or the quantity of contaminated crops and cattle. The negative consequences of the countermeasures themselves, such as psychological distress, also have to be estimated. All these consequences have to be quantified before their monetary value can be derived.

The sources of divergence in the assessment of the consequences are twofold. First, not all studies assess the same range of consequences. Some argue that health effects dominate all other effects [15, 19, 20]; they thus focus on the collective absorbed dose and neglect other consequences. Other studies cover a wider set of effects, such as land exclusion or image effects (tourism, regulatory changes, etc.). Second, studies also differ in their assessment strategies. Physical consequences can be modelled by dedicated programs (MACCS, COSYMA, etc. [35]) that rely on level-three probabilistic safety assessments, or assessed by adapting figures derived from past catastrophes. Most studies performed in the early nineties were based on Chernobyl figures and find particularly high values for the total cost of the accident [19–21]. More recently, another very high cost was assessed by the German Renewable Energy Federation, which also happens to be based on Chernobyl figures. This observation raises an important question: can we assess future accidents solely by using the consequences of past catastrophes? A preliminary answer is that we cannot. Relying on past figures fails to account for the lessons learned from past accidents, the enhancement of safety standards and the progress in available mitigation technologies.

4.3 Converting Consequences into Costs Requires Various Hypotheses and Assessment Methodologies

Once the consequences of a nuclear accident have been assessed, they have to be given a monetary value; indeed, a cost is the monetary valuation of foregone welfare. Among the consequences discussed previously, some welfare losses are easily derived (e.g., the cost of material losses). For other physical consequences, various hypotheses are required to bridge the gaps in our limited knowledge. Regarding health issues, we do not know precisely the effect of exposure to low doses on the probability of cancer or hereditary diseases. Regarding the environmental impact of an accident, the extent of lost land depends on the geographical spread of the radioactive materials and on the radioactivity threshold that a population finds acceptable. The consequences for food are also uncertain, since the population can react to food bans by boycotting healthy products. The harm caused by nuclear countermeasures, such as psychological distress due to relocation, is also hard to assess. Some hypotheses differ substantially from one study to another: as an example, the excess rate of radio-induced cancer varies from 5 to 10% across the assessments presented in Table 1.

Some of these welfare losses, such as reduced tourism, strengthened safety standards for nuclear plants or higher energy prices, can easily be given a monetary value. They are assessed through macroeconomic methods such as the input-output (IO) table method. Yet not all welfare losses caused by nuclear accidents are monetary. Methodologies have therefore been developed in health and environmental economics to give monetary values to non-monetary losses. Environmental losses can be assessed by evaluating individuals' willingness to pay (WTP) to avoid these losses. Two families of methods allow the assessment of this WTP: revealed-preference methods and stated-preference methods. Revealed-preference methods, such as the travel cost method or the hedonic pricing method, use past individual behavior to infer the value of environmental losses. They are hard to apply to nuclear accidents because they rely on past behavior and thus require data [36–38]. Stated-preference methods are based on surveys that try to elicit people's willingness to pay to restore the environment. The contingent valuation method is often used to value the environmental consequences of rare disasters.

Regarding health costs, the human capital method calculates the economic value of fatalities or impairments by assessing the number of lost years of production and multiplying it by the average yearly production of a human being. Other methods, such as the friction cost method, exist and calculate health costs in very different ways [39–42]. This variety of methods explains some of the discrepancies observed in Table 1. First, a given consequence can be assessed by different methods. Second, even when two studies assess a cost with the same method, some aspects of the evaluation remain quite arbitrary. In the human capital method applied to the cost of radio-induced cancers, Hohmeyer assesses the cost of a death at $1 million while Ottinger assesses it at $4 million [15].
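The human capital calculation described above reduces to a simple product. The sketch below uses hypothetical ages and yearly production figures (they are not the parameters used by Hohmeyer or Ottinger) to show how the arbitrary choice of inputs drives the final value.

```python
# Hedged sketch of the human capital method: the economic value of a
# fatality is the number of lost production years times the average yearly
# production of a human being. All figures are hypothetical placeholders.
def human_capital_value(age_at_death, retirement_age, yearly_production):
    lost_years = max(retirement_age - age_at_death, 0)
    return lost_years * yearly_production

value = human_capital_value(age_at_death=40, retirement_age=65,
                            yearly_production=40_000)
print(value)  # 25 lost years x $40,000 per year
```

Under these assumptions a death is valued at $1 million; raising the assumed yearly production to $160,000 would yield $4 million, which illustrates how two studies using the same method can still differ by a factor of four.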

4.4 Drawbacks

Table 1 shows that the tendency over the last twenty years has been to provide estimations of the cost of nuclear accidents that account for as many consequences as possible. This emphasis on completeness, which is particularly stressed in the IRSN study, is indeed necessary for the goals mentioned in the introduction of this paper. Ex ante policy making and ex post compensations both need to rely on a complete assessment of the consequences of a nuclear accident, since an incomplete assessment might lead to an underestimation of the cost and entail an underinvestment in nuclear safety, a disproportionate share of nuclear power in the electricity mix, or an inadequate compensation of victims [43, 44].

This quest for completeness also has its drawbacks. First, it fosters the aggregation of numbers that differ in nature. As we have seen, the costs of the various consequences are not assessed with the same methodologies and are thus not subject to the same uncertainties. Summing them up to provide a global cost of nuclear accidents propagates the highest uncertainty to the final result. Second, completeness can be detrimental to scientific rigor. Some costs currently have no corresponding assessment method. This is the case for food bans, which the most recent study estimates by comparison with recent non-nuclear food bans [16]. Their inclusion in cost assessments is thus questionable, since it does not rest on robust economic grounds. Finally, to achieve completeness, existing studies have focused on damage and consequences and tried to identify new consequences, or "lines of cost". In doing so, most studies overlook the impact of nuclear countermeasures, which is the object of the next part of this paper.

4.5 Cost Assessment Fails to Provide Guidelines for Mitigation Policies

Current research on cost assessment focuses on providing complete assessments by identifying more and more consequences of an accident. This trend is necessary, but is not adapted to mitigation policies. First, the theory of “sunk costs” [45] explains that once a cost has been incurred, it is no longer relevant for decision making regarding the future. In the case of mitigation policies, the capital losses due to the destruction of a power plant are incurred at the time of the accident. Those losses are an example of sunk costs, and should thus not enter the mitigation policy decisions. Current estimates, as they account for all kinds of losses regardless of the time at which they are incurred, cannot be used in the determination of mitigation policies. This observation raises one question: can we expect cost assessments to provide useful guidelines for mitigation policies?
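The sunk-cost argument above can be expressed as a decision rule. In the sketch below, the figures and the function name are hypothetical; the point is that the sunk capital loss of the destroyed plant is deliberately absent from the comparison, so the mitigation decision is unchanged whatever that loss was.

```python
# Illustrative sunk-cost reasoning (hypothetical figures): once the plant
# is destroyed, its capital loss is sunk and should not influence the
# choice of mitigation policy; only avoidable future damage matters.
def mitigation_is_worthwhile(measure_cost, avoidable_damage, sunk_plant_loss):
    # sunk_plant_loss is deliberately ignored by the decision rule
    return avoidable_damage > measure_cost

# The decision is identical whether the destroyed plant was worth
# 1 billion or 10 billion:
print(mitigation_is_worthwhile(2e9, 3e9, sunk_plant_loss=1e9))   # True
print(mitigation_is_worthwhile(2e9, 3e9, sunk_plant_loss=1e10))  # True
```

A global cost estimate that bundles the plant's capital loss with future avoidable damage cannot feed such a rule directly, which is the core of the objection to using current estimates for mitigation policy.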

We believe they can. Cost-benefit analysis (CBA) of countermeasures could provide at least three useful insights regarding mitigation policies. First, the report on the consequences of Chernobyl showed that countermeasures are costly [32]. Cost-benefit analysis could thus help determine which countermeasures are most efficient by comparing their costs to society with the valuation of the damage they prevent. Second, numerous countermeasures address the same harmful consequences. Some measures are substitutes (emergency relocation and confinement), while others are complements (iodine prophylaxis and confinement). Assessing their costs and benefits could therefore help policy-makers identify tradeoffs or synergies when implementing several countermeasures. Finally, the consequences of a nuclear accident do not happen all at once. Cost-benefit analysis is thus a good tool to search for the optimal inter-temporal allocation of mitigation resources.
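The first insight, ranking countermeasures by comparing their cost to society with the value of the damage they prevent, can be sketched in a few lines. The measures are those named above, but the cost and benefit figures are purely hypothetical.

```python
# Sketch of a cost-benefit comparison of countermeasures: each entry is
# (name, cost to society, value of prevented damage), in hypothetical
# billions of dollars.
measures = [
    ("emergency relocation", 5.0, 8.0),
    ("confinement",          1.0, 4.0),
    ("iodine prophylaxis",   0.5, 2.0),
]

# Rank by net benefit (prevented damage minus cost) and keep the measures
# whose benefit exceeds their cost.
ranked = sorted(measures, key=lambda m: m[2] - m[1], reverse=True)
efficient = [name for name, cost, benefit in ranked if benefit > cost]
print(efficient)
```

A real exercise would also model substitution and complementarity between measures (e.g., confinement reducing the prevented damage attributable to relocation), which a simple per-measure ranking like this one ignores.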

This kind of assessment is already carried out in other hazardous activities such as car accidents or biosecurity [46, 47]. In the case of nuclear power, Munro studied the tradeoff between long-term relocation and land decontamination. As radioactive decay reduces the cost of land decontamination over time, he calculated the optimal decontamination date, which occurs approximately ten years after the accident [48]. Other studies also focus on particular tradeoffs between countermeasures, namely land decontamination and food restrictions [49, 50]. Yet, these studies focus on multi-criteria decision making rather than on performing a CBA of countermeasures.
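The structure of this timing tradeoff can be illustrated with a toy model (this is not Munro's actual model, and the half-life, costs, and resulting date are hypothetical): waiting lets radioactive decay cut the decontamination cost, but relocation costs accrue for every year the land stays evacuated.

```python
import math

# Toy version of the decontamination-timing tradeoff: decontamination cost
# decays with the remaining radioactivity, while relocation costs accrue
# linearly over time. All parameters are hypothetical.
def total_cost(t, c0=10.0, half_life=5.0, relocation_per_year=0.8):
    remaining_activity = math.exp(-math.log(2) * t / half_life)
    return c0 * remaining_activity + relocation_per_year * t

# Pick the year (0..50) that minimizes the combined cost.
optimal_year = min(range(51), key=total_cost)
print(optimal_year)
```

Even this toy version exhibits an interior optimum: decontaminating immediately pays the full cost, waiting forever pays unbounded relocation costs, and the minimum lies a few years after the accident under these parameters.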

Existing studies that deal with mitigation focus only on long-term countermeasures. The inability to deal with emergency countermeasures is a barrier that needs to be overcome if CBA is to provide guidelines for mitigation policies. Indeed, an important tradeoff has to be solved right after the accident: whether to confine populations or to relocate them as an emergency measure. A question for future research is whether CBA can deal with such emergencies. Indeed, the optimal mitigation scheme cannot be determined ex ante, as it requires ex post data such as which plant is impacted, or the weather and its effect on the path of the radioactive materials dispersed in the atmosphere.

5 Conclusions

Regarding the estimation of the probabilities of nuclear accidents, two directions of research could be worthwhile. First, more research seems necessary to gain better knowledge of the uncertainties related to these probabilities. This includes the propagation of uncertainties in PSA event trees, the combination of observed frequencies with PSAs, and the use of new probability axiomatics such as imprecise probability theory. Second, more research is needed on methodologies that could help law-makers make decisions based both on probabilities as perceived by individuals and on probabilities as calculated by experts.

This paper also raises two research questions regarding the cost of nuclear damage. The first is whether assessing the cost of nuclear accidents using figures derived from past events is a robust method. As it fails to account for safety enhancements, progress in mitigation technologies, and learning from past catastrophes, it can drive cost assessments upwards, provide pessimistic numbers, and entail overinvestment in safety or an unbalanced electricity technology mix. The second question is whether cost assessments should focus only on ex ante policy making and ex post compensations. We believe that cost assessments should also be used to improve mitigation policies.