Water Resources Management, Volume 32, Issue 2, pp 511–525

Of Dreamliners and Drinking Water: Developing Risk Regulation and a Safety Culture for Direct Potable Reuse

  • Christian Binz
  • Noosha Bronte Razavian
  • Michael Kiparsky

Abstract

Direct potable water reuse (DPR), the injection of highly purified wastewater into drinking water systems, is among the newest, and most controversial, methods for augmenting water supplies. DPR is garnering increasing interest, but does not come without risks. This paper examines the notion that emerging regulation of DPR may lack sufficient attention to a particular class of risks: catastrophic risks with low probabilities of occurrence, but high consequences. It may be instructive for proponents of DPR that such consequences have materialized in other industries, with damage to human welfare and to the industries themselves. We develop brief histories of risk regulation from the aviation, offshore oil, and nuclear industries, drawing out relevant lessons for the emerging DPR field. We argue that proponents of DPR could benefit from proactively developing a safety culture in DPR utilities and establishing an effective industry-wide auditing organization that investigates unanticipated system failures. Developing independent oversight for DPR operation could ensure that stringent quality and management requirements are set and enforced, and that any system failures or “near misses” are investigated and adequately responded to.

Keywords

Water reuse · Water recycling · Safety culture · Wastewater · Water treatment · Drinking water · Urban water · Innovation

1 Introduction

As growing populations, climate change, and infrastructure deterioration challenge water systems around the world, urban areas seek new ways to augment their water supplies. Potable water reuse is one such option that has recently received significant attention (National Research Council 2012). Direct potable water reuse (DPR), in which highly purified wastewater is introduced directly into a drinking water system without any natural buffer, is among the newest, and most controversial, potable reuse methods.

Overall, the idea of purifying wastewater and returning it to taps is not new (Leverenz et al. 2011; WateReuse Association 2015). Extensive (indirect) potable reuse systems are already planned or in operation in Namibia, Singapore, California, Texas, Arizona, and Florida (Gerrity et al. 2013; National Research Council 2012; Ormerod 2016; Tchobanoglous et al. 2011). Since the late 1960s, water utilities in Southern California have successfully operated systems that recharge groundwater aquifers with purified wastewater (Harris-Lovett and Sedlak 2015). Texas has some of the first direct potable reuse systems in operation, in Big Spring and Wichita Falls (WateReuse Association 2015), and California is evaluating whether and how to regulate the new practice at the state level (SWRCB 2016). Still, despite the imminent diffusion of potable reuse schemes, important questions remain about the feasibility, sustainability, and safety of this new technology.

New technologies, particularly ones with obvious connections to public health, often come with risks, some of which can be difficult to characterize. History is replete with new technologies that failed catastrophically, defying the imaginations of the best engineers of their time: the Titanic and the Hindenburg provide dramatic examples, but similar events recur, such as Fukushima and the Deepwater Horizon oil spill. Often a chain of small failures, combined with unanticipated conditions ("unknown unknowns") and inadequate organizational and human responses, leads to the failure of systems considered 'fail-safe' (Elahi 2011; Leveson 2004; Parsons 2007; Pawson et al. 2011). Potable reuse technologies, and DPR in particular, face similar risks. As DPR begins to diffuse, important questions arise about how to manage both well-understood and still unknown risks facing communities that seek to employ DPR.

Recently, regulators, consultants, advocacy groups, and academics have made remarkable progress in developing protocols and draft regulations that anticipate some key health risks of DPR technology (Crook 2010; NWRI 2013; Tchobanoglous et al. 2011; WateReuse Association 2014; WateReuse Association 2015). In this paper, we argue that these recent research and regulatory efforts around DPR have missed a key risk-related point that may endanger the technology's future safety and legitimacy. Lessons from other industries with similar risk profiles indicate that the proponents of DPR technologies may be underprepared for a specific type of 'low probability, high consequence' system failure. The public health consequences of a catastrophic failure in DPR systems thus warrant more careful risk management strategies that explicitly account for risks arising from the social component of complex technologies like DPR.

The paper is organized as follows. We first discuss conceptual differences between two classes of risks relevant to potable reuse, and assess how the industry is currently addressing them. We then develop stylized histories of risk regulation from the aviation, offshore oil, and nuclear industries, drawing out relevant lessons for DPR. We close by arguing that proponents of DPR could benefit from proactively developing a safety culture in DPR utilities and establishing an industry-wide auditing and emergency response organization. Developing independent oversight for DPR operations, within a government agency or an industry-driven voluntary organization, could ensure that stringent quality and management requirements are implemented and enforced, and that any system failures or “near misses” are investigated and responded to adequately.

2 Two Types of Risks for Complex Engineered Systems

Risk can be defined as the product of the probability of an event occurring and the consequences of that event (Kaplan and Garrick 1981). However, risk manifests in very different ways depending on the nature of these probabilities and consequences. One important conceptual distinction exists between risks with 'High Probability of occurrence but Low Consequences' (HPLC risks, e.g., flight delays) and 'Low Probability risks with High Consequences' (LPHC risks, e.g., airplane crashes) (Luxhoj and Coit 2006; Waller and Covello 1984). LPHC risks in complex technological systems include catastrophic, large-scale events that create considerable negative externalities and resist market-like solutions such as compensation for risk (Camerer and Kunreuther 1989). LPHC risks are more difficult to perceive accurately (Slovic 1987), let alone manage effectively (Camerer and Kunreuther 1989; Taleb 2007). Also, the relevance and magnitude of LPHC events are often misperceived by key stakeholders, regulators, and individuals (Kahneman and Tversky 1979; McClelland et al. 1990; Slovic 1987). Given their low probability and the challenge of formally integrating them into management procedures and regulations, LPHC risks are often only implicitly considered, if considered at all (March and Shapira 1987). For newly emerging technologies like DPR, the lack of long-term performance data exacerbates this lacuna.
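To make the distinction concrete, consider a stylized numerical sketch of the definition above (all numbers invented for illustration): an HPLC hazard and an LPHC hazard can carry identical expected risk while demanding very different management responses.

```python
# Stylized illustration of the definition R = p * c (Kaplan and Garrick 1981).
# Both hazards below carry the same expected risk, yet the LPHC event's
# consequences are catastrophic and cannot be absorbed as routine losses.
hazards = {
    "HPLC (e.g., routine quality excursion)": (1e-1, 1e2),
    "LPHC (e.g., catastrophic contamination)": (1e-5, 1e6),
}

for name, (p, c) in hazards.items():
    print(f"{name}: p={p:.0e}, c={c:.0e}, expected risk R={p * c:g}")
```

Both hazards yield R = 10 in these arbitrary units, which is precisely why expected-value reasoning alone understates the distinct institutional challenge posed by LPHC events.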

HPLC risks, in turn, are familiar in the water sector, and arguably well managed. For example, the U.S. Safe Drinking Water Act requires the U.S. EPA to establish health-based standards for chemicals in drinking water and to continuously update its list of unregulated contaminants (SDWA 1974). Standards for particular chemicals are based on public health assessments of potential adverse health effects and the prevalence of the chemical in water systems, established as part of a "multiple barrier" approach to drinking water protection that takes into account cost and available treatment technology (US EPA 2004). Epidemiological studies of long-term exposure to low levels of contaminants are a classic example of HPLC risk assessment. This approach is effective for addressing the most commonly considered and anticipated health risks in the water sector, and is the foundation for much of public health management and regulation more generally.
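As a rough illustration of this routine, HPLC-style compliance logic, the sketch below compares sampled concentrations of a contaminant against a maximum contaminant level (MCL). All values are invented; actual compliance rules and monitoring schedules vary by contaminant.

```python
# Minimal sketch of HPLC-style compliance screening: compare sampled
# concentrations of a regulated contaminant against a hypothetical MCL.
samples_ug_per_L = [2.1, 1.8, 2.4, 2.0]  # quarterly samples, micrograms per liter
mcl_ug_per_L = 5.0                       # invented maximum contaminant level

running_average = sum(samples_ug_per_L) / len(samples_ug_per_L)
status = "compliant" if running_average <= mcl_ug_per_L else "VIOLATION"
print(f"Running annual average: {running_average:.2f} ug/L ({status})")
```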

The situation is quite different for LPHC risks. Conventional water quality regulations do not fully capture the LPHC risks of acute failures in complex systems like DPR. In California, for example, the proponents of DPR suggested mitigating LPHC risks through redundant technological safety systems, extensive quality monitoring procedures, and operator certification requirements (SWRCB 2016; WateReuse Association 2015). The 'DPR framework report' mentions 'Hazard Analysis and Critical Control Point' (HACCP) systems, fully automated 'Supervisory Control and Data Acquisition' (SCADA) systems, and early warning systems (EWS) as the most important technical means of preventing unanticipated system failures. In addition, the document discusses regulatory reforms, source control programs, monitoring of contaminants in DPR plant effluent, and coordinated communications and outreach activities as key elements of a comprehensive risk mitigation strategy (WateReuse Association 2015).
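To illustrate the HACCP/SCADA logic the framework report invokes, the sketch below shows how an automated control layer might evaluate critical control points and divert water on a failure. This is not taken from the report; all control points, limits, and function names are invented for illustration, and real limits are plant- and permit-specific.

```python
from dataclasses import dataclass

@dataclass
class ControlPoint:
    """One HACCP-style critical control point with an alarm limit (illustrative)."""
    name: str
    limit: float           # critical limit, in the sensor's units
    higher_is_worse: bool  # True if readings above the limit indicate failure

# Hypothetical control points with invented limits.
CONTROL_POINTS = [
    ControlPoint("RO permeate conductivity (uS/cm)", 50.0, True),
    ControlPoint("UV dose (mJ/cm2)", 800.0, False),
    ControlPoint("Chlorine residual (mg/L)", 0.5, False),
]

def scada_cycle(readings: dict) -> str:
    """One monitoring cycle: divert water to waste on any critical failure."""
    failures = []
    for cp in CONTROL_POINTS:
        value = readings[cp.name]
        out_of_spec = value > cp.limit if cp.higher_is_worse else value < cp.limit
        if out_of_spec:
            failures.append((cp.name, value))
    if failures:
        # In a real plant this would actuate diversion valves and page operators.
        return f"DIVERT to waste: {failures}"
    return "PASS: water released to distribution"

print(scada_cycle({"RO permeate conductivity (uS/cm)": 42.0,
                   "UV dose (mJ/cm2)": 900.0,
                   "Chlorine residual (mg/L)": 0.6}))
```

The point of such automation is precisely its limit: the sketch catches sensor excursions, but not a muted alarm, a miscalibrated probe, or an operator who overrides the diversion, which is where safety culture enters.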

The question of whether these measures are sufficient to avoid all LPHC risks, and in particular human operational errors, has become something of an Achilles heel for the DPR industry. For example, a terrorist or an inattentive employee allowing pathogens or toxic chemicals to bypass the purification process could threaten the health of millions of residents, despite the existence of sophisticated treatment technology, process control, and regulation. Indeed, many studies show that the most important factor in the avoidance of LPHC risks "is management commitment to safety and the basic safety culture in the organization or industry" (Leveson 2004: 240). Operation and maintenance, utility management, leadership, cultural factors, and wider institutional environments all influence the occurrence of LPHC events and thus generate particular forms of uncertainty for complex technical systems. In many cases of catastrophic technology failure, the technology and its inherent safety systems functioned without issue, while the interfaces with human actors failed in unpredictable ways (Leveson 2004). Emblematic examples include the accidents at Bhopal and Three Mile Island (Kahn 2007) and the crash of Air France flight 447 (BEA 2012).

In the water sector, the 1993 Cryptosporidium outbreak in Milwaukee, WI, USA illustrates how LPHC events can occur even when water utilities adhere to conventional regulatory standards. From 1992 to 1993, hundreds of Milwaukee residents called the utility to voice concern over their discolored water and its odor (Behm 2013). However, the failure of treatment plant operators to properly monitor water quality and respond to user feedback permitted Cryptosporidium to pass through the filtration systems, which eventually caused over 400,000 people to fall ill (Behm 2013; Mac Kenzie et al. 1994).

A key challenge for new technologies with health risks like DPR is thus to develop a comprehensive risk management system that explicitly includes LPHC risks. This task can seem daunting because of the complex social context for new technologies and “unknown unknowns,” which are, by definition, impossible to evaluate upfront (Elahi 2011; Parsons 2007; Pawson et al. 2011). A key insight from safety science is that while LPHC risk can never be fully avoided, the establishment of a safety culture in organizations and robust response mechanisms can provide an important safety net (Leveson 2004; Roughton and Mercurio 2002). Given the still underexplored uncertainties and risks of DPR systems, having an effective safety net for LPHC risks could be particularly important.

3 Risk in Potable Reuse Systems

DPR systems pose a novel public health risk because they directly link treated wastewater streams with the drinking water supply without an environmental buffer within which a pulse of contamination could be attenuated. In concept, three broad classes of risks result from this interconnection. The first are conventional HPLC risks that are closely related to the public health risks faced by any treated drinking water system. Source water contains pathogens and chemical contaminants. However, many of the most common water-related pathogens and contaminants have been well researched, and adequate treatment and purification technologies exist such that wastewater can be treated to a high standard (National Research Council 2012). From a technical perspective, conventional HPLC-related health risks of DPR systems are relatively manageable, since advanced treatment systems can reliably eliminate pathogens consistent with public health guidelines.
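A back-of-the-envelope sketch illustrates this logic: log-removal credits are summed across treatment barriers and compared against pathogen reduction targets. The 12/10/10-log targets below echo figures discussed in California's draft criteria (SWRCB 2016), but the per-barrier credits are illustrative assumptions only, not regulatory values.

```python
# Illustrative only: log-removal credits for hypothetical treatment barriers.
# Credits sum across independent barriers; 12-log means a 10^12-fold reduction.
barriers = {
    "membrane filtration":    {"virus": 0.0, "giardia": 4.0, "crypto": 4.0},
    "reverse osmosis":        {"virus": 2.0, "giardia": 2.0, "crypto": 2.0},
    "UV advanced oxidation":  {"virus": 6.0, "giardia": 6.0, "crypto": 6.0},
    "chlorine disinfection":  {"virus": 4.0, "giardia": 0.0, "crypto": 0.0},
}

# Targets in the spirit of California's draft potable reuse criteria.
targets = {"virus": 12.0, "giardia": 10.0, "crypto": 10.0}

for pathogen, target in targets.items():
    total = sum(credits[pathogen] for credits in barriers.values())
    status = "meets" if total >= target else "SHORT OF"
    print(f"{pathogen}: {total:.0f}-log achieved, {status} {target:.0f}-log target")
```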

A second class of risk pertains to emerging contaminants, those which may be present in source water and may pose risks to human health, but which have not yet been systematically researched and regulated. Examples include pharmaceuticals and personal care products, surfactants, industrial additives, and chemicals purported to be endocrine disrupters (Bolong et al. 2009). In particular, some chemicals are not metabolized, and reach wastewater treatment plants through sewers. The occurrence of such contaminants is not unique to DPR. De facto reuse of drinking water occurs in many water supply systems, since most drinking water intakes are localized in places that are hydrologically connected to effluent from upstream wastewater treatment plants (Rice et al. 2013). However, the magnitude of reuse tends to be low because of dilution in natural water bodies (Rice et al. 2013). To the extent that DPR results in higher concentrations or greater occurrence of emerging contaminants, managers of these projects may face greater challenges for removal, or may face regulatory risks where future regulations result in new treatment needs (Crook 2010).

A third class of risks involves LPHC risks such as a large-scale pathogen release or an intentional spill of dangerous chemicals into a drinking water system. Such events could have more acute and severe health impacts, although the nature of treatment and distribution systems makes the specter of a city being served undiluted raw sewage quite improbable. Nevertheless, a distinguishing characteristic of such LPHC events is that they carry risk not only to public health, but also to the industry as a whole. A major system failure in a DPR plant could have extensive negative spillover effects including an irreversible loss of public trust in the technology. The core meltdown in the Three Mile Island nuclear reactor in 1979 effectively stalled the development of the U.S. nuclear industry for more than 30 years (National Commission 2011). Unexpected seismic disturbances similarly stopped the exploration of ‘hot rock’ geothermal energy sources in Switzerland and Germany for almost a decade (Dowd et al. 2011). Particularly in the emergent phase of a new industry, a catastrophic system failure may delegitimize a technology for an extended period of time (Harris-Lovett et al. 2015).

DPR is subject to similar LPHC risks, related to the emergent state of the industry and to the technology's complex interfaces with human actors, which do not exist in conventional drinking water and wastewater treatment operations. Safety science argues that these types of LPHC risks can be mitigated by developing an industry-wide 'safety culture.' Safety culture is defined as "the attitudes, beliefs, perceptions, and values that employees share in relation to safety" (Cox and Cox 1991). It is a complex construct that depends on the broad institutional context of an industry as well as the concrete cultural traditions of specific organizations (for a comprehensive discussion see Guldenmund 2000). As such, it requires interventions at the level of specific utilities as well as in state and federal regulations, organizational reforms, and industry-wide standardization procedures.

Recent framework documents and draft regulations for DPR (mostly from California, which is furthest along in developing regulations) contain elements pertinent to the creation of a safety culture, but also show key gaps in preparing for human-induced system failures. The State of California recently enacted legislation creating a pathway towards the regulation of DPR (Cal. Water Code §§ 13560–13570), mandating two expert reports, both delivered in December 2016. The advisory group report (SWRCB 2016) contains recommendations that explicitly focus on the human dimension of this complex technology. For example, the report recommends that Advanced Water Treatment Facility operators obtain special training and certification. This certification would include training on wastewater treatment and advanced water treatment, public health aspects of DPR, emergency response procedures, and drinking water regulations. Certification, developed in conjunction with industry associations, would be administered by the State Water Resources Control Board (SWRCB). While these recommendations are clearly related to elements of safety culture, the report does not seek to codify broader recommendations on the establishment of an industry-wide safety culture.

In the next section we argue that this is a key shortcoming. The experience of other industries with LPHC risk profiles similar to DPR's shows that additional operator training and certification, together with a continuation of the regulatory tradition set by the Safe Drinking Water Act, will not sufficiently guard DPR activities against LPHC risks. Rather, more comprehensive risk management systems should be established that address elements of safety culture at multiple levels.

4 Risk Regulation in the Aviation, Offshore Oil, and Nuclear Industries

While some water infrastructure itself faces LPHC risks (e.g., dam failures or pathogen outbreaks), we focus on examples from the aviation, offshore oil, and nuclear industries, in part to enable cross-sector learning. More importantly, these industries employ complex engineered systems with technology-user interfaces that induce significant human-induced LPHC risks similar to DPR (National Commission 2011). Yet, in contrast to DPR, they have each experienced catastrophic system failures rooted in human (rather than technological) error, or in problems with the interaction of human and technological systems. Regulators in these industries have learned the hard way that, although catastrophic system failures can never be completely ruled out, their probability and impacts can be significantly reduced by developing a comprehensive and consistent safety-related institutional framework around complex technologies.

4.1 LPHC Risk Mitigation in Aviation

The aviation industry provides an illustrative case of a successful sector-wide safety and quality culture, which significantly reduced the incidence of fatal accidents (Allianz 2014). While regulation and independent oversight by the U.S. Federal Aviation Administration (FAA) were crucial to improving safety standards from the industry's early days, we focus here on a particular organizational reform in the U.S. that played an important complementary role in identifying and mitigating LPHC risks.

On December 20th, 1995, American Airlines Flight 965 crashed near Cali, Colombia, killing 159 of the 163 people on board. The crash was attributed to a sequence of miscommunications between crewmembers and air-traffic control (Pronovost et al. 2009). The accident brought to light the limitations of the existing risk governance system, and prompted the voluntary formation of a partnership aimed at reducing fatality risks. This partnership, the Commercial Aviation Safety Team (CAST), includes governmental agencies such as the FAA and the Department of Defense (DoD), all major aircraft manufacturers, airline and pilot unions, and various international aviation safety departments and NGOs (CAST 2017). Its mission is to enable a rigorous, data-driven, continuous improvement framework to mitigate accident risk in aviation. It has framed its goals in terms of safety outcomes, specifically a targeted 80% reduction in the commercial aviation fatality rate over ten years.

Representatives from government and industry serve as CAST’s co-chairs, while the group’s executive committee consists of senior officials from each member organization who have the authority to commit their organizations to specific risk-mitigating interventions (Pronovost et al. 2009). The co-chairs lead a group of senior safety officials from CAST organizations, which meets frequently and manages several teams that carry out parts of CAST’s analyses (CAST 2017). CAST is funded solely by its members’ voluntary donations of expertise, time, and travel costs (Pronovost et al. 2009).

CAST focuses effort on data collection and analysis, which underlie its recommendations. It works to identify, analyze, and minimize risks from LPHC events, such as controlled flight into terrain, extreme weather conditions, in-flight loss of control, and landing accidents. CAST’s subdivisions rely on stakeholders within the aviation industry to voluntarily collect and share data (Angers 2009). Specialized teams review data and investigation reports from accidents and reported incidents and try to identify risk-related similarities between them.

CAST began by analyzing data from over 500 accidents and thousands of safety mishaps from around the world (Angers 2009), eventually focusing on six accident categories. Specialized teams analyze each accident and calculate the probability of a similar accident occurring in the future (Pham et al. 2010). Next, CAST develops possible solutions to minimize future occurrences. The effectiveness of proposed interventions is subsequently evaluated, and proposed safety enhancements are compiled into a comprehensive Safety Plan (Pronovost et al. 2009). Lastly, CAST distributes its recommendations and the estimated costs of implementation to regulators and member organizations.
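The sketch below illustrates this kind of data-driven prioritization in the abstract: rank candidate interventions by expected fatalities averted per dollar spent. It is not CAST's actual model; all categories, probabilities, reduction fractions, and costs are invented for illustration.

```python
# Invented numbers throughout; the structure, not the values, is the point.
categories = [
    # (accident category, annual probability, expected fatalities per event)
    ("controlled flight into terrain", 1e-3, 150),
    ("in-flight loss of control",      5e-4, 180),
    ("landing accident",               5e-3, 10),
]

interventions = {
    # name: (target category, fraction of risk removed, cost in $M per year)
    "terrain awareness warning system": ("controlled flight into terrain", 0.90, 30.0),
    "upset recovery training":          ("in-flight loss of control",      0.50, 12.0),
    "stabilized approach procedures":   ("landing accident",               0.60, 3.0),
}

# Expected annual fatalities per category.
risk = {name: p * fatalities for name, p, fatalities in categories}

# Rank interventions by expected fatalities averted per $M spent per year.
ranked = sorted(
    ((risk[cat] * reduction / cost, name)
     for name, (cat, reduction, cost) in interventions.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: {score:.4f} expected fatalities averted per $M-year")
```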

CAST's safety enhancements are voluntary. There is no guarantee that members of the industry will actually adopt CAST's recommendations. However, not complying exposes an organization to a higher risk of fatal incidents and a loss of reputation, so most members comply; between 1998 and 2009, CAST developed more than seventy safety projects, most of which were implemented in the USA without regulatory action (Pronovost et al. 2009).

Available data suggest that CAST has been effective in mitigating LPHC risks. According to its own (optimistic) calculations, its safety improvements decreased commercial aviation fatality rates by 83% in its first ten years. Others compute lower numbers (FAA 2010), but an overall trend toward increased safety is clearly observable in aviation, particularly given growing passenger numbers (Allianz 2014). CAST estimates that its initial safety enhancements save the industry around $600 million a year, mostly in the form of avoided costs such as accidents, devalued stock prices, insurance fees, and legal costs.

The case of CAST thus illustrates how government agencies and industry can work together to address societal and technical risk factors around a complex technological system. Other regions around the world followed CAST’s model and established similar organizations in Europe and Asia (Angers 2009).

4.2 Risk Mitigation in the Offshore Oil Industry

The offshore oil industry experienced a high-profile failure in 2010, and the reforms following this catastrophe are similarly instructive. On April 20, 2010, the Deepwater Horizon, an offshore drilling rig leased by British Petroleum (BP), experienced a blowout that led to the largest oil spill in U.S. history, with immediate and long-term impacts on coastal communities and marine life (CSB 2016; National Commission 2011). Investigation of the accident uncovered grave shortcomings in the oversight and regulation of offshore oil rigs as well as systemic deficiencies in safety procedures for BP and its partners. Weak risk governance in the industry was a key enabling factor for the spill (CSB 2016; DHSG 2011; National Commission 2011). Further, the United States Mineral Management Service (MMS) was accused of corruption and ineffective enforcement of safety standards (National Commission 2011).

Until 2010, offshore oil production was ostensibly regulated by the MMS in a traditional single-agency oversight model. In the wake of the spill, the MMS was divided into three divisions: the Office of Natural Resources Revenue (ONRR), the Bureau of Ocean Energy Management (BOEM), and the Bureau of Safety and Environmental Enforcement (BSEE) (Theriot 2014). Dividing the agency was intended to reduce conflicts of interest arising from the regulatory oversight of an industry that simultaneously provided funding for the agency. Each division received a more focused and compartmentalized role (Theriot 2014): BSEE became responsible for overseeing and auditing the industry (BSEE 2016), ONRR for revenues associated with federal offshore and onshore mineral leases, and BOEM for the safe and sustainable development of U.S. Outer Continental Shelf energy and mineral resources.

Another major transition was from a voluntary safety management scheme to a mandatory one (National Commission 2011; Theriot 2014). Prior to the accident, the MMS and the American Petroleum Institute (API) had tried several times to establish an effective safety system for offshore oil rigs. Both the oil and gas industry and Congress had opposed these proposals, and regulations were implemented only as voluntary, recommended safety practices (National Commission 2011). After the accident, a mandatory Safety & Environmental Management System (SEMS) was developed and implemented. SEMS became integrated into offshore drilling operations, providing a way to hold operators accountable for the safety of an offshore oil plant (Theriot 2014). SEMS was structured to include the following elements (a schematic sketch of such a checklist follows the list):
  • Standard operating procedures, safety procedures and training, guidelines for hiring contractors, regular equipment checks, emergency response procedures, recordkeeping, and investigations of any incidents;

  • Periodic independent audits of facilities with prior notification to BSEE;

  • Submission of annual safety and environmental data and any records of spills.
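Schematically, a SEMS audit can be thought of as a structured checklist evaluated against an operator's records. The sketch below paraphrases the elements above; the field names and the simple pass/fail rule are invented for illustration, not drawn from the regulation.

```python
# Schematic sketch of a SEMS-style audit checklist (items paraphrased from
# the list above; field names and scoring rule are illustrative assumptions).
sems_checklist = {
    "standard operating procedures": True,
    "safety procedures and training": True,
    "contractor hiring guidelines": True,
    "regular equipment checks": False,  # e.g., an overdue inspection
    "emergency response procedures": True,
    "recordkeeping and incident investigations": True,
    "periodic independent audit notified to BSEE": True,
    "annual safety and environmental data submitted": True,
}

gaps = [item for item, ok in sems_checklist.items() if not ok]
if gaps:
    print("Non-compliant; enforcement action possible for:", ", ".join(gaps))
else:
    print("All SEMS elements in place.")
```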

BSEE also audits a company’s SEMS and enforces standards with penalties or by prohibiting continued operations. Additionally, BOEM created more certification requirements for operators, as well as permit requirements with stricter regulations on drilling processes (Theriot 2014).

For offshore oil, strict top-down regulation was chosen as a key mechanism for avoiding future LPHC accidents. While accidents have not been eradicated, lessons are being learned and information is being shared across the industry to help improve safety outcomes (Ray 2014). Notably, one of the crucial recommendations of the post-accident investigations was careful attention to ensuring that high-level safety culture filters down to the operational level (CSB 2016; DHSG 2011). BSEE also issued its first-ever Safety Culture Policy Statement following the accident (BSEE 2013).

Still, concerns remain as to whether companies will actually follow these procedures (CSB 2016; Theriot 2014). Open issues remain about coordination between oil companies and drilling subcontractors, and about the establishment of a meaningful safety culture in offshore operations and the oil industry more broadly. For example, both BP and Transocean knew about the dangers of borehole 'kicks' long before the Deepwater Horizon accident, but did not adapt their safety and detection procedures (CSB 2016). Indeed, after Deepwater Horizon, other failures occurred in the Gulf of Mexico, and again a lack of safety culture was a root cause (Meshkati et al. 2015). While some authors question whether the restructuring and relabeling of the MMS was a deep enough reform to provide more stringent enforcement of the relevant regulations (CSB 2016), the case illustrates the impact that LPHC events can have at the scale of an industry, and supports the notion that a lack of safety culture can contribute to risk.

4.3 LPHC Risk Mitigation in the Nuclear Industry

Major incidents in the nuclear industry provide additional key insights into the importance of LPHC risk mitigation. On March 28, 1979, a partial meltdown at the Three Mile Island (TMI) Unit 2 reactor in Pennsylvania led to the worst accident in U.S. nuclear power plant history. Inadequate operator training, weak regulatory oversight, and poor communication between the government, regulatory bodies, and plant manufacturers were key contributors to the incident (Kemeny 1979). Since then, increased regulatory activity and the parallel emergence of an industry safety organization have resulted in improved inspections and safety.

The accident prompted an overhaul of the Nuclear Regulatory Commission (NRC) and triggered a long-term trend towards regulatory reform and a more cautionary approach (Sexton 2015). Changes included significant enhancements in power plant design and equipment requirements, geared towards reducing risks from fires and from failures of auxiliary feedwater and piping systems (Lach et al. 1994). Human performance and operator error were also addressed, with changes to operator training, staffing requirements, and emergency safety procedures. The NRC created "fitness-for-duty programs" for all employees with access to key areas of a plant, and plants were required to immediately notify the NRC of important changes in their safety conditions (NRC 2015).

The NRC created the NRC Operations Center, a central location for coordination and contact between the NRC, its licensees, and other agencies during nuclear operating incidents. The Operations Center is staffed 24/7 by employees who can assess incident reports and coordinate responses. Following the TMI incident, the NRC also mandated periodic safety drills involving state and local organizations, the Federal Emergency Management Agency (FEMA), and the NRC (Lach et al. 1994). Senior NRC managers now regularly inspect plants, and the NRC's resident inspector program was expanded, with inspectors stationed at or near plants. Additional equipment for radiation monitoring and accident preparedness was also mandated.

In addition to internal substantive and procedural reforms at the NRC, the industry also undertook voluntary reforms. In 1979, the nuclear power industry created the Institute of Nuclear Power Operations (INPO), a nonprofit organization that complements the NRC as the nuclear power industry's monitoring group. INPO rose to prominence in 1988 when its inspections of Philadelphia Electric's Peach Bottom nuclear plant revealed prevalent safety problems to the NRC, which subsequently shut down the plant.

INPO's board of directors is made up of senior executives in the nuclear power industry, while its inspection teams are often staffed by employees of other nuclear power plants (National Commission 2011). This peer review fosters the diffusion of knowledge among industry members (Rees 2009). INPO inspects nuclear sites every 24 months, and each inspection typically lasts five to six weeks, during which the inspection team analyzes existing data from the nuclear site, visits and inspects the site, and reviews and discusses its findings (National Commission 2011, p. 236). Inspectors examine the "consistency of operations, safety-system performance, and workers' collective radiation exposure" (National Commission 2011, p. 236). INPO's Plant Performance Assessments include an examination of each site's safety culture, the performance of operations, training procedures, and the plant's designs (National Commission 2011). Following INPO's assessment reports, plants are expected to respond with plans of action to address any deficiencies. INPO also assists underperforming plants by tracking their progress and arranging for additional assistance. INPO inspections are in addition to the inspections done by the NRC, nuclear insurers, and the Occupational Safety and Health Administration (OSHA). This inspection redundancy is a critical element of a safety culture.

Beyond the inspections, the industry created a useful venue for sharing its peer-review findings: INPO hosts a 'CEO Conference' with its industry members to discuss nuclear safety (National Commission 2011, p. 237). The INPO president privately meets with the 26 utility CEOs to present inspection results for each site (Rees 2009). The private meeting and the conference as a whole create a "high level of peer pressure" within the industry, but also allow cooperation between industry members to assist underperforming sites (National Commission 2011, p. 237).

Overall, INPO has supported the nuclear industry in better tracking its safety standards and in improving industry-wide operations such as plant and personnel performance, emergency response, training procedures, and radiation protection. As a result, the industry decreased radiation accident rates and the rate of automatic emergency reactor shutdowns, while improving overall plant efficiency (Rees 2009). Responses to more recent nuclear accidents have highlighted the need for a cautionary approach and safety culture, including an industry-wide recognition that effective regulation embraces the nexus between a strong regulator, a self-critical industry, and public transparency (OECD 2014).

5 Discussion: Lessons for DPR Regulation

Several key lessons from the cases discussed above are of direct relevance for DPR. First and foremost, in spite of undeniably significant LPHC risks, all three industries developed cultures of complacency in the absence of a major accident. As a result, reactivity is a common theme in the establishment of LPHC risk regulation systems in all three cases: structures to avoid or effectively respond to major failures were put in place only after a catastrophic event had deeply shaken public trust in the industry. As in today's DPR industry, technology proponents and regulators were initially confident in the technology's robustness or even 'fail-safeness.'

We argue that acting now to avoid the potential erosion of public trust in DPR from a catastrophic failure could pay dividends for a technology that has an important role to play in future urban water management. A unique window of opportunity currently exists for the proponents of DPR to develop LPHC risk mitigation systems based on lessons learned in other industries, and to do so before a major mishap occurs. Two strategies appeared most effective in all three case studies: 1) establishing and nurturing an industry-wide safety culture, and 2) creating an independent auditing and self-policing organization.

5.1 Establishing Industry-Wide Safety Culture

Lack of safety culture contributed to all the major accidents described above. Yet, an industry-wide safety culture is not easily established, as it concerns routine operations at utilities, while being influenced by multiple cultural factors that reach beyond the boundaries of a specific organization or even country (Leveson 2004; Pidgeon and O'Leary 2000). The examples from the aviation and nuclear industries suggest that a long-term adaptive process has to be established to capture and learn from minor errors and deviations from standard procedures. DPR utilities could register and report such deviations on a voluntary (and non-penalized) basis, or regulators could require reporting and data sharing. Regulatory interventions must allow for stringent oversight, but also enable adaptive flexibility so the industry can continually improve safety standards (CSB 2016).

To date, the DPR industry has begun to address safety culture through a mandatory certification and training system for utilities operating potable reuse plants (SWRCB 2016). This intervention is a promising first step, but arguably not sufficient to create and nurture an industry-wide safety culture in the mid to long term. Safety science and the incidents discussed above show that even plant operators with highly standardized training and certification may bend best practices, or even partly ignore them, during routine operations (Leveson 2004). Additional interventions and organizational reforms are thus needed to ensure that safety is put first in daily operations. The DPR industry could consider implementing a critical incident reporting system such that 'near misses' are reliably reported to utility leadership and shared among all industry members to allow for collective learning; a minimal sketch of such a reporting record follows below. This involves both top-down action and cultural change to de-stigmatize such reporting (Mahajan 2010). Utility leadership could be trained in establishing 'high-reliability organizations' (Laporte and Consolini 1991; Reason 2000), which emphasize safety in all of their organizational procedures. Finally, and arguably most importantly, such interventions could be combined with an industry-wide self-policing and knowledge transfer organization like INPO or CAST to formalize a mechanism for continuous safety improvements.
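As a minimal sketch of what such a near-miss record might look like, with de-identification before industry-wide sharing, consider the following. All field names are invented, loosely modeled on aviation-style reporting systems rather than any existing DPR scheme.

```python
# Minimal sketch of a non-punitive near-miss report for DPR utilities.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class NearMissReport:
    event_date: date
    process_stage: str       # e.g., "reverse osmosis", "UV/AOP", "distribution"
    description: str         # free-text narrative of what almost went wrong
    barrier_that_caught_it: str
    contributing_factors: list

def anonymize(report: NearMissReport) -> dict:
    """Strip utility-identifying detail before industry-wide sharing."""
    shared = asdict(report)
    shared["event_date"] = report.event_date.strftime("%Y-%m")  # coarsen the date
    return shared

r = NearMissReport(
    event_date=date(2017, 3, 14),
    process_stage="reverse osmosis",
    description="Conductivity spike not flagged because the alarm was muted",
    barrier_that_caught_it="downstream UV dose monitor",
    contributing_factors=["alarm fatigue", "shift handover gap"],
)
print(anonymize(r))
```

The design choice worth noting is the separation between the full internal record and the coarsened shared record: collective learning requires wide distribution, while de-stigmatized reporting requires that reports not expose individual utilities or operators.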

5.2 Creating an Independent Auditing Organization

Independent oversight matters for sustaining compliance with basic safety standards. A number of governance forms that have emerged in other sectors could be adapted to DPR; important elements include independence, transparency, oversight capacity, and accountability. For DPR, one challenge would be achieving economies of scale: with few installations in operation, a national independent auditing organization of this kind would be costly relative to the industry it oversees.

Independence

In our case studies, even where regulatory oversight existed before an accident, it was later deemed inadequate. Institutional reforms after an accident have usually resulted in greater organizational specialization and independence for existing oversight agencies. For example, after Three Mile Island, the NRC created specific offices specializing in operations and emergency preparedness. Similarly, after Deepwater Horizon, the MMS was split into three separate entities with distinct responsibilities, crucially making the safety and environmental enforcement functions independent from leasing, revenue collection, and permitting (National Commission 2011). In the water industry, auditing and oversight are now concentrated in state agencies (in the case of California, the Department of Public Health and the SWRCB). Financial resources and technical expertise in these agencies may be too limited to oversee complex wastewater purification operations. In California, NGOs in the potable reuse community currently provide resources and expertise on a voluntary basis to support these regulatory agencies in formulating provisional potable reuse standards, in a way that is not always free of conflicts of interest (Harris-Lovett et al. 2015).

For DPR, one option to implement a more independent auditing organization would be to develop an expert sub-group within the State Water Resources Control Board solely to audit and monitor potable reuse systems. Given budget constraints in the public sector, resources could be difficult to obtain. An alternative would be implementing a self-policing organization that generates information to assist agency decision-making, drawing lessons from INPO and CAST to ensure independence and transparency.

Governance Structure

Voluntary and collective structures like CAST or INPO can work where the interests of individual contributors align with the industry's common goal of increasing collective safety. Effectiveness relies on the perception of direct benefits, and on maintaining a critical mass of industry participation. In an organizational design like INPO's, avoiding the embarrassment of non-compliance helps drive participation. Both the aviation and nuclear cases show that a purely mandatory regulatory risk mitigation system can be less effective than one that includes voluntary elements (Kemeny 1979). It has yet to be shown, though, whether the DPR industry perceives an alignment of interests sufficiently similar to that of the airline or nuclear industries to motivate a long-term voluntary incident reporting and knowledge sharing mechanism.

Capacity

A crucial element for oversight organizations is sufficient capacity to effectively perform the necessary functions (Kiparsky et al. 2016). Technical, legal, communication, financial, and management skills all matter, as does the ability to deploy them effectively to achieve the agency's mission. Naturally, developing these skills, either in-house or through voluntary resource and knowledge sharing, requires sufficient and reliable funding streams and access to the necessary expertise. Other industries have addressed capacity issues in various ways: INPO salaries are competitive with industry standards to attract technology experts to regulatory work, and CAST experts can be reimbursed for their safety consulting activities through flexible contracts. In contrast, water sector regulators are typically funded by state or federal governments. This guarantees a certain level of regulatory independence, but it means regulators must rely on ad-hoc networks and consulting projects for specialized expertise, sometimes struggling to attract and retain key talent within the agency.

Transparency and Accountability

While INPO shows that not all aspects of a self-auditing organization need to be publicly visible, transparency and accountability can increase effectiveness and avoid the perception of mismanagement. The organizations running DPR systems should, at a minimum, be subject to regular and rigorous independent auditing of technical, managerial, and institutional aspects.

6 Conclusions

For all of its promise as an early-stage water technology, DPR, like any other complex technological system, is not immune to LPHC risks. While the DPR industry has made admirable progress in developing draft regulations and technological health risk mitigation, it has thus far underestimated the LPHC risks stemming from the human, managerial, and institutional side of technology implementation. Doing so, and in particular doing so with reference to sophisticated and 'fail-safe' engineered solutions, echoes patterns and rhetoric that preceded catastrophic failures in other industries.

It is in the interest of the emerging DPR industry to avoid its own Fukushima event. A drinking water system mishap could have high "signal potential," and could easily set back public acceptance of a technology that is already struggling against consumers' psychological barriers (the "yuck factor"), a lack of broader societal legitimacy, and the industry's general challenges with innovation. The industry could proactively address LPHC risks upfront by working to develop effective plans for establishing utility safety cultures and effective oversight. Examples from the aviation, nuclear, and oil industries show that such interventions do not necessarily require new layers of regulation, but can be designed in efficient, participatory, and even voluntary ways.

While potable reuse may become an increasingly important part of water supplies in many regions, it is not yet viewed as an irreplaceable element of the urban water system. With public support still fragile, the industry may be particularly vulnerable to public opposition arising from a high-profile catastrophic failure. Avoiding, or preparing for, catastrophic failure is important for DPR’s credibility and mid- to long-term viability. We argue that creating safety-enabling systems is actually in the DPR industry’s self-interest. If an effective LPHC risk management system were established, technology proponents could convey the powerful message that “we safeguard your drinking water using the same methods that keep you safe when you fly on an airplane.” The positive effect of such a message on DPR’s social legitimacy could easily justify the effort necessary to be able to deliver it with confidence.

Acknowledgements

We thank Sasha Harris-Lovett and David Sedlak for useful conceptual discussions. John Bowie provided useful research assistance. We are grateful for funding from Eawag (C.B.), the Swiss National Science Foundation (Early Postdoc Mobility Grant P2BEP1_155474 to C.B.), and NSF Grant 28139880-50542-C to the ReNUWIt Engineering Research Center.

References

  1. Allianz (2014) Global Aviation Safety Study. Allianz Global Corporate & Specialty SE, Munich
  2. Angers S (2009) Safety in numbers - industry team recognized for improving aviation safety. http://www.boeing.com/news/frontiers/archive/2009/july/i_ca01.pdf. Accessed 19 July 2017
  3. BEA (2012) Final Report on the Accident on 1st June 2009 to the Airbus A330-203 Registered F-GZCP Operated by Air France Flight AF 447 Rio de Janeiro - Paris. Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, Le Bourget
  4. Behm D (2013) Milwaukee marks 20 years since cryptosporidium outbreak. http://archive.jsonline.com/news/milwaukee/milwaukee-marks-20-years-since-cryptosporidium-outbreak-099dio5-201783191.html. Accessed 08 August 2017
  5. Bolong N, Ismail A, Salim MR, Matsuura T (2009) A Review of the Effects of Emerging Contaminants in Wastewater and Options for their Removal. Desalination 239(1):229–246
  6. BSEE (2013) Safety Culture Policy Statement. U.S. Department of the Interior, 76 FR 34773
  7. BSEE (2016) Budget justifications and performance information fiscal year 2017. Bureau of Safety and Environmental Enforcement. https://www.doi.gov/sites/doi.gov/files/uploads/FY2017_BSEE_Budget_Justification.pdf. Accessed 08 August 2017
  8. Camerer CF, Kunreuther H (1989) Decision Processes for Low Probability Events: Policy Implications. J Policy Anal Manage 8(4):565–592
  9. CAST (2017) The Commercial Aviation Safety Team. http://www.cast-safety.org. Accessed 15 July 2017
  10. Cox S, Cox T (1991) The Structure of Employee Attitudes to Safety: A European Example. Work Stress 5(2):93–106
  11. Crook J (2010) Regulatory Aspects of Direct Potable Reuse in California. National Water Research Institute, Fountain Valley
  12. CSB (2016) Investigation report, executive summary: drilling rig explosion and fire at the Macondo well, 04/12/2016. U.S. Chemical Safety and Hazard Investigation Board, Report No. 2010-10-I-OS. http://www.csb.gov/assets/1/7/20160412_Macondo_Full_Exec_Summary.pdf. Accessed 08 August 2017
  13. DHSG (2011) Final report on the investigation of the Macondo well blowout. Deepwater Horizon Study Group, UC Berkeley, CA. http://ccrm.berkeley.edu/pdfs_papers/bea_pdfs/DHSGFinalReport-March2011-tag.pdf. Accessed 08 August 2017
  14. Dowd A, Boughen N, Ashworth P, Carr-Cornish S (2011) Geothermal Technology in Australia: Investigating Social Acceptance. Energ Policy 39(10):6301–6307. https://doi.org/10.1016/j.enpol.2011.07.029
  15. Elahi S (2011) Here be dragons… exploring the 'unknown unknowns'. Futures 43(2):196–201
  16. FAA (2010) U.S. general aviation accidents, fatalities, and rates, 1938–2010. Federal Aviation Administration, FAA U.S. Civil Airmen Statistics. https://www.aopa.org/about/general-aviation-statistics/general-aviation-safety-record-current-and-historic#gaaccidents. Accessed 08 August 2017
  17. Gerrity D, Pecson B, Shane Trussell R, Rhodes Trussell R (2013) Potable Reuse Treatment Trains Throughout the World. J Water Supply Res Technol 62(6):321–338
  18. Guldenmund FW (2000) The Nature of Safety Culture: A Review of Theory and Research. Saf Sci 34(1):215–257
  19. Harris-Lovett S, Sedlak D (2015) The History of Water Reuse in California. In: Lassiter A (ed) Sustainable Water - Challenges and Solutions from California. University of California Press, Oakland, pp 220–243
  20. Harris-Lovett S, Binz C, Sedlak D, Kiparsky M, Truffer B (2015) Beyond User Acceptance: A Legitimacy Framework for Potable Water Reuse in California. Environ Sci Technol 49(13):7552–7561
  21. Kahn ME (2007) Environmental Disasters as Risk Regulation Catalysts? The Role of Bhopal, Chernobyl, Exxon Valdez, Love Canal, and Three Mile Island in Shaping US Environmental Law. J Risk Uncertain 35(1):17–43
  22. Kahneman D, Tversky A (1979) Prospect Theory: An Analysis of Decision Under Risk. Econometrica 47(2):263–291
  23. Kaplan S, Garrick BJ (1981) On the Quantitative Definition of Risk. Risk Anal 1(1):11–27
  24. Kemeny JG (1979) Report of the President's Commission on the Accident at Three Mile Island. U.S. Government Printing Office, Washington, DC
  25. Kiparsky M, Owen D, Nylen NG, Cosens B, Doremus H, Fisher A, Christian-Smith J, Milman A (2016) Designing Effective Groundwater Sustainability Agencies: Criteria for Evaluation of Local Governance Options. University of California, Berkeley, Center for Law, Energy & the Environment, Berkeley
  26. Lach D, Bolton P, Durbin N, Harty R (1994) Lessons Learned from the Three Mile Island Unit 2 Advisory Panel. Nuclear Regulatory Commission, Washington, DC
  27. Laporte TR, Consolini PM (1991) Working in Practice but Not in Theory: Theoretical Challenges of "High-Reliability Organizations". J Public Adm Res Theory 1(1):19–48
  28. Leverenz HL, Tchobanoglous G, Asano T (2011) Direct Potable Reuse: A Future Imperative. J Water Reuse Desalin 1(1):2–10
  29. Leveson N (2004) A New Accident Model for Engineering Safer Systems. Saf Sci 42(4):237–270
  30. Luxhoj JT, Coit DW (2006) Modeling Low Probability/High Consequence Events: An Aviation Safety Risk Model. In: Annual Reliability and Maintainability Symposium, 2006. IEEE, pp 215–221
  31. Mac Kenzie WR, Hoxie NJ, Proctor ME, Gradus MS, Blair KA et al (1994) A Massive Outbreak in Milwaukee of Cryptosporidium Infection Transmitted through the Public Water Supply. N Engl J Med 331(3):161–167. https://doi.org/10.1056/NEJM199407213310304
  32. Mahajan R (2010) Critical Incident Reporting and Learning. Br J Anaesth 105(1):69–75
  33. March JG, Shapira Z (1987) Managerial Perspectives on Risk and Risk Taking. Manag Sci 33(11):1404–1418
  34. McClelland GH, Schulze WD, Hurd B (1990) The Effect of Risk Beliefs on Property Values: A Case Study of a Hazardous Waste Site. Risk Anal 10(4):485–497
  35. Meshkati N, Tabibzadeh M, Ashayei C (2015) Lessons (un)learned in the last 5 years in offshore oil industry since the BP Deepwater Horizon accident. The Huffington Post. http://www.huffingtonpost.com/najmedin-meshkati/lessons-unlearned-in-the-_b_7093818.html. Accessed 08 August 2017
  36. National Commission (2011) Deep Water - The Gulf Oil Disaster and the Future of Offshore Drilling. National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. http://www.iadc.org/archived-2014-osc-report/documents/DEEPWATER_ReporttothePresident_FINAL.pdf. Accessed 08 August 2017
  37. National Research Council (2012) Water Reuse: Potential for Expanding the Nation's Water Supply through Reuse of Municipal Wastewater. The National Academies Press, Washington, DC
  38. NRC (2015) Reactor Operator Licensing - Background. United States Nuclear Regulatory Commission Office of Public Affairs, Washington, DC
  39. NWRI (2013) Examining the Criteria for Direct Potable Reuse. WateReuse Research Foundation, Fountain Valley
  40. OECD (2014) The Characteristics of an Effective Nuclear Regulator. OECD Nuclear Energy Agency, NEA No. 7185, Issy-les-Moulineaux
  41. Ormerod KJ (2016) Illuminating Elimination: Public Perception and the Production of Potable Water Reuse. Wiley Interdiscip Rev Water 3(4):537–547
  42. Parsons VS (2007) Searching for "unknown unknowns". Eng Manag J 19(1):43–46
  43. Pawson R, Wong G, Owen L (2011) Known Knowns, Known Unknowns, Unknown Unknowns: The Predicament of Evidence-Based Policy. Am J Eval 32(4):518–546
  44. Pham JC, Kim GR, Natterman JP, Cover RM, Goeschel CA et al (2010) ReCASTing the RCA: An Improved Model for Performing Root Cause Analyses. Am J Med Qual 25(3):186–191
  45. Pidgeon N, O'Leary M (2000) Man-made Disasters: Why Technology and Organizations (Sometimes) Fail. Saf Sci 34(1–3):15–30. https://doi.org/10.1016/S0925-7535(00)00004-7
  46. Pronovost PJ, Goeschel CA, Olsen KL, Pham JC, Miller MR et al (2009) Reducing Health Care Hazards: Lessons from the Commercial Aviation Safety Team. Health Aff 28(3):479–489. https://doi.org/10.1377/hlthaff.28.3.w479
  47. Ray JR (2014) Offshore Safety and Environmental Regimes: A Post-Macondo Comparative Analysis of the United States and the United Kingdom. Mississippi College Law Review 33:11. https://doi.org/10.2139/ssrn.2370709
  48. Reason J (2000) Human Error: Models and Management. Br Med J 320(7237):768–770
  49. Rees JV (2009) Hostages of Each Other: The Transformation of Nuclear Safety Since Three Mile Island. University of Chicago Press, Chicago
  50. Rice J, Wutich A, Westerhoff P (2013) Assessment of De Facto Wastewater Reuse Across the US: Trends between 1980 and 2008. Environ Sci Technol 47(19):11099–11105
  51. Roughton J, Mercurio J (2002) Developing an Effective Safety Culture. Butterworth-Heinemann, Woburn
  52. Sexton KA (2015) Crisis, Criticism, Change: Regulatory Reform in the Wake of Nuclear Accidents. Nucl Law Bull 2:35–62
  53. SDWA (Safe Drinking Water Act) 42 U.S.C. § 300j-4(a)(2) (1974)
  54. Slovic P (1987) Perception of Risk. Science 236(4799):280–285
  55. SWRCB (2016) Investigation on the feasibility of developing uniform water recycling criteria for direct potable reuse - report to the legislature September 2016 - public review draft. State Water Resources Control Board, State of California. http://www.waterboards.ca.gov/drinking_water/certlic/drinkingwater/rw_dpr_criteria.shtml. Accessed 08 August 2017
  56. Taleb NN (2007) The Black Swan: The Impact of the Highly Improbable, 1st edn. The Random House Publishing Group, New York
  57. Tchobanoglous G, Leverenz H, Nellor M, Crook J (2011) Direct Potable Reuse - A Path Forward. WateReuse Research Foundation, Alexandria
  58. Theriot S (2014) Changing Direction: How Regulatory Agencies Have Responded to the Deepwater Horizon Oil Spill. LSU J Energy Law Res Currents 19
  59. US EPA (2004) Understanding the Safe Drinking Water Act. US EPA, Office of Water, EPA 816-F-04-030. https://www.epa.gov/sdwa/overview-safe-drinking-water-act. Accessed 08 August 2017
  60. Waller R, Covello VT (1984) Low-Probability High-Consequence Risk Analysis: Issues, Methods, and Case Studies. Springer Science+Business Media, New York
  61. WateReuse Association (2014) California Direct Potable Reuse Initiative - Reporting on our Progress. WateReuse Association, Alexandria
  62. WateReuse Association (2015) Framework for Direct Potable Reuse. WateReuse Research Foundation, Alexandria

Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. Eawag: Swiss Federal Institute of Aquatic Science and Technology, Dübendorf, Switzerland
  2. Wheeler Water Institute, UC Berkeley School of Law, Berkeley, USA
  3. NSF Engineering Research Center for Re-Inventing the Nation's Urban Water Infrastructure (ReNUWIt), Berkeley, USA
