1 Introduction

The major industrial accidents of the 20th and 21st centuries have been the source of a variety of interpretations. Technological causes initially held the top spot. However, it was subsequently agreed that human factors were a major cause of disaster in these complex technological systems, leading, ultimately, to the idea that the causes of accidents could be found at the organizational level. Here, we do not offer an exhaustive overview of work that has modelled performance factors at the organizational level in at-risk industries; instead we present arguments from the sociology of organizations in order to better understand how the day-to-day work of organizations complements or substitutes for what is prescribed, either to adapt to operational necessities, or in an emergency. In other words, is post-accident management possible based on “safety in action”, which finds its foundations in the negotiation of prescribed regulations, and which de Terssac [1] describes as a consequence of social regulation [2]?

The arguments used in this chapter are based on a reinterpretation of major industrial accidents in terms of the sociology of organizations; in particular, we aim to establish bridges between knowledge of the organization’s operations and the restructuring of organizational ecosystems during the management of a crisis. We argue that modes of social regulation that enable prescriptive orders to be adapted to the daily work of organizations can play a positive role in the capacity of systems to anticipate and adapt, which in turn creates resilience.

This paper begins with a brief review of some major industrial accidents in order to highlight the main phases of research in the social sciences. It discusses the contribution of the sociology of organizations, particularly the French school of strategic analysis and social regulation, and ultimately examines the role of social regulation in understanding both operational systems and the post-accident period.

2 Major Industrial Accidents and Changing Paradigms

The major accidents that have occurred over the past four decades have changed the research paradigms used in risk management. They have influenced industry practice both in terms of analytical tools and management culture. The engineering culture that dominated safety decisions opened a door to the humanities and led to the development of cross-cutting approaches that could address system complexity. This section presents a brief history of this evolution.

The industrial accident at Three Mile Island (TMI) was the origin of a profound examination of the organizational dimension of accidents (although it did not lead to work on prescriptive organizational design). Perrow [3] describes complex systems with a high potential for disaster and highlights the systemic dimension of accidents in tightly coupled systems, where trivial errors can interact and lead to an unwanted event. For Perrow [3], accidents in such systems are therefore ‘normal’, in the sense that they are ultimately inevitable. This sociological approach, unacceptable in a society where risk management is a corollary of technology, was nevertheless the starting point for the growing interest of sociologists in at-risk organizations.

This appeal to the sociologists of organizations would be reiterated by Reason [4]. Having observed the limits of engineering and cognitive science in understanding the Chernobyl accident, he used theories from sociology in order to understand and track the latent errors that hide at all levels of the system and that (on the model of a cancer) combine with the error of a final operator, resulting in disaster. Reason’s well-known ‘Swiss Cheese’ model would lead to the development of many audit methods that aimed to detect weaknesses in the system; the Tripod method [5] is one example.

Moreover, the Chernobyl accident was the origin of the concept of safety culture [6] and would lead to further work on its definition in both high-risk organizations and industry in general. The importance of the safety culture concept would be widely discussed and would be the source of many industrial initiatives. This was the case in France, where the creation of the Institut pour une Culture de Sécurité Industrielle [Institute for an Industrial Safety Culture] followed the AZF accident on 21 September 2001.

These wide-ranging conceptual developments, which attempted to limit major disasters, were marked by the creation of methodologies for observing organizations with high operational reliability, in order to understand their characteristics and eventually design prescriptive operational principles. The emergence of the High Reliability Organization (HRO) school [7] in the 1990s was a major advance on Perrow’s work and the fatalistic vision of the ‘normal’ accident. However, despite an unprecedented observation methodology, researchers themselves were forced to admit that it was not possible to develop a theory of HROs, although their studies constitute an important set of cases on high-risk, high-reliability organizations. Nevertheless, this work has served as the basis for many industrial studies by organizations that want to change and improve their level of safety culture, for example in the oil sector.

In the 2000s, resilience engineering would once again change perceptions of safety systems. Hollnagel [8] argued for the understanding of the day-to-day operation of systems, through the study of system successes rather than failures. This understanding of the capacity of a system to anticipate an accident and to react to adverse events constituted an important development in the management of at-risk systems and major accidents.

The aim of these various currents of research was to provide a better understanding of at-risk systems during both routine operations and in times of crisis. The work of sociologists would bring established concepts from the sociology of organizations to bear on these questions. The next section presents a summary of French research, in particular the school of social regulation, which emphasizes the negotiated dimension of safety systems.

3 The Sociology of Organizations and At-Risk Industries

In the late 1990s, Bourrier [9] carried out a study of American and French nuclear plants. This study would conclude that, far from being HROs, nuclear power plants were normal organizations, given what was known about the sociology of organizations. Specifically, normal organizations are the result of the negotiations and strategies undertaken by their actors. Such organizations may be the source of virtuous ecosystems, although their managers may not be aware of it.

We also found, in our study of the decommissioning of a nuclear plant, that the plant’s informal organization may be a relevant driver of safety [10].

In the French school of the sociology of organizations, this dimension of an organization that does not fully meet the prescribed, formal requirements of managers is well-known. Crozier and Friedberg developed and demonstrated a theory concerning the strategies of actors and power relations in organizations [11]. The work of de Terssac [1], particularly following the explosion of the AZF factory in France, relates the negotiated dimension of safety to social regulation theory. This theory argues that rules can be revisited and that they are the result of negotiations between actors. Rules structure collective action, while independent initiatives can come into conflict with external controls. How the system is regulated becomes the result of compromise and negotiation between these two forms of regulation.

de Terssac [1] highlights the development of everyday safety in a factory, beginning with negotiations between workers and supervisors. He clarifies what he calls “safety manufacture”, which does not depend on prescriptive procedures that explain what safe behaviour is, but is the result of rules that are supplemented and negotiated by users. For the author, “safety in action” is the ability to decide whether (or not) to apply a safety rule and adapt it to the context. Different actors in the organization will have different ideas of safety that are linked to their role in the company. Safety culture results from the comparison of these different ideas.

An at-risk organization is not therefore fundamentally different to a normal organization, although it has its own characteristics. The study of such organizations simply considers that during normal operations, what is prescribed has been negotiated and adapted to the situation on the ground, and that these adjustments are part of the daily life of the organization.

It therefore seems appropriate to ask whether maintaining this shared safety culture after a major accident is an element of system resilience. Specifically, do negotiated rules make the organization better able to anticipate and adapt or, on the contrary, must the organization resort to extremely strict procedures to manage a major disaster?

The Fukushima Daiichi accident required rules to be adapted to the realities of the situation regardless of the procedures to be followed in an emergency. The next section highlights the decisions taken by the plant’s Director in the application of the venting procedure and cooling the reactors with seawater. We show that in a post-accident situation, assessments of procedures are a function of the context, notably with respect to the positions occupied by actors.

4 Following the Rules, Post-Accident

In a crisis, where nothing corresponds to any previous situation, it seems foolish to guide behaviour with reference to known procedures. The management of the Fukushima Daiichi crisis showed that certain actions taken by the plant’s crisis unit and its Director were taken in the light of their knowledge of the status of the system—and that their understanding was different to that of governmental authorities and TEPCO [12]. During the hearings that followed the accident, the plant’s Director stated that technical problems were encountered during the venting procedure that even he was not able to grasp, because the crisis unit was too far away from where the action was happening. He therefore initially tried to follow instructions from headquarters, despite the difficulty of the situation and delays in executing procedures.

Yes, but at that moment, it was the first time for me as well that I found myself confronted with such a situation, and, to be very honest, I didn’t even understand it myself. We didn’t yet know the details of the situation on the ground. And in that, we were in the same position as the people at headquarters. Of course, on the ground, they couldn’t see the indicators in the control room any more – they were in the dark, all the main instruments were off, but we were under the impression that if they were set to vent, this could happen. Of course, there was no electrical power supply, or air supply, but bizarrely, we were completely convinced that in order to vent all we had to do was open a valve, that if we could open this valve, it would work. We only understood afterwards. The AOV had no air. Naturally, the MOV did not work either. We wondered if we could do it manually. But there was too much radioactivity for us to go in. And that’s where we finally realized how difficult it was. But we could not get the message across to the head office or Tokyo, get them to see how difficult this venting was [12].

Although the order to vent would be repeated by the government, it would be repeatedly delayed because the levels of radioactivity made it impossible to access the valves. The Director then realised the differences between the people at head office and the situation on the ground, and that the order could not be executed. He therefore adapted the procedure, taking into account the state of the system at the time. Later in the hearing, he spoke of the distance that was created between headquarters and plant staff. The same problem also existed at the plant itself—between the crisis unit, the control centre and shift teams who had to manually carry out the venting and who would be exposed to the high levels of radioactivity. It was this distance that led the Director and his team to take important decisions without the approval or authorization of headquarters. These actions included the decision to cool reactors with seawater.

The hearing indicates that preparations were carried out much further upstream than the strict chronology of events would suggest. Knowledge of the system status necessitated the use of a cooling source that was available in large quantities. The only option was the on-site seawater. Independent of any discussions with headquarters, the plant’s staff prepared to execute the order.

Here, it’s not really a case of ‘continue’. To be really precise, we began preparations for this seawater injection well before 2:54 p.m. This means that the order to prepare the injection was given well before then. But it was at that time that the preparations were completed and the injection became possible. This is why I gave this order, which was more like an order to implement than an order to prepare, if I remember correctly. Except, this is when the explosion occurred. We could not move to implementation and we ended up back at the beginning. What is clear is that the order to look at how to inject seawater was given at an earlier stage. [12, p. 169].

While TEPCO’s management were aware of the intentions of the plant’s Director and the crisis unit, they did not take part in any discussions or decisions about pumping procedures or water transport. Only on-the-ground personnel knew what resources were available and how to adapt them to the situation. Furthermore, after an initial attempt, the order was given to suspend the manoeuvre; the Director decided to continue, but did not reveal his decision to headquarters.

So we had ended the test and we were going to stop. It had been decided to stop. It was only me; arriving at this point, I had no intention of stopping the injection of water. Furthermore, they were talking about stopping, but we didn’t even know how long it would go on for. They could have said thirty minutes, or more. But stopping with no guarantee of recovery. For me, there was no question of following such an order. I decided to do it my way. So I announced to the people at the crisis table that we would stop, but I quietly took the ‘safety’ group leader to one side, XXXXX, who was in charge of the injection, and I told him that I was going to announce to anyone who would listen that we would stop the injection, but that he, at all costs, must not stop sending water. Then I prepared a report for headquarters to say that we’d stopped [12, p. 188].

This manoeuvre was also hidden from certain members of the crisis unit. This suggests that amongst the network of actors in the field, there were some who would execute orders from the Director, which were not in line with the instructions issued by headquarters. This indicates that the internal authority of the Director was such that members of the safety group would follow his orders rather than instructions from headquarters.

The procedure implemented at this time was therefore based on the capacity to find technical solutions in an emergency situation and networks of actors who shared the Director’s beliefs. These networks of actors were responsible for the production of the rules that were applied at the time.

In a crisis, social regulation takes place in compressed time; it is the result of negotiations between headquarters, supervisory and government authorities, and independent regulators on the ground. The decisions of the Director could only be translated into action with the consent of his team, through a process of negotiation. This is reflected in both the venting procedure (which would be delayed several times for technical and human reasons) and the decision to inject seawater, which was the subject of an internal search for technical solutions and led to the decision to carry on with the action against the orders from headquarters.

5 Discussion

From the perspective of the sociology of organizations, the reinterpretation of major accidents, and particularly the accident at the Fukushima Daiichi nuclear power plant, leads to questions about respect for rules and procedures in crisis management. We argue that a crisis should not trigger the strict application of control regulations derived from procedures established in advance. Decision-making and the applicable rules should be the result of negotiation between decisions taken by headquarters and independent, on-the-ground regulation that takes the context into account.

An analysis of the in-depth feedback from the Fukushima Daiichi accident suggests that the capacity of the plant’s teams to find new solutions to deal with the various problems is wholly characteristic of the HRO as described by Weick and Sutcliffe [13]. In other words, such an organization is able to identify and anticipate failure, overcome a priori assumptions, and comply with (or defer to) authority and expertise based on experience and intuition. While all of this may be true, it also seems necessary to understand the negotiation processes and power relations internal and external to the group in order to understand its actions. We argue that social regulation also operates within a constrained timeframe, as the result of negotiations that enable collective action.

Moreover, it appears that there is a significant bias in analysing a decision that was temporarily successful. de Terssac’s [1] safety paradox states that it is possible to act safely and still not avoid disaster. This leads us to believe, given the bounded rationality of actors, that rules that are negotiated in periods of normal operation or crisis may also lead to disaster (as was the case for the AZF accident in particular).

6 Conclusions

The aftermath of accidents does not prevent social regulation processes, which appear to be constrained by time and the emergency. Negotiations between actors occur despite conflicting interests and value systems—in this case, protecting the population, making decisions in line with international expectations, and protecting equipment and the workforce.

All of these interests are the subject of negotiations that create cooperation (or in some cases conflict) between actors in the system. We are therefore far from a situation where safety in a crisis is governed by universal basic procedures, or by the intervention of a providential hero. The resilience capacity of a system is based on its capacity to adapt, and therefore on knowledge of the dynamics governing the relationships between its actors.