1 The Link Between Culture and Harm

Accident. Pilot Error. Medical Error. Mechanical Failure. Employee Mistake.

These are all familiar terms in the safety space. Aircraft accident; pilot error. Patient harmed; medication error. Culture itself is rarely identified in the press as the root cause of public harm. Human error and inadvertency are the hallmarks of our collective ‘safety dialogue.’

Yet, once in a while we see an individual who chooses to crash an aircraft, or who chooses to kill a patient. These we quickly distinguish. We call them ‘intentional’ and claim these acts are outside the purview of ‘safety.’ They are more a security or criminal matter; they are not ‘safety.’ In the United States, once it’s determined that an aircraft pilot ‘intended’ to crash the airplane, the investigation is transferred from the National Transportation Safety Board (the safety investigators) to the Federal Bureau of Investigation (the criminal investigators) (NTSB, n.d.). This dialogue forces us into an uncomfortable and illogical place: the notion that there are only two forms of human behavior, human error and its evil twin, ‘intentionality.’ In fact, humans and their behaviors are much more nuanced than these two labels can encompass.

Most corporate adverse events have their origin in two places:

1. the systems we design around the humans, and
2. the choices of humans within those systems.

The resulting harm itself, and the human errors (slips, lapses, and mistakes) that may have caused the harm, are really two forms of outcome: outcomes to be monitored, studied, and, perhaps, grieved. Systems and choices are where the action is, with culture referring to the choices made within the system.

The first origin of adverse events is system design. Systems develop over time. From simple surgical instruments to robotic surgery, from messages delivered via horseback to satellite phones, systems keep getting smarter and smarter. Collections of components, physical and human, keep getting more complex and tightly coupled, from getting steam locomotives to run on time, to organizing a mission to Mars. As system designs mature, we try to make the fit right for human beings within those systems. We do our best to design around the inescapable fallibility of human beings—that propensity to do other than what we intended. Better human factors design means less human error.

Choice is the second origin of adverse events. The task is to design safe systems, and to help employees make safe choices within those systems. Humans are not computers; we have free will (although this is sometimes challenged amongst experts in the safety field). We make choices that impact the rate of adverse events. That said, understanding human choice is messy business, often set aside in favor of the more simplistic explanation of human error. Even graduate safety courses spend little time on managing choice; it’s all about human error. A commercial truck driver who crosses the centerline of a highway may very well be said to have made a human error. Yet both the design of the highway and the design of the truck may have contributed to that error, as might natural elements like rain or glare. So too might the driver’s own choices have contributed, from the decision to send a text message while driving to the decision to drink and drive.

The measure of a culture, whether the value at stake is customer or employee safety, privacy (for a hospital), profit, or winning (for a sports team), resides in the choices of those operating around that value. Whether it is the City of New York wrestling with the problem of eight million people deviating from basic traffic laws, or a small manufacturer wrestling with personal protective equipment (PPE) compliance, culture is best thought of as the collective choices of those within the system.

2 Culture: What It’s Not

What is culture not? Culture is not human error. We are all inescapably fallible human beings. The fact that we make mistakes is just part of the human experience. We can reduce the rate of error by designing good systems around human beings, but we cannot totally eliminate mistakes, simply because there are so many opportunities to make them. Similarly, culture is not outcome. The fact that one group of physicians might have a higher misdiagnosis rate does not necessarily mean they have a weaker safety culture. There are other factors, from patient acuity to the design of the healthcare system, that might lead to a higher rate of misdiagnosis.

Sitting in a restaurant, we might hear a tray of glasses break when they hit the floor. For the most part, we’d assume an unfortunate human error led to the undesired outcome of broken glasses. We’d make no inferences about restaurant culture based solely upon a human error and its undesired outcome. Yet, if we walk into a restaurant and see dirty, uncleared tables and a number of employees standing idle, we might wonder why they aren’t cleaning the tables when they appear to have the time to do so. This may lead us to think about the culture within that restaurant. We’d wonder about its service standards, or the general cleanliness of the restaurant. We might find ourselves talking about it once seated. It would shape our view of the restaurant as a whole, in a way the inadvertently dropped glasses would not. We might see the failure to rapidly clear the tables as a reflection of the restaurant’s service culture. That judgment does not explain the behavior; it simply recognizes that the apparent choice of the staff not to clean the tables is somehow reflective of the overall culture of the restaurant.

Culture is, in general, not a reflection of highly culpable or even criminal behavior. Every organization will have outlying behavior on the job, from theft to assault. In the framework of a Just Culture, these choices involve ‘knowledge or purpose’ toward the harm being caused (Outcome Engenuity, 2016). In the United States in 2016, a total of 5,300 Wells Fargo employees were fired for creating unauthorized customer accounts (Egan, 2016). The incentive for the employees involved? Bonuses based upon the number of new accounts created. Given that 5,300 employees were involved, could the unauthorized accounts be described as part of the ‘culture’ at Wells Fargo? The CEO of the institution was pushed out, in large part for his failure to effectively manage his team. Given its widespread occurrence, the practice might very well be seen as part of the company’s culture. That said, highly culpable actions tend to be statistical outliers more than any reflection of corporate culture.

There is, in the Just Culture model, a zone of behavior less culpable than knowledge or purpose, called ‘recklessness’ (Outcome Engenuity, 2016). This is not the intention to cause harm, but rather a ‘gambling’ with unreasonable risk. A driver who texts and drives might be seen as gambling with the lives of others on the road. What makes the behavior ‘reckless’ is the determination that the driver recognized the risk as both substantial and unjustifiable, but chose to text and drive because it benefitted him in some way. Recklessness, by definition, is not a choice to harm. Rather, it is a choice to gamble, to knowingly take a substantial and unjustifiable risk (ibid.). In the manufacturing environment, this might be climbing to dangerous heights without safety gear simply to save time. If employees know they are taking an unjustifiable risk, that behavior might be deemed reckless.

Recklessness, along with knowledge and purpose to harm, is generally the conduct of outliers within the organization. These behaviors are commonly addressed through a formal process of corrective or disciplinary action. Outliers will always exist. They are not, however, the core of culture.

3 Culture as At-Risk Behavior

‘At-risk’ behavior is conduct in which individuals or groups make a risky choice without recognizing the risk, or while incorrectly justifying the behavior as safe (ibid.). This might be a group of drivers who routinely fail to signal a lane change, or a group of nurses who routinely fail to wash their hands when walking into a patient’s room. ‘Drift’ is an apt word for at-risk behavior. There are many reasons for behavioral drift. Perhaps the human does not easily see the hazard to be avoided by adherence to a safety rule. Perhaps the incentives in the system encourage deviation from a safety rule in order to meet a production objective. It is the prevalence of ‘at-risk’ behavior that is the best indicator of what we call ‘safety culture.’

As humans, we exhibit collective choices around particular values. Aviation is known to be a relatively safe endeavor for passengers, and is regarded by experts in the safety space as a ‘highly reliable organization’ (Stralen, n.d.). Yet it is a dangerous place to work for employees, with a lost-workday injury rate higher than that of coal miners or commercial fishermen (BLS, 2017). Is it the inherent danger of the work environment that makes the difference? Is it the system design that makes aviation much safer for passengers than for employees? Or is it culture, the collective choices of airline employees, that makes the difference?

Culture can be seen as the characterization of a group’s collective choices. A safety culture is one where the value of safety is strongly supported. A profit-centric culture is one where profit maximization is strongly supported. For a military unit, mission may be the dominant value, even when it means putting a service member in harm’s way. If a group’s choices are generally aligned with protecting safety, we’d say they have a strong safety culture. If they are not, if there is at-risk behavior throughout the organization, we’d say they have a weak safety culture. This characterization does nothing to solve the problem; it merely suggests that the system is not working as intended. Employees have drifted into risky choices, and that drift is threatening a value held by the organization, or by society as a whole.

If at-risk behavior is the marker of what we call ‘culture,’ it is independent of whether those behaviors led, on any day, to no harm, minor harm, or a major accident. Those of us who don’t walk around the back of our car before getting in will likely never back over an unseen child. Such events are rare, which makes the at-risk behavior of not checking behind the car seem tolerable. That said, in the U.S. alone, automobile drivers inadvertently back over 2,500 children each year, killing 100 of them (Kids & Cars, n.d.). A workplace (or individual) safety culture may be ‘poor’ in the sense that the choices of employees are statistically linked to a higher rate of undesired outcomes.

If we are pursuing highly reliable outcomes, choices matter, even when we humans do not necessarily see the hazard attached to non-compliance. Culture can be seen as the degree to which human beings will, through their choices, be protective of a shared value. This often appears as the ‘extra effort’ it takes to act in protection of a value in the face of a belief that potential harm is uncertain, delayed, or will simply happen to someone else. For example, in the U.S., hospital-acquired infections account for 100,000 lost lives a year (CDC, 2016). The single most effective thing that can be done to prevent these infections is for hospital employees to wash their hands going in and out of a patient’s room. Yet most hospitals have been working for decades to get their compliance rates to even 90% (McGuckin, Waterman, & Govednik, 2009). Hospitals continuously train their employees, redesign soap and alcohol rub dispensers, and make hand hygiene a discussion point in daily huddles. All that said, hand hygiene takes extra effort for physicians and nurses, adding roughly 30 seconds to the time in a patient’s room, multiplied across thousands of patients over the course of a career. How willing hospital employees are to perform this task is one marker of a hospital’s overall safety culture.
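To put a rough number on that extra effort, a back-of-the-envelope calculation helps. The 30-second figure comes from the paragraph above; the 50,000 patient-room entries assumed over a career is an illustrative round number, not a figure from the sources cited here:

% Illustrative arithmetic only; 50,000 career room entries is an assumed round number.
\[
  30\ \text{s} \times 50{,}000\ \text{entries} = 1{,}500{,}000\ \text{s} \approx 417\ \text{hours},
\]

or roughly ten 40-hour working weeks of cumulative hand-hygiene time over a career.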

4 The Importance of Why

There are views within the academic community that culture is more than choice. In this view, culture is more a description of values and beliefs. There is no reason to challenge this view. The values and beliefs of employees within an organization surely impact their choices; but it is not only values and beliefs that impact choice. We go to a tennis match and we are quiet; we go to a soccer match and we are loud. For most of us, there is no deeply held value or belief that tennis matches should be quiet and soccer matches loud. It is custom, tradition, or culture. We remain silent at the tennis match because others are silent, and because we’ll face some admonition from those nearby if we choose to scream. Likewise, if we remain silent during the thrilling parts of the soccer match, a fellow fan might suggest we get on our feet and start to yell like the rest of the crowd. Sometimes, it is simply fear of being different that causes us to behave in a particular way. The choice to remain silent at the tennis match may have nothing to do with our personal values and beliefs. We may actually be wondering why others do not cheer for their favorite player. Yes, values and beliefs are important, but they are not the only factors impacting group choices.

For every risky choice, there is a unique set of factors that comes into play. It is an oversimplification to suggest that all unsafe choices emanate from some shared set of values and beliefs. Those tasked with creating safe behaviors are well advised to try to understand why employees drift into the risky choice. In some cases, the unsafe behavior might occur simply because the employee does not agree that the safety rule is important enough to follow. In other cases, the root of the unsafe choice may lie in decades of ingrained values and beliefs, as when a male pilot chooses not to communicate critical safety information to a female copilot. It could also be the case that employees choose a behavior simply to avoid sanction. Healthcare privacy laws were enacted with tough sanctions for those healthcare providers who go into a patient’s record when there is no clinical reason to be there (HIPAA, 2013). Within hospitals, we saw the policies shift, and behaviors along with them. Did the values and beliefs of healthcare providers change overnight? No. It took time; and for a few diehard voyeuristic staff, those values and beliefs never changed. Just as humans gawk (and slow down) as they pass an accident on the road, the desire to peek into a movie star’s patient record did not likely shift much with the creation of privacy laws. Did the culture change? Yes, if culture is what we do. No, if culture is seen as values and beliefs.

5 Improving Culture

As managers and systems designers, we can influence culture. Engage a loud buzzer in a car when a seatbelt is not latched, and drivers will indeed buckle up more frequently. We can shape the choices of human beings, at least at a statistical level. The entire criminal justice system is based upon this premise, as is every human resource policy within an organization (Florida Government). Humans make choices; system designers are out to influence the choices they make, just as marketing companies are out to influence which laundry detergent we buy.

In healthcare, organizations are working hard to create learning cultures where employees can self-report their errors for the purpose of organizational learning. For most hospital staff, this behavior is very much aligned with their individual value of protecting the safety of their patients. Yet many, if not most, employees report only what they cannot hide. The U.S. Agency for Healthcare Research and Quality’s Patient Safety Survey routinely finds that fear of punishment is the reason most don’t report errors or near misses (AHRQ, n.d.). This is called a ‘punitive’ culture, not because punishment is among the shared values of the staff, but because employees believe that organizational leaders see punishment as a reasonable tool for controlling staff errors. The failure of employees to report errors and hazards is real. The cause is either resignation that nothing will change, or fear of being punished for bringing risks to light.

To be effective managers, we should recognize that human beings are, at our core, hazard and threat avoiders. We speed on the road. We see the speed limit sign, which represents the rule, and we keep going. We see a police car parked up ahead, and we slow down. The police car represents an immediate threat; the speed limit sign does not. Yet, in the organizational space, we write safety rules with the expectation that human beings will somehow blindly follow the rules simply because they are safety rules. When human beings inevitably drift, we claim ‘poor safety culture.’ A recent U.S. governmental report on an aircraft accident characterized the offending organization as having a ‘culture of complacency’ (Loreno, 2016). It’s easy to attach the label of poor culture; it’s a bit harder to understand how mission-oriented employees are reacting to the world around them.

In order to shift culture, to shift choices, it helps to know the reasons behind the behavioral drift. If a task is hard to perform, or gets in the way of the mission, an employee might feel pushed toward non-compliance. If deviation from a safety rule is easy, or if deviation optimizes the mission, an employee might feel the pull of non-compliance. This is particularly true where employees have a hard time connecting the desired safety behavior to the undesired outcome it prevents. The value of many safety behaviors is obvious to the employee involved: wearing eye protection when using a grinder makes sense because the risk of non-compliance is plain to see. Yet, when harmful events are rare, humans soon recognize that non-compliance often yields no undesired outcome. We see others deviate from the safety rule with no bad outcome. Consider our collective inattentiveness to the pre-flight safety briefing, in large part because we believe it is unlikely we will ever need those instructions. We humans shed whatever load we do not see as essential to our largely mission-focused work. We ignore the safety briefing on the airplane simply because we want to get on with reading the magazine in our hands.

Top managers have a large influence on culture. By role modeling, mentoring, and coaching their direct reports, they drive the commitment the organization has toward protecting a value like safety. Conversely, top managers can kill a strong safety culture by their actions. Maybe it’s the CEO of a railroad who wants to drive the train when he is unqualified, or a director of a manufacturing facility who chooses not to wear a safety helmet and glasses when required. Top managers set the expectation of safety, and through their behaviors, model what a culture of safety looks like.

In order for organizations to improve their safety culture, leaders must be willing to take the lead. They must role model, mentor, and coach their direct reports in a manner that says a little extra effort is worth it. They must be continually cognizant of the role of system design in shaping behavior, and alert to external cultural norms slipping into the organization, from hierarchical traditions to perceived gender roles.

Line managers must do the same. They must be role models, mentors, and coaches in a manner demonstrating that the extra effort is worth it. The mission never goes away; every employee has production goals. Yet every organization can and should let its employees know what it means to be protective of a shared value, from putting on protective gear to taking the time to lock and tag out electrical systems that might endanger an employee.

Culture is not easy because we humans are complex. We are goal-oriented. We pursue our missions with zeal, and we find creative ways to do so even when faced with fewer resources and less time. Cutting corners to get things done is part of the human spirit. Across human endeavors, we shed what we see as unnecessary rules and guidance mandated by those in control. Even academics struggle to stay within font-size rules when presenting their findings, because they believe that presenting more is more important than presenting legibly.

It’s just who we are. And that’s why culture is so hard. In a strong safety culture, the group will hold each other accountable for conforming to the behaviors that support safety. This will hold true even in the face of the generally held belief that potential harm is uncertain, delayed, or will simply happen to someone else. Safety is about preventing harm. Safety culture is about choice.

6 Tangible Steps

Creating a strong safety culture means helping employees make good, safe choices. To do that, we first must clearly articulate to our teams both the mission and the many values we work to protect. For safety, we need to let our employees know where safety fits into the mix, both in theory and in real-world role modeling. Next, we must design our systems and processes to facilitate the choices we want to see. Human choices are somewhat predictable, meaning the system design process can anticipate and resolve conflicts before we introduce system or procedural changes. After that, we are left with the everyday task of role modeling, mentoring, and coaching, so that our employees understand how to make choices around the safety value in a world of conflict between the mission and the many disparate values we hold. And lastly, we need systems in place to monitor our performance: Are we making choices that support our shared values?