Just Culture: Standardizing Fire Service Accountability

Dec. 14, 2020
When the principles of "just culture" are pursued and applied, workplace safety improves, in part because members become more open about mistakes and near-misses.

Just culture is an industry term used to describe a values-based accountability model that examines the behaviors, systems and expectations that make up an organization. Developing a just culture requires an organization willing to look internally at its processes, beliefs and attitudes with an open mind.

Culture change doesn’t happen overnight. It is a leadership initiative that requires buy-in at all levels of the organization, and, like any undertaking of this scale, it must start at the top with executive support.

Developing a just culture requires a multifaceted approach to managing risk. When problems or risks that are inherent in the operations of the organization are examined, it’s crucial that a holistic approach be taken. If issues are examined from just one point of view—human behavior, for instance—we can miss opportunities to make lasting improvements.

Knowledge, systems, safeguards

Many are familiar with James Reason’s Swiss cheese model of system accidents, where poor outcomes are a result of an alignment of failures. To combat this alignment, just culture employs a three-pronged approach to building highly reliable outcomes.

The first prong of this approach is knowledge. In other words, what training or education can be imparted to personnel that will lead to risk avoidance? Simply knowing what the risks are and how to avoid or mitigate them can go a long way. Adequate knowledge relies on training, experience and situational awareness. Situational awareness is of particular interest in emergency services, because it can be affected by psychological, physiological and environmental stressors. Sleep deprivation, poor physical or mental wellness and substance abuse are a few that come to mind.

The second prong involves having good policies and procedures (systems) to guide employee decision-making. To have a system of accountability, there must be a standard by which to judge behavior. That simply can’t be accomplished without standardized processes and procedures.

The third prong employs safeguards where possible to reinforce systems. Redundancy is utilized in many high-risk industries, including aviation. Aircraft employ redundant electrical and hydraulic systems because the cost of failure is high in both airline liability and human life. The fire service is no different in its need to safeguard members and citizens from harm. Employing safeguards in fire service systems adds a third layer of reliability that can lead to the outcomes we seek. According to Reason, the pursuit of safety isn’t so much about preventing isolated failures, either human or technical, as it is about making the system as robust as is practicable in the face of its human and operational hazards.
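
The logic of layering knowledge, systems and safeguards can be illustrated with simple probability arithmetic. The figures below are hypothetical and assume that each layer fails independently of the others, which real operations only approximate:

```latex
P(\text{aligned failure}) = \prod_{i=1}^{n} p_i
\qquad\text{e.g.,}\quad
p_{\text{knowledge}} = p_{\text{systems}} = p_{\text{safeguards}} = 0.01
\;\Rightarrow\;
P = (0.01)^3 = 10^{-6}
```

Under those idealized assumptions, three layers that each miss one error in 100 allow only one error in a million to slip past all of them. That is the Swiss cheese model in miniature: Harm requires the holes to line up.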

At its very core, just culture recognizes that if there is one consistent attribute of human behavior, it is that humans make mistakes. Despite our best efforts, we, as human beings, simply don’t always get it right. Understanding this requires organizations to begin to analyze how their personnel perform and make mistakes; what their safety and reporting culture is; and what systems and safeguards are in place to guide good decision-making.

Human performance

Human performance, or how we approach and solve problems, is an important aspect to understand when building a just culture. To better understand how people make mistakes, it’s helpful to understand the spheres of performance in relation to function complexity and operator experience. There are three main spheres that are noteworthy: skill-based performance, rule-based performance, and knowledge-based performance.

As personnel train on job functions and gain experience within an organization, certain fundamental skills become “automatic” and require little thought to perform—skill-based performance. For example, an apparatus driver, when leaving for an emergency incident, will remove wheel chocks, engage the battery switch, depress the ignition, buckle the seat belt and disengage the air brake. These are a set of steps that, once performed over a period of time, become routine and require little thought.

According to “Operators Guide to Human Factors in Aviation,” in rule-based performance, a person is confronted with a situation where attention must be focused on making a decision or creating a solution. However, in this case, the situation is one that’s known to the operator, who is able to respond rapidly with a known solution. In such situations, the person who is posed with the problem has mental cues as to how to solve it. This also is known as recognition-primed decision-making (RPDM). According to David Marx in “Patient Safety and the Just Culture: A Primer for Health Care Executives,” RPDM fuses two processes—situation assessment and mental simulation—and asserts that people use situation assessment to generate a plausible course of action. To continue the apparatus driver example, let’s assume that, while responding to the incident, the driver encounters a vehicle that slows in front of the apparatus instead of pulling to the right. In an instant, the driver must revert to training and/or previous experience to slow the apparatus and maneuver to a path of safety.

In knowledge-based performance, the operator has little to no experience and must make a decision that’s based solely on requisite knowledge of the system or process. These are what Gordon Graham would call “high-risk/low-frequency events.” When such a situation emerges in the context of a complex system and under time pressure, the analytical capacity of human cognition might be surpassed quickly, and the chances for a successful outcome are seriously compromised, according to “Operators Guide to Human Factors in Aviation.” To combat these situations, an organization’s best course of action is to guard personnel against an occurrence by creating robust systems that can tolerate errors or violations, and by building in safeguards against known contributing factors.

How we make mistakes

Understanding human behavior can be difficult. Humans are dynamic and multifaceted. However, we typically make mistakes in three predictable ways. These can be summed up as human error, at-risk behavior and reckless behavior.

Human error is a slip, lapse or mistake. In this case, the person who conducts the function or task performed an action that was different from the expected norm. Per “Operators Guide to Human Factors in Aviation,” slips are actions that don’t go as planned; lapses are memory failures. These types of errors typically occur at the skill-based level because of the “automatic” nature of the tasks or actions.

On the other hand, mistakes are conscious decisions that are made where the operator might have poor situational awareness or doesn’t apply the correct solution, or rule, to the problem. These types of errors typically are seen under rule- and knowledge-based performance. The distinction can be made that under knowledge-based performance, the operator might be overwhelmed by information that leads to the mistake; under rule-based performance, the knowledge of the rule and its application become the critical factor.

The important factor here is to look into the root cause of why the error occurred. For instance, is there some knowledge, system or protocol that can be put in place to safeguard from future occurrences?

Many, if not most, organizations have a set of standing orders, policies or guidelines by which they operate. Most of this guidance is put in place by best practice, accepted standards or past experience. When people operate outside of the accepted policies and guidelines, it is called “drift.” These violations are intentional behavioral choices that increase risk, where the risk isn’t recognized or is mistakenly justified.

Employees drift for all sorts of reasons, including convenience, expediency, overconfidence and bad or overly prescriptive procedures. The real issue with at-risk behavior is that if it isn’t identified and managed through a just culture, it can lead to a normalization of deviance. Normalization of deviance occurs when a “work around,” or shortcut, to an established procedure is allowed to recur until it becomes normal practice.

Another underlying problem with normalization of deviance is that there now is a disconnect between what management believes happens on the job site and what actually goes on. According to “Applying Human Performance Improvements in an Industrial Field,” this misalignment between the two is one of the first steps in eroding an organization’s safety culture.

At-risk behavior violations can occur in the skill-, rule- and knowledge-based spheres. Violations at the skill-based level typically are routines that the operator built into daily activities (normalization of deviance). In the rule-based sphere, violations are more situational and are based on the operator’s perceived need to cut corners or to save time to get the job done (drift). Because of the overwhelming and unpredictable nature of the knowledge-based sphere, violations might be because of a desperate or instinctive action that might lead to a catastrophic outcome.

Although rare, behavior that’s deemed reckless can have an extremely negative effect on the organization. This type of behavior is associated with a blatant disregard for risk and largely is based on intent. Inappropriate behavior and persistent negativity that don’t improve with coaching or counseling are examples of reckless behavior. According to “3 Reasons to Fire an Employee Immediately,” by John Boitnott (Inc. magazine), identifying toxic employees is an essential part of success for any business, because those employees often can have a direct effect on overall morale.

Safety and reporting culture

Many organizations operate with, or at least generate a perception of, a punitive culture. In a punitive culture, the employee believes that mistakes will be met with sanction or reprimand. The process largely is based on the outcome rather than the root cause of the event or mistake. The paramount issue with a punitive culture is that it ultimately discourages employees from reporting mistakes. Even if this doesn’t describe your organization, chances are that there are areas where improvement can be made toward instituting a learning culture.

A learning culture involves fostering an environment in which employees are encouraged to prioritize safety and to self-report incidents or near-misses to move the organization forward. As noted above, this change won’t happen overnight and will be imperfect in its inception. There will be habits and biases that must be overcome through retraining to institute a just culture.

Functional issues must be addressed in a just culture. Philip Boysen, in his article, “Just Culture: A Foundation for Balanced Accountability and Patient Safety,” indicated that, while encouraging personnel to report mistakes, identify the potential for error and even stop work in acute situations, a just culture can’t be a blame-free enterprise. To promote a culture in which members learn from their mistakes, organizations must re-evaluate how their disciplinary system fits into the equation.

Disciplining employees in response to honest mistakes does little to improve overall system safety. However, mishaps that are accompanied by intoxication or malicious behavior present an obvious and valid objection to today’s call for blame-free error reporting systems, Marx noted in his book. In other words, a just culture attempts to ride the fence between a punitive and a blame-free culture.

Employees must understand that a duty is owed to the organization. These expectations clearly are laid out in the mission, values and policies of the organization. The system can tolerate a modicum of errors or drift if the employee is retrained or coached on the infraction. On the other hand, instances of repetitive or reckless behaviors might require progressive discipline, sanction or termination. The overarching intent is for organizations to be able to respond efficiently to errors, to look for trends, and to create lasting change that provides for employee and customer safety.

Systems and safeguards

Chances are that your organization already has a host of policies and procedures that guide your employees in day-to-day operations. In essence, these are the systems that move the organization in the direction of its leaders’ intent. Implementing a just culture requires you to take a look at the current systems and safeguards that are in place to determine where you are lacking and where you can improve. As noted above, this only can happen if a culture in which employees feel empowered to report mistakes and near-misses is fostered. This is the foundation of a just culture.

Many are familiar with US Airways Flight 1549 and Capt. Chesley Sullenberger’s forced water landing, which was dubbed “The Miracle on the Hudson.” Based on flight data recorder information, Sullenberger’s aircraft was struck by birds at an altitude of 2,818 feet, which caused critical damage and a loss of power to both of the aircraft’s engines. At 15:27:13, it was reported that both engines could be heard “rolling back.” Six seconds later, Sullenberger was heard stating that he was starting the auxiliary power unit (backup power supply), which is a critical safeguard. Fourteen seconds after engine loss, Sullenberger was heard instructing his first officer to get out the Quick Reference Handbook for dual-engine loss. These are procedures, or systems, that walk pilots through in-flight emergencies. In the National Transportation Safety Board aircraft accident report, the number one contributing factor to the survivability of the accident was the decision-making of the flight crewmembers and their crew resource management during the accident sequence.

The point: It isn’t enough to have systems and safeguards in place in an organization. The personnel who run the day-to-day operations must know when, where and how to employ them successfully. Furthermore, when a near-miss or accident occurs, those same employees should feel compelled to report the incident in an effort to learn and grow the organization in the process. Implementing a just culture isn’t simply about identifying accidents and mishaps; it’s about holding employees accountable to a standard of performance and being open and transparent in pursuit of a culture of safety and improvement.

Our experience

My department’s journey to implement a just culture grew out of a desire of our executive staff to make workplace safety our highest priority. As noted in the introduction, undertakings such as this are most successful when they are top-down and have executive support. Our department’s assistant chief at the time instituted the motto of “Everyone Goes Home” for the department. This applied not only to fireground and incident operations but also to employee well-being. Programs, such as annual physicals, health and wellness initiatives and cancer prevention efforts, were launched and grown within the organization.

Just culture was another initiative that was aimed at improving how the department looks at and interacts with its overall mission and goals. To assist with implementation, the organization utilized a third-party firm that develops tools that operationalize all the tenets of just culture into a usable and reproducible program. Since the department implemented the just culture model a little more than three years ago, it has seen several benefits and improvements over time. Some of these benefits include a standardized approach to investigation and decision-making, a reduction in bias, improved data on accidents and injuries and an erosion of the punitive culture perception that was held by many of the department’s employees.

Standardization and bias reduction

The standardized approach to investigation and decision-making is crucial to the success of the program. Employee confidence in the program is bolstered by the knowledge that all investigations are handled in the same manner and with the same process. Standardization also reduces bias in the investigation process. Like the people who are involved in the incidents, investigators also are prone to bias. According to “Applying Human Performance Improvements in an Industrial Field,” biases are preconceived notions and understandings that you bring with you when you try to understand events in real time or review them in hindsight.

Two types of bias that investigators are prone to are hindsight bias and outcome bias. Hindsight bias simply is the preconception that your findings in an incident should have been apparent to those who were experiencing the event in real time. Outcome bias places value judgments on an event based on the perceived gravity of the outcome alone. The problem is that, despite our best efforts to remain objective, humans have a natural proclivity, when reviewing events prior to a bad outcome, to conclude that they clearly could “see it coming.” Because of hindsight bias, we are poorly calibrated and greatly overestimate our ability to have seen the negative outcome and its severity before it arrived (“Applying Human Performance Improvements in an Industrial Field”).

Having a standardized approach to investigation through applied tools and processes allows the investigator to remove at least some of the bias that’s inherent in our nature.

Big data

In today’s world, data is paramount to running a successful organization. The ability to serve your community adequately, to take advantage of emerging technologies, and to ensure that employees work safely and effectively all hinge on the ability to collect and interpret data. Prior to my department’s implementation of just culture, it was impossible to quantify how many accidents, injuries or near-misses went unreported. Reasons for this include a complacent attitude toward risk and safety and a culture that was perceived as punitive and reactive. Since the department implemented a just culture, it has seen an uptick in reported incidents. This isn’t to say that the department is at 100 percent compliance and doesn’t have room to improve. However, the department now is able to collect data on its accident reports and identify trends to determine how best to modify systems and train employees to mitigate risk.
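
As a minimal sketch of what collecting reports and spotting trends can look like in practice, consider the following. The records, field names and categories are hypothetical; an actual department would pull this data from its reporting system rather than hard-code it:

```python
from collections import Counter
from datetime import date

# Hypothetical incident records for illustration only; a real department
# would load these from its incident-reporting system.
reports = [
    {"date": date(2020, 1, 14), "type": "near-miss", "category": "apparatus backing"},
    {"date": date(2020, 2, 3), "type": "injury", "category": "lifting"},
    {"date": date(2020, 4, 21), "type": "near-miss", "category": "apparatus backing"},
    {"date": date(2020, 5, 9), "type": "near-miss", "category": "SCBA donning"},
    {"date": date(2020, 7, 30), "type": "near-miss", "category": "apparatus backing"},
]

def counts_by_category(records):
    """Tally reports per category so recurring problem areas stand out."""
    return Counter(r["category"] for r in records)

def quarterly_counts(records):
    """Count reports per calendar quarter to expose trends over time."""
    counts = Counter()
    for r in records:
        quarter = (r["date"].year, (r["date"].month - 1) // 3 + 1)
        counts[quarter] += 1
    return dict(sorted(counts.items()))

print(counts_by_category(reports))  # "apparatus backing" appears three times
print(quarterly_counts(reports))    # report volume by (year, quarter)
```

Even a tally this simple can surface a category that recurs quarter after quarter and point to where a system or safeguard needs attention.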

Data really is the key to closing the loop within the just culture framework. Without follow-through, no system can have much of an effect on an organization.

Building trust

Building trust within an organization is difficult, particularly when you talk about relationships between staff and line members. I do believe that many of the department’s employees felt that they operated under a punitive culture prior to the introduction of just culture. I don’t believe that this was department leadership’s intent or that its actions indicated such a culture. However, unfortunately, perception is reality.

Building trust requires consistency and follow-through. The department has been committed to implementing and consistently applying the just culture framework. Although this is a slow process, we believe that it is one that is worth doing. As employees gain confidence and understanding in the process, their comfort level in reporting improves. Information and lessons learned are shared through quarterly meetings in which safety officers discuss previous incidents and how to prevent future occurrences while keeping names anonymous. Over time, something that at first seemed foreign now is commonplace.

A just culture is an evolving process that never will end, but by weaving it into the fabric of the organization, you can ensure that it continues to benefit your employees and your community as a whole.
