Systemic Analysis – The Key to Effective Risk Mitigation

By: John Westphal, Senior Advisor

The Achilles' heel of the high-consequence industries I have worked in over the years is the overreaction to a single event. In other words, we allow one event to drive systemic changes throughout the organization when that one event may not tell us enough about the risk that existed within that particular socio-technical system, or when the learning system lacked the sophistication to offer a robust enough view of the risk within that system.

The question then becomes: which is it, the sophistication of the learning system or the insignificance of the event? Generally, the failure resides in the learning system's capability to fully explore the richness of the single event, and to combine that learning with other event investigations to develop an accurate view of the inherent and system risk existing within the operation.

The described failure within the learning system generally occurs for two reasons. First, our single-event investigations struggle to define the appropriate cause-and-effect relationships that existed within the event. Often we insert non-causal data or non-duties into the event, creating so much noise that we are unable to articulate the actual risk. Second, because of this failure in our single-event investigations, we become limited in our ability to do precise common cause failure analysis when looking across multiple events. Simply put, it becomes a “garbage in/garbage out” learning system.

In addition to the failures stated above, our learning systems often fail to correctly identify causes related to a class of events. In other words, the causal factor is not relevant to what occurred yesterday but is relevant to what may happen tomorrow or six months from now. A great example of this played out in the movie “Flight,” starring Denzel Washington. In the movie, Washington plays an airline captain with a significant drug and alcohol problem. In an aircraft emergency, Washington takes extraordinary actions that save hundreds of lives on board, even though he was intoxicated while flying the aircraft. The question becomes: was his intoxication causal with regard to the loss of the aircraft? No. It was a mechanical failure that caused the loss of the aircraft. However, does a pilot who is willing to fly under the influence of alcohol and drugs represent a risk to the system? Yes!

At the end of the day, our learning systems must exhibit three layers of examination. First, we must identify the relevant cause-and-effect relationships within a single event. Second, once those relationships are identified, we conduct the proper systemic analysis to identify risk mitigation strategies; a general rule of thumb is that 70% to 80% of our interventions should come from systemic analysis. Third, our learning system must assess causal factors related to the class of event, working to move from the reactive realm to a proactive and eventually a predictive learning system.

The Just Culture Organizational Benchmark™ Survey

The Organizational Benchmark™ Survey is designed to measure critical behavioral markers that show an organization’s growth in culture around a particular organizational value, such as safety, privacy, compassion or cost control. The markers are the same for each value (safety or privacy) in that the basic elements of a learning and just culture are common.

The markers follow twelve areas of focus:

  1. Organizational Values
  2. System Design
  3. Management/Subordinate Coaching
  4. Peer/Peer Coaching
  5. Outcomes
  6. Open Reporting
  7. Search for Causes
  8. Internal Transparency
  9. Response to Human Error
  10. Response to Reckless Behavior
  11. Severity Bias
  12. Equity

An explanation of the 12 benchmarks:

1. Organizational Values

In this benchmark area, we ask employees if they believe their manager’s behaviors demonstrate that the particular value is supported by the organization. This provides a high-level view of how employees are interpreting their manager’s behaviors attached to a particular value.

2. System Design

In this benchmark area, we ask employees if they see systems being changed in response to adverse events and hazards identified by the employee group. This focus on system design is a key operational tool.

3. Management/Subordinate Coaching

In this benchmark area, we ask employees if they see their managers coaching when staff members make risky behavioral choices tied to the value being analyzed. Knowing that employees will drift into at-risk behaviors, this marker tells us whether managers are coaching employees onto better behavioral choices.

4. Peer/Peer Coaching

In this benchmark area, we ask if employees are willing to coach each other. This marker goes beyond merely offering help to another employee. We ask if employees are willing to challenge the behavioral choice of a peer that they see making risky choices.

5. Outcomes

In this benchmark area, we ask employees if they see outcomes tied to a particular value heading in the right direction (increasing or decreasing). This will assess employee perceptions of whether they believe organizational outcomes are improving. This is especially important where adverse events are hard to track in a quantitative manner (e.g., compassion).

6. Open Reporting

In this benchmark area, we ask employees if they are willing to report hazards or near misses that might detrimentally impact a particular organizational value. As opposed to reporting of adverse events, this behavioral marker looks at the near miss or hazard as the precursor to harm. Open reporting is essential to create a learning culture.

7. Search for Causes

In this benchmark area, we ask employees if they see managers investigating system precursors to potential harm. We focus on near misses that, if investigated and understood, would produce critical system learning.

8. Internal Transparency

In this benchmark area, we ask employees if they observe open dialogue concerning adverse events and lessons learned as related to the value under analysis.

9. Response to Human Error

In this benchmark area, we ask employees if they see employees being disciplined for inadvertent human errors. This marker ties directly to the Just Culture model for the proper response to human error.

10. Response to Reckless Behavior

In this benchmark area, we ask employees if disciplinary action is taken when an employee willfully chooses to recklessly endanger the value under analysis. This also ties directly to the Just Culture model in the response to reckless behavior.

11. Severity Bias

In this benchmark area, we ask employees if they believe that the severity of an event's outcome plays a significant role in whether the event will lead to positive change in systems or processes.

12. Equity

In this benchmark area, we ask employees if they believe that they are treated fairly across employee groups. Equity, the belief in the system being fairly applied across employees, is central to the notion of a Just Culture.

Why the ‘5 Whys’ Are Not Enough for a Good Investigation


By: John Westphal


Event investigation is a tool within the reactive learning system that we use to extract learning from an event. One of the practices I have employed as a Six Sigma Black Belt and human factors investigator is the '5 Whys' technique. It is used in the Analyze phase of the Six Sigma DMAIC (Define, Measure, Analyze, Improve, and Control) methodology and seeks to identify the root cause of an event.

That said, as organizational leaders, I believe we have become overly captivated with the identification of the root cause. Many organizations have fallen into the trap of believing if they find the root cause, they have in essence found the piece to address for further mitigation of the event risk. This, in my experience, leads us down a path of fixing one event at a time rather than addressing common cause failures, which are often further up the causal chain and allow us the opportunity to address risk in a more holistic fashion.

The '5 Whys' is a simple methodology that gives us a basic understanding of the initiating event (the root cause). But as we seek to extract all the learning from the event for more effective risk mitigation, we must employ additional methodologies (rules of causation) that help us understand the cause-and-effect relationships, mitigate the use of negative descriptors that are subjective in nature, identify and explain the human errors and at-risk behaviors within the event, and, lastly, admit only causal factors that carried a preexisting duty to act.
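One way to picture that added rigor is to treat each candidate causal statement as data and screen it against the rules of causation before it enters the analysis. The sketch below is only an illustration: the two checks (a preexisting-duty test and a keyword screen for subjective descriptors) are simplified stand-ins for the full rules, and all names and data are hypothetical.

```python
# Illustrative sketch: screening candidate causal factors against
# simplified rules of causation before systemic analysis.
SUBJECTIVE_WORDS = {"careless", "lazy", "poorly", "badly", "inattentive"}

def passes_rules(factor):
    """Keep a candidate factor only if it names a preexisting duty
    and avoids subjective negative descriptors."""
    if not factor["had_duty_to_act"]:
        return False  # no preexisting duty to act -> not admissible as causal
    text = factor["statement"].lower()
    return not any(word in text for word in SUBJECTIVE_WORDS)

candidates = [
    {"statement": "Nurse was careless during handoff", "had_duty_to_act": True},
    {"statement": "Pharmacist skipped the double-check step", "had_duty_to_act": True},
    {"statement": "Visitor did not report the spill", "had_duty_to_act": False},
]

kept = [c["statement"] for c in candidates if passes_rules(c)]
print(kept)  # only the behaviorally specific, duty-bound factor survives
```

The point of the filter is the same as in the prose: subjective labels ("careless") and actors with no duty to act add noise, while specific behavioral choices tied to a duty are what systemic analysis can actually use.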

It is from this place of more sophisticated event analysis that we can filter the noise around the event and better understand the role of human error, at-risk behavior, mechanical failure, and the environmental/cultural conditions that increased the likelihood of the event. Armed with a more sophisticated approach to reactive learning, we now have the opportunity to classify the failure. In other words, was this a design failure, a component failure, or a unique failure? The classification dictates the response to the event, safeguarding the organization from overreacting to single events.

Now that we have the failure classified with a good understanding of the direct and probabilistic causal links within the event, we are in a better position to conduct the systemic analysis across multiple events, searching for the common cause failures.

It is at this point we have now converted our reactive learning system to a proactive learning system, breaking causal chains across multiple events, decreasing the risk throughout our operational environment.

We can see that, although the '5 Whys' is a simple tool that gives us a very basic understanding of an event, it is simply not enough for a good investigation, one that makes it possible to convert the learning system from reactive to proactive and eventually predictive in nature.

For more on event investigation, see our Live Courses information page or our Online Course information page.

Frequently Asked Questions

Where do I start with Just Culture?

We suggest starting at the top; you’ll need the buy-in of the executive team in order to effectively make the changes needed for implementation. Explain to them the core concepts of Just Culture, the different, better way to do business. Show them the outcome bias and how destructive it can be. Then you can work your way down to local managers to show how a Just Culture works.

What does JC implementation look like?

Implementation usually begins with learning how the three behaviors work and how the quality of each choice can make a difference. Getting past the outcome bias is also a big early step. Then you’ll revise organizational policies and procedures, beginning the system design phase. Training your managers and staff on the concepts and on peer-to-peer coaching will begin to build a learning culture in your organization. And once you get buy-in across the organization, your people will begin to make the right choices and your processes will steadily improve.

How do we maintain momentum once we start the process?

Getting the Just Culture ball rolling will take leaders role modeling and interacting with subordinates and peers. You’ll also want to give regular reminders and use any and all examples that come up to show how it’s working. Periodic training and practicing with fictitious scenarios can also help an organization pursue a learning culture.

How important is getting leadership aligned?

Without the support of upper leadership, a Just Culture implementation will always be treading on thin ice, afraid that the effort could be suspended or cut entirely. In order to commit the organization to the change, top-level buy-in is essential. This is not to say that they must drive the implementation, though; leadership can come from any level.

Is on-site training available?

Absolutely; in fact, most of the training we do is on-site. Our advisors travel quite a bit to visit our clients to do training, and some clients have Certified Champions who do the same. We also have several options for online training available.

How does Just Cause work with the Just Culture Algorithm?

Just Culture and Just Cause fit together beautifully. The focus of Just Cause is the procedural aspect of justice; it spells out the rights and requirements of the law. Just Culture aligns with the substantive aspect; it spells out how to define what crime is and how to handle justice on both the personal and organizational levels. So the older Just Cause concepts and the newer Just Culture concepts complement each other quite well.

How many people from my organization would you recommend getting certified?

The answer to this question, which we get a lot, is very dependent on your organization. We recommend that you have representation from your operational leaders, your HR team, and your quality and safety personnel. How many employees and managers you have and how much culture change will be required will dictate how many Champions it will take to best serve your needs.

What are some methods for instilling daily use of the Algorithm by managers?

Console the error, coach the at-risk, punish the reckless irrespective of the outcome; when your managers are comfortable enough with this that it becomes the reaction to an event, you’re on the right track. Keep them practicing with fictional scenarios and encourage them to look at their normal everyday events through this lens. Once it becomes habit for them, you’re in a great position.

What are some metrics that will show that we’re making progress?

The obvious general answer here is better outcomes; you’ll start to see an improvement in how your processes work. To get there, though, you’ll see more depth in your investigative processes and your disciplinary records. You’ll see improvements in the design of your systems, and you’ll have more data to measure. And our team can help you establish more specific metrics to measure based on your organization and your industry.

What are some methods of tracking non-punitive coaching/consoling sessions in order to record repetitive behaviors?

What you choose to track will be based on your organization’s mission and values; you’ll set up team and individual guidelines that you’ll want to watch. In the coming months we’ll be reaching out more about a new tool for this very purpose. Keeping track of both sanctions and accolades will be important for use in performance reviews as well. That data will allow you to see trends in behavioral choices, and that will cue you to address them.


The Label “Reckless” and the Learning Culture

Reckless Behavior

Have you ever tried having a conversation with someone who has already decided that you were in the wrong? Not the easiest feat, is it? In fact, I wouldn't be surprised if you felt like you didn't care to have that conversation at all, given that the other person had already made up their mind about you. Perhaps you felt that even speaking would only add fuel to their accusations.

Your employee is likely to have the same response if, while you are investigating the event, he or she feels that you have already decided that he or she was reckless. Once a label of “reckless” has been applied, the investigation and the conversation—and opportunities to learn and prevent future risk or harm—will shut down. Even the perception that judgment has already been passed can cause an employee to withdraw from the conversation, making it difficult to learn more about the incident. This is one reason why we recommend getting as much information as possible about an incident before your discussion turns to what should have happened or what does procedure require. If an individual believes the judgment has already been rendered and it is one that is unfavorable, they are likely to be on the defensive and not prone to helping you better understand what happened.

This is not to say that you should hesitate to identify a behavior as a reckless choice if it genuinely is reckless. That may be the most accurate assessment of the person's choice, and accountability is called for. But before coming to that determination, some caution is advised. Ask yourself, "Is there anything more I can or should learn from this situation?" If the answer is "yes," this is a time to ask more questions, to air out the situation for further review and analysis. Once a behavior has been deemed "reckless," those learning opportunities are likely to be closed.

Your ST-PRA Guide To Producing Better Outcomes


The name is quite a mouthful, but ST-PRA is a powerful way to find the likely risks and errors of your staff within the systems and environment in which they work.

At Outcome Engenuity, we have pioneered the development of Socio-Technical Probabilistic Risk Assessment (ST-PRA). From our early work with the National Aeronautics and Space Administration (NASA) to several aviation companies, railroads, and healthcare institutions, we are renowned for bringing state-of-the-art modeling into highly sophisticated, technical environments.
Begin your discovery of what ST-PRA will reveal for you – we'll build the risk model; develop, test, and implement interventions; and measure their success.

Probabilistic risk assessment has been a useful tool in analyzing the risks in high-consequence fields such as aviation for decades. We’ve taken a tool that is primarily used in engineering and evolved it into something better – something that helps us examine the human elements in a process.

What makes ST-PRA an evolutionary step is that the entire risk model is made up of human errors and at-risk behaviors – attempting to model the as-is state of a predominantly human process. Where traditional PRA will generally model a technical system with some input of human errors, the ST-PRA attempts to model human errors and human variations – where one task may be performed many different ways within one risk model.

Healthcare in general can be complex, with risks that vary from equipment failure to human error or at-risk behavior to patient administration errors. Risks within healthcare have long been managed on an event-by-event basis. Building a socio-technical risk model allows healthcare to see risks in a manner previously unavailable. Even for those organizations that have collected event data, this data does not provide any visualization of the interconnectedness of choices and errors that might combine to lead to a medical error. Additionally, many quality assurance systems today do not account for the presence of behavioral norms that are part of the socio-technical model. Building a top-level event fault tree will allow hospitals and healthcare systems to have a much more inclusive model of risk.

ST-PRA is still growing. The unique techniques and rules required to effectively model human behavioral risks within a fault tree are still advancing. Given, however, that human errors and behaviors are major contributing factors to most accidents in high-consequence industries, this is a development effort that must continue – despite the uncertainties in the socio-technical aspects of risk modeling.

Let’s look at aviation to help explain the origins of ST-PRA. Taking a historical view, probabilistic risk assessment (PRA) has an extensive application in strictly mechanical systems, where engineering objects (devices, vehicles, systems, subsystems) have been structurally analyzed for risk possibilities using a variety of tools (e.g., fault trees, failure modes and effects analysis, hazards analysis). This approach has been particularly successful in providing design guidance. However, we know that these same objects are placed and used in operational environments in which many other risks can affect the risk of failure. For example, weather (such as a sea air environment) may significantly accelerate corrosion properties of an aircraft. Humans who work with mechanical objects may significantly accelerate the chance of failure due to mere handling or testing of systems as well as collateral damage due to human error.

In recent years, the PRA of mechanical systems has been augmented with some incorporation of human error models (e.g., organizational factors, ergonomic considerations), which included performance shaping factors. This enhancement has been difficult and slow primarily because of the lack of human error data precise enough for design engineers. Additionally, because operator environments and operator use of equipment will vary between operators, it is also difficult for the design engineer to model operational risk. In contrast to PRA of mechanical systems with some limited modeling of human errors, socio-technical probabilistic risk assessment (ST-PRA) is the PRA of socio-technical systems with limited modeling of equipment failure. It is a structured process for building a risk model from probability estimates of individual human errors and at-risk behaviors.
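To make the idea of building a risk model from individual probability estimates concrete, here is a minimal sketch of the two basic fault tree gates. This is not Outcome Engenuity's actual tooling; the scenario and probability values are invented for illustration, and the formulas assume independent inputs.

```python
def or_gate(probs):
    """Event occurs if ANY input occurs (independent inputs):
    P = 1 - product(1 - p_i)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def and_gate(probs):
    """Event occurs only if ALL inputs occur: P = product(p_i)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical estimates for a dispensing error: the top event occurs
# if the wrong drug is selected, OR a label check is skipped AND the
# labels are similar enough to be confused.
p_wrong_selection = 0.001    # human error
p_skip_label_check = 0.05    # at-risk behavior (a shortcut)
p_confusable_labels = 0.02   # system condition

p_top = or_gate([p_wrong_selection,
                 and_gate([p_skip_label_check, p_confusable_labels])])
print(f"P(top event) per opportunity ~ {p_top:.6f}")
```

Even this toy tree shows what the prose describes: human errors and at-risk behaviors sit in the tree as basic events alongside system conditions, so interventions can be tested by changing a branch probability and recomputing the top event.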

If you're interested in ST-PRA for your facility or organization, please call us or inquire here for additional details.

Assess Management Choices With The Just Culture Algorithm


Post by: Ellen McDermott, Advisor. 

When using the Just Culture Algorithm™, you have a tool that can help you better understand and assess not only the choices of your front-line personnel – the nurse interacting with patients, the pharmacist dispensing medications, or the police officer responding to an emergency call – but also the choices of those in leadership roles.

When you move up in the ranks of an organization from a front-line position to management or some other leadership role, you will most likely have additional job requirements imposed upon you – things that generally fall under the umbrella of "leadership" duties. In fact, merely having a conversation with your employees may become a duty. Say, for example, you are a team leader: you have two people reporting to you, and of course you have a boss you report to. One of your employees does something questionable – let's say he got caught using a cell phone for personal reasons while on the job, which is against the rules. Your boss doesn't think the situation warrants the employee being brought before him, but he does tell you to talk with the employee and address the situation. You are clearly and directly told to talk with the individual. But then you don't. Things get busy, you put it off, you keep putting it off, and after two weeks it no longer seems like that big of a deal, so you decide there is no need to talk to your employee. Did you breach a duty? Yes. Your boss placed an expectation, a requirement, on you to speak with your employee, and you failed to meet that expectation.

Now say that two weeks after the first incident, the employee is once more caught on his cell phone for personal reasons while at work. Since this is now a repeat occurrence, your boss wants to get more involved. But by this time there are two breached duties. The first is that the employee was, for the second time, using his phone on the job. The second, which should also be assessed, is that you, in your role as team leader, did not talk to your employee as you were expected to after the first event.

In a Just Culture model, all employees – from the front-line personnel through the ranks of management and then up to and including the CEO – will have duties imposed upon them, and even those duties that are “leadership” related can be better understood and assessed using the Algorithm.

Just Culture Advisor, Ellen McDermott of Outcome Engenuity, discusses how you may utilize the Just Culture Algorithm™ to assess the choices of your managers and leaders in the video below.

Integrating the Just Culture Concepts – Part Two

In the previous article (Part One), we discussed how an organization might better instill the Just Culture concepts into the “daily routine” of their operations. In that article, we addressed how the supervisor might use specific personal examples during regular meetings to outline behaviors and expectations, and how these are related to undesired outcomes. In this briefing, we will address another challenge related to integrating Just Culture, a question that is associated with how well the employee understands our expectations, and how they work within the system we created for them.

Once we have identified where someone has breached a duty (or duties), we, as a supervisor, are compelled to use the algorithms in order to evaluate the quality of the choice. A common error supervisors make is that they believe they are simply judging the employee who breached the duty.  The supervisor forgets that the event investigation process and the algorithm are best used to develop a healthy curiosity about the system and the behavioral choices that are being made within the area they manage or control. Good supervisors are, in effect, “Looking in the mirror” as they go through the algorithm and event analysis.

Supervisors who know this appreciate that the very first question asked under both the Duty to Produce an Outcome and the Duty to Follow a Procedural Rule ("Was the duty known…") is aimed at clarifying whether the employee and their peers knew and understood their duty. This is directly related to the expectations we set as imposers.

Additionally, when we ask "Was it possible to follow the rule?", we are checking whether it was physically and logistically possible for the employee to follow the rule as written and used. For example, a client discovered that their rule required taking inventory at the beginning of each shift, a process that objectively took 20 to 30 minutes to conduct accurately. However, the rule in place gave employees "a maximum of ten minutes" to complete the inventory. In the years between the rule being written and the event, many more items had been added to the inventory, making it impossible to follow the rule as written.

Supervisors who understand that when they are assessing a breach of duty they are also evaluating their own ability to set expectations, manage risk, and develop a reliable system are generally the most successful in integrating a Just Culture approach.


How Predictive Risk Modeling Improved System Design and Outcomes for an Airline

“We need more wing walkers.” I’ve done a number of ST-PRA models and one of my favorite stories is when we were doing a model on aircraft damage and the guys came into the room and said, “We need more wing walkers.” “Wing walkers” in the aviation world are people who will stand at each wing and will look up and make a time and depth perception decision on when the wing might hit something. And we said it may be true – we may need more wing walkers – but let’s go through this process called Socio-Technical Probabilistic Risk Assessment. We went through the process and what they found out at the end of the day was that the wing walker was the highest risk. The best thing to do was to clear everything out of the way so that the airplane couldn’t hit something. It took the probabilistic risk model to identify to us that what we thought was the reliable strategy was, in essence, the weakness within our system. So doing the right probabilistic risk analysis allows us the opportunity to see where we have paper tigers – where we look good on paper, but in reality we’re not there.

John Westphal tells a story about how predictive risk modeling improved system design and outcomes for an airline in the video below.

Integrating the Just Culture Concepts – Part One

When working with organizations, we are often asked how they might better instill the Just Culture concepts into the "daily routine" of their operations. A common belief is that the algorithm and coaching concepts are used only when there is an identifiable "event" – something with an undesired outcome.

Because the Just Culture concepts have been developed to identify behaviors that are at-risk, integrating the terminology and viewpoints into daily practice before something happens is a more successful approach. Preparation starts with supervisors and managers initiating discussions with their staff on the primary risks inherent in their job area, and outlining some of the expectations associated with behavioral choices. This is commonly done during regular staff meetings, where the supervisor can introduce the foundational aspect of Just Culture: the organization focuses on the behavioral choice instead of the actual (or potential) undesired outcome.

Discussing how the organization will manage Human Error, At-Risk Behavior, and Recklessness is important. It is essential, though, during early integration efforts, for the supervisor to define how at-risk behavior presents the largest vulnerability to the customer, the organization, and the employee.

While there are many methods for the supervisor to employ when having this discussion, we have found it to be very helpful when the supervisor takes some time to carefully observe their own behavioral choices at work, and then use some of their own at-risk behaviors to illustrate the concept and the possibility of harm. “Humanizing” the behavior makes it real for the employees, and they are more willing to open up to a conversation about short-cuts in their department or environment when the supervisor presents actual examples and “owns” their personal potential for engaging in the behavior.

Importantly, supervisors need to be prepared to regularly use examples at every opportunity. Informal meetings or discussions are the perfect place to embed the concepts, and before long the expectations associated with a Just Culture become second nature.

Self-Reporting: Part Two

Post By: John Westphal, Outcome Engenuity Advisor

We recently did a write-up on How to Measure the Effects of the Just Culture Principles within our Organizations. In discussing this question, we analyzed the use of the self-reporting metric and ways in which we can understand our under-reporting rate.

With that in mind, I would like to expand on how we can analyze the events that are self-reported to create behavior-based ratios, giving us a better sense of the impact of our coaching on the identified at-risk behaviors within our operational environment. As we move our organization toward a Just Culture, where we console the human error and coach the at-risk behavior, we look for an increase in self-reporting. For example, we would like to see our event database move from 100 reports in a year to 300. This in itself gives us some indication that we are moving in the right direction.

However, if we take the 100 baseline event reports and analyze which of them contained at-risk behaviors, we may find that 20 did, giving us a 1-to-5 behavior-based ratio. Doing the same analysis for the 300 self-reported events we currently see, we may find that 30 contained at-risk behaviors, giving us a 1-to-10 behavior-based ratio. From the comparison of these two ratios, we can have some confidence that our managerial and peer-to-peer coaching is affecting the rate at which at-risk behaviors occur within the operation. This metric also continues to pay dividends once our self-reporting levels off.

I hope that as we explore these questions, the interconnectedness of the five skills (Values and Expectations, System Design, Behavioral Choices, Learning Systems, and Justice Systems) becomes more apparent. To increase self-reporting, we have to create a culture of learning, one where employees are willing to report their at-risk behaviors. At the same time, our learning system has to be sophisticated enough to identify at-risk behaviors within events, allowing us the opportunity to address a principal driver of risk to our organizational values with effective coaching and system redesign.
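The ratio arithmetic above can be checked with a few lines of Python. The report counts are the illustrative figures from this article, and `behavior_ratio` is a hypothetical helper name, not part of any product.

```python
def behavior_ratio(at_risk_reports, total_reports):
    """Return the behavior-based ratio as (1, N): one at-risk
    report per N self-reports overall."""
    return (1, total_reports // at_risk_reports)

# Baseline year: 100 self-reports, 20 containing at-risk behavior.
baseline = behavior_ratio(20, 100)   # a 1-to-5 ratio

# Current year: 300 self-reports, 30 containing at-risk behavior.
current = behavior_ratio(30, 300)    # a 1-to-10 ratio

print(f"baseline 1:{baseline[1]}, current 1:{current[1]}")
```

The comparison only works because both numbers are ratios: raw counts of at-risk reports actually rose (20 to 30) even as the rate per report fell, which is exactly the signal that reporting is up while at-risk behavior is down.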