Human Error in Diving: Is it really that simple?


GLOC


It is easy to ascribe ‘human error’ to diving incidents because we often lack details about what happened. It is also perversely satisfying to blame someone, an individual, rather than attribute the event to a system issue. Part of this is because we can then internalise it, distance ourselves and say that “we wouldn’t have made that mistake”, a natural human reaction.

Unfortunately, blaming individuals, calling them ‘Darwin Award winners’ or pointing out their stupidity, does nothing to help identify the real issues which led to the adverse event. Nor does it help improve learning, because those who have had near misses become scared of the social media backlash when posts are made about events whose outcomes seem so ‘obvious’ in hindsight.

This short piece will cover the Human Error framework from James Reason and look at ways in which we can use this to improve safety and human performance in diving.

The Swiss Cheese Model

Professor James Reason described a concept called the Swiss Cheese Model in his book Human Error. This was a linear process by which layers (defences and barriers) were put in place by organisations, supervisors and individuals to prevent adverse events from occurring. Unfortunately, because nothing is perfect, these layers would have holes in them which would allow errors to propagate on a trajectory. The layers would consist of items such as organisational culture, processes and procedures, supervision and leadership, fatigue and stress management, equipment and system design, and adequate human and technical resources, all with a view to reducing the likelihood of an adverse event.

The model is split into two parts, latent conditions and active failures.

Active Failures

Active failures are the unsafe acts committed by divers who are in direct contact with the system or activity. They take a variety of forms: slips, lapses, mistakes, and violations. In diving this could be an inability to donate gas in an out-of-air (OOA) situation, an inability to ditch a weight belt, misreading a dive computer, or going below gas minimums.

Latent Conditions

Latent conditions are the inevitable “resident pathogens” within the system. They have two negative effects: they create ‘error provoking’ conditions within the immediate diving situation (e.g. time pressures, understaffing, inadequate equipment, fatigue, and inexperience), and they create long-term weaknesses or deficiencies in the defences (unreliable alarms and indicators, unworkable procedures, design and manufacturing deficiencies, and so on). Latent conditions may lie dormant and unnoticed within the system for a significant period of time before they combine with active failures and local triggers to create an accident opportunity. Shappell and Wiegmann expand on this in their HFACS model.

Reason’s simplified model demonstrates that an adverse event occurs only when all the holes line up. However, as we know, the world is a dynamic place and humans are somewhat variable in nature, so the holes move and change size too. This means that a breach in a barrier further up the model may be caught by a barrier lower down. Often it is the human in the system who provides this final barrier. The video below shows the model in action.

Animated Swiss Cheese Model from Human In the System on Vimeo.
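
To make the idea of aligned holes a little more concrete, here is a minimal sketch of the model as a toy simulation. It is my own illustration rather than anything from Reason's book, and the barrier names and 'hole sizes' (the probability that a layer fails to stop an error) are assumed values, not real data. In the sketch, an adverse event only happens when an error slips through every layer:

```python
import random

# Toy Monte Carlo sketch of the Swiss Cheese Model. The barriers and their
# "hole sizes" (the probability that a layer fails to stop an error) are
# assumed values for illustration only, not data from any real analysis.
BARRIERS = {
    "organisational culture": 0.05,
    "procedures and checklists": 0.10,
    "supervision and leadership": 0.10,
    "equipment and system design": 0.05,
    "the diver (final barrier)": 0.20,
}

def adverse_event(barriers):
    """An adverse event occurs only if the error passes through every layer."""
    return all(random.random() < p for p in barriers.values())

def simulate(n_trials=1_000_000):
    """Estimate how often all the holes line up over many trials."""
    return sum(adverse_event(BARRIERS) for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    print(f"Estimated adverse-event rate: {simulate():.6f}")
    # With these assumed values the analytical rate is
    # 0.05 * 0.10 * 0.10 * 0.05 * 0.20 = 0.000005, i.e. most error
    # trajectories are stopped by at least one layer.
```

In reality the layers are neither fixed nor independent, which is exactly the point of the animated version above: the holes move, and the diver at the sharp end is frequently the barrier that catches what everything else has let through.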

Biases

Unfortunately, when we look at incidents and accidents for lessons learned, we come across multiple biases which cloud our decision-making processes. A few are covered below, and more are covered in another blog post I have written.

Firstly, we are biased in the way we think about time as a factor in incidents. We are used to time being a linear process (because it is!), but adverse events are often a combination of events which don’t necessarily follow the same timeline and which involve different systems we are unable to see. After an incident we can piece the jigsaw puzzle together, but in real time this is much more difficult to do given our limited short-term memory, and as such the decisions we make are based on incomplete information.



Secondly, we suffer from hindsight bias, which means that after the event we can see factors which were relevant but of which we were unaware at the time. An example might be someone who is not experienced in an overhead environment becoming disoriented and drowning, with observers saying “I knew this was going to happen, you only needed to see their attitude to safety”, and yet that diver may have undertaken similar activities without an issue. ‘Knew’ implies a high level of certainty, whereas we cannot predict with 100% accuracy how things are going to turn out.

"There is almost no human action or decision that cannot be made to look flawed and less sensible in the misleading light of hindsight. It is essential that the critic should keep himself constantly aware of the fact."
Hidden QC

Finally, outcome bias. We judge adverse events which end in injury or death as much more serious than events in which nothing bad happened, even when the behaviour involved is identical. An example might be a diver who didn’t analyse their nitrox prior to a dive and got away with it, compared with another dive on which the gas station technician had mixed up cylinders and filled theirs with 80% instead of 32%, and the diver suffered oxygen toxicity and had a seizure at depth. The decision not to analyse the gas was the same in both cases; only the outcome differed.
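
To put some rough numbers on why that second scenario is so dangerous, here is a quick partial-pressure sketch. The 30 m depth and the commonly quoted recreational ppO2 limit of about 1.4 bar are my own assumptions for the illustration, not details from any specific incident:

```python
# Rough partial-pressure sketch. The 30 m depth and the ~1.4 bar recreational
# ppO2 limit are assumptions for this illustration.

def ppo2(fo2: float, depth_m: float) -> float:
    """Oxygen partial pressure (bar) at a given depth in seawater."""
    ambient_pressure_bar = depth_m / 10 + 1  # roughly 1 bar per 10 m, plus surface pressure
    return fo2 * ambient_pressure_bar

for fo2 in (0.32, 0.80):
    print(f"FO2 {fo2:.0%} at 30 m -> ppO2 = {ppo2(fo2, 30):.2f} bar")
# FO2 32% at 30 m -> ppO2 = 1.28 bar (within the usual limit)
# FO2 80% at 30 m -> ppO2 = 3.20 bar (far above it: a serious toxicity risk)
```

Both fills look exactly the same on the pressure gauge; only an analyser would have shown the difference before the diver got in the water.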
 
So what can we do about error management? It is not enough just to know that errors exist and that they are ‘bad’.

James Reason identified 12 principles of error management in his book Managing Maintenance Error: A Practical Guide. While they were originally written for aviation maintenance, they are equally applicable to diving (or any other walk of life, for that matter!).

  1. Human error is both universal & inevitable. We can never eliminate human error but we can manage it and moderate it.

  2. Errors are not intrinsically bad because success and failure originate from the same psychological roots. If you think about it, some forms of innovation are purely down to error, and development is based around learning from failure (imagine a baby learning to walk). It is what we do with errors that is important.

  3. You cannot change the human condition, but you can change the conditions in which humans work. Situations and environments vary enormously in their capacity to provoke unwanted and unexpected actions. Consequently, we cannot identify and train for every adverse situation, but we can identify likely error-creating conditions such as high workload, novel experiences, time pressure and poor communications.

  4. The best people can make the worst mistakes; no one is immune from human error. We should recognise that the most skilled and knowledgeable often occupy the most responsible positions, so their errors can have the greatest impact when considered as part of a wider system.

  5. People cannot easily avoid those actions they did not intend to commit. Blaming people for their errors can be emotionally satisfying but is unlikely to resolve the systemic problem. Blame should not be confused with accountability though. Everyone should be accountable for his or her errors, acknowledge them and aim to reduce their likelihood in the future. ‘Be Careful’ or ‘Dive Safe’ is not enough!

  6. Errors are consequences, not causes, because errors have a history. Discovering an error is the beginning of a search for causes, not the end. Only by understanding the circumstances can we hope to limit the chances of recurrence. Context-rich stories are therefore really important for identifying precursors.

  7. Many errors fall into recurrent patterns. By targeting those recurrent error types, we can make the most of the limited resources we have.

  8. Safety-significant errors can occur at all levels of the system. We should recognise that making errors is not the monopoly of those who get their hands dirty at the sharp end! Paradoxically, the higher up an organisation an individual sits, the more dangerous the consequences of their errors. Think about a Course Director or Instructor Trainer who is passing on the wrong information. As such, error management techniques need to be applied across the whole system.

  9. Error management is about managing the manageable. Situations, and even systems, are manageable if we examine them with a systems mindset. Human nature is complex and is not manageable, so we shouldn’t try to isolate and engineer ‘safe’ human behaviour. Solutions which involve technical, procedural and organisational measures, rather than behaviour alone, are the ones most likely to last.

  10. Error management is about making good people excellent. Excellent operators routinely prepare themselves for potentially challenging activities by mentally rehearsing their responses to a variety of imagined situations - running mental simulations from the mental models we have. Improving the skills of error detection is at least as important as making people aware of how errors arise in the first place.

  11. There is no one best way to solve the human error conundrum because different types of human factors problems occur at different levels of the organisation and require different management and leadership techniques to solve them. Cultures add another dimension that needs to be taken into account too. The most successful interventions are based on closing the gap between work as imagined and work as done by involving those at the sharp end to help provide designs and solutions.

  12. Effective error management aims at continuous reform, not just local fixes. Unfortunately, there is always a strong tendency to focus on the last few errors, but trying to prevent individual errors is like swatting mosquitoes: the only way to solve the mosquito problem is to drain the swamps in which they breed. Reform of the system as a whole must be a continuous process whose aim is to contain whole groups of errors rather than single blunders, while at the same time looking for the weak signals which might point to a larger underlying issue.
According to Reason, error management has three components:
  • Reduction
  • Containment
  • Managing these so they remain effective
and it is the third component which introduces the most challenges, because you can’t magic up an ‘Error Management’ solution within an organisation or team, put something in place, and expect results to be delivered immediately. It takes time, commitment and investment - things which don’t always provide tangible results.

Without a tangible output, motivation is hard to sustain, especially if you are reducing an already small number to an even smaller one. So what is the incentive to focus on high performance?

Only you, the reader, can answer that question.

However, I would argue that when we make an active decision to break the ‘rules’ or commit a violation, we are assuming that nothing else in the system is broken and that ‘normal’ will happen. But everyone else in the system is human too, so are our assumptions still valid?

Finally, if you ever read 'human error' or 'diver error' in an accident report, the investigators either couldn't or wouldn't go further to find out why it made sense for the person to do what they did - what their local rationality was. Either way, ascribing 'human error' is unlikely to fix the latent conditions which led to the accident...

"In the end, “human error” is just a label. It is an attribution, something that people say about the presumed cause of something after-the-fact. It is not a well-defined category of human performance that we can count, tabulate or eliminate. Attributing error to the actions of some person, team, or organization is fundamentally a social and psychological process, not an objective, technical one."
David D. Woods, Sidney Dekker, Richard Cook, and Leila Johannesen

-----------------------------------------------------------------------------------------------

Footnote:

This post first appeared on the Human Factors Academy blog. The Human Factors Academy provides globally-unique classes to improve human performance and reduce the likelihood of human error occurring in SCUBA diving. These include eLearning, webinar and face-to-face training and development opportunities.
 