Human Factors Challenges & AI.

“An error is simply a failure to adjust immediately from a preconception to an actuality”.

John Cage.




The majority of mishaps are still blamed on human error. Human error began to emerge as a focus of accident investigations in the 1970s, following disasters such as Three Mile Island and Tenerife, supplementing the earlier engineering-led focus on equipment and technical failures.

Human error is now widely accepted throughout safety-critical domains as the cause of the majority of incidents (50–90 percent, depending on the domain). The United States National Highway Traffic Safety Administration (NHTSA 2018) attributes 94 percent of road crashes to driver error, a figure often quoted as “94 percent of serious accidents are caused by human error.”

Human error is involved in the majority of the UK Civil Aviation Authority’s (CAA) ‘significant seven’ accident causes, and medical error is estimated to be the third leading cause of death in the United States. It is no surprise that statistics like these are used to justify automating jobs or imposing stricter behavioral controls (i.e. rules and procedures), with penalties to deter noncompliance.

Accidents still happen despite (and sometimes because of) safety mechanisms like automation, behavioral restrictions, and sanctions.

Indeed, in many fields, such as aviation, road, and rail transport, the decline in accident rates in more industrialized countries has now reached a plateau.

The traditional understanding of human error has taken us this far, but it has also exposed the systemic nature of the ‘leftover’ errors, as well as the limitations of existing models and methodologies.



I: Human Factors and Biases

Case study: The Human Factors and Root Cause Analysis Exercise.

During my training sessions, I give my students an incident scenario, divide them into groups to discuss it, then ask them to present their findings and debate them with the other groups. My main observation: NONE of the groups’ analysis results matched any other group’s!

Each group had a different perspective on the same incident. These different perspectives are caused by their different backgrounds, personalities, and focus, hence, different approaches, and consequently different analysis results.

Keywords for this phenomenon: investigation bias, cognitive bias, confirmation bias, availability bias, and hindsight bias, among others.

II: The Research-Practice Gaps

In their paper “The Research-Practice Relationship in Human Factors and Ergonomics: An International Survey of Practitioners”, Steven T. Shorrock and Amy Z.Q. Chung found a number of discrepancies between research and practice related to organizational affiliation, research/application involvement, experience, and membership in the Society.

Ironically, HFE research may serve least well the practitioners with the most capacity to effect change in companies (those doing more application, with more experience, working outside academic/research institutions). Further research is probing deeper into the issues raised; meanwhile, the research-practice gaps could be narrowed by addressing issues at three levels: the research itself, the organization, and individual practitioners and researchers.

Do aviation’s governing and industry bodies (ICAO, EASA, FAA, and IATA) have the capacity to bridge this gap?

III: Human Factors Relativity 

Some would argue that the “Systems Thinking” mentality has gone too far, mixing socio-economic, political, and psychological levels to the point where the dots can no longer be connected in practice!

The aviation industry’s empirical, classification-oriented mindset rejects complex, heavily theoretical interpretations of the aviation system that do not yield a practical tool that works.

In the real world, HF means two different things in two environments: on one hand, the practical, day-to-day understanding and practice of HF by operators; on the other, the perfectionist perspective of HF held by academia and high-level researchers.



IV: AI (Artificial Intelligence) and the Model of Everything (MoE)

FRAM, SHERPA, STAMP, Net-HARMS, TRACEr, and TRACEr-lite are some examples of cutting-edge HF analysis frameworks. Each has its pros and cons, but how practical are these models? Can our safety officers use them easily? If not, should we simply fall back on the familiar Swiss cheese, Bowtie, or Fishbone analyses? Would that make us “old school”?
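Part of the appeal of the “old school” models is how little machinery they need. The core of the Swiss cheese model, for instance, reduces to a few lines of arithmetic: an accident trajectory succeeds only when every defensive layer fails at once. A minimal sketch, with invented layer failure probabilities (illustrative only, not real safety data) and an assumption of independent layers:

```python
# Minimal sketch of the Swiss cheese model: an accident occurs only
# if all defensive layers fail simultaneously. Assumes independent
# layers; the probabilities below are invented for illustration.

def accident_probability(layer_failure_probs):
    """Probability that every barrier fails at once (independent layers)."""
    p = 1.0
    for prob in layer_failure_probs:
        p *= prob
    return p

# Four hypothetical layers: design, supervision, preconditions, unsafe acts
layers = [0.1, 0.05, 0.2, 0.3]
print(accident_probability(layers))  # roughly 3 in 10,000
```

The independence assumption is exactly what the systemic critiques of Swiss cheese attack: in real organizations, the holes in the layers tend to line up for shared, systemic reasons.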

Consider, by analogy, the Theory of Everything (ToE): a hypothetical framework for explaining all of the universe’s known physical phenomena. Researchers have been searching for such a model since the development of quantum physics and Albert Einstein’s theory of relativity in the early twentieth century.

Back to our focus, and in our language, it is a quest for the Model of Models (MoM), or the Model of Everything (MoE).

The main challenge for classical HF models is obtaining reliable risk data. These models typically have a huge number of variables and must cope with a great deal of uncertainty. Artificial Intelligence (AI) is used to develop effective solutions to these issues: AI assesses risk using algorithms that fall into three basic categories: Expert Systems, Artificial Neural Networks, and Hybrid Intelligent Systems.
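As a rough illustration of the first of those categories, an expert system encodes specialist judgement as if-then rules over observed risk factors. The factor names, thresholds, and scoring below are invented for illustration and are not drawn from any real risk model:

```python
# Toy expert system for risk assessment: if-then rules encode
# specialist judgement. All factors and thresholds are hypothetical.

def assess_risk(factors):
    """Return a qualitative risk level from a dict of observed factors."""
    score = 0
    if factors.get("crew_fatigue_hours", 0) > 12:
        score += 2  # fatigued crew: strong contributor
    if factors.get("weather") == "severe":
        score += 2  # severe weather: strong contributor
    if factors.get("maintenance_overdue", False):
        score += 1  # overdue maintenance: moderate contributor
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(assess_risk({"crew_fatigue_hours": 14, "weather": "severe"}))  # high
```

The appeal of this category is transparency: every output can be traced to a rule a domain expert wrote, which is also its limitation when the rule base cannot keep up with the number of interacting variables the text describes.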

Artificial intelligence offers a valuable set of methods that can enhance the accuracy of critical-infrastructure risk assessment. Implementing these methods could therefore help inform future decisions on the safety and long-term viability of industrial infrastructure [1].

The advancement of AI technology and big data makes this process more intuitive and timely for decision-makers working within the aviation industry’s systems, producing more efficient outcomes than traditional approaches.

Explainable AI illustrates the why behind model predictions, allowing users to better understand and trust a model, and to recognize and correct inaccurate predictions. Interpretability, trust, and the usability of explanations have been the subject of many studies and practical simulations of human interaction with explainable AI. Whether explanations help identify problems in the underlying model, and whether explainable AI actually improves genuine human decision-making, remain open questions.

Using real datasets, we can objectively assess human decision accuracy across three conditions: control (no AI), AI prediction, and AI prediction with explanation [2].
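That three-condition design boils down to comparing decision accuracy across groups against ground truth. A minimal sketch, with fabricated stand-in decisions (not data from [2]):

```python
# Sketch of the three-condition evaluation: compare human decision
# accuracy under control (no AI), AI prediction, and AI prediction
# with explanation. All decisions below are fabricated stand-ins.

def accuracy(decisions, ground_truth):
    """Fraction of decisions matching the ground-truth labels."""
    correct = sum(d == t for d, t in zip(decisions, ground_truth))
    return correct / len(ground_truth)

truth = [1, 0, 1, 1, 0]
conditions = {
    "control":             [1, 1, 0, 1, 0],
    "ai_prediction":       [1, 0, 0, 1, 0],
    "ai_plus_explanation": [1, 0, 1, 1, 0],
}
for name, decisions in conditions.items():
    print(name, accuracy(decisions, truth))
```

In a real study the interesting question is whether the third condition reliably beats the second; the fabricated numbers here merely show the shape of the comparison.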

V. Final words

 Human error has contributed to the advancement of our understanding of human behavior and has supplied us with a set of methodologies that are still in use today. It is, however, still a nebulous concept. Its scientific foundation and practical application have been questioned.

While the intuitive character of HFE (Human Factors and Ergonomics) has undoubtedly helped it gain acceptance in a wide range of industries, its broad application both within and beyond the field has had unexpected repercussions.

Embracing that humans can only function as part of larger complex systems leads to the obvious premise that we must move beyond focusing on individual errors to comprehending and optimizing entire systems.

Some theories and methodologies use a systems approach to facilitate this transformation, and the level of knowledge that we have indicates that their adoption will increase.


[1] Artificial intelligence improving safety and risk analysis: A comparative analysis for critical infrastructure. Available from: [accessed Oct 13, 2021].

[2] Does Explainable Artificial Intelligence Improve Human Decision-Making? Available from: [accessed Oct 13, 2021].

