Why is Healthcare Still not Safe?

Written by Dr Dale Whelehan, PhD in Behaviour Sciences, Trinity College Dublin, the University of Dublin

At the turn of the century, a report was published recognising that ‘to err is human’. It acknowledged the fallible nature of humans and that error-making is part of work. Nonetheless, discussion persists in the media and the literature as to why healthcare is still not safe.

In this article, I will give an executive overview, based on empirical evidence and broad disciplinary considerations, arguing that such issues are attributable to a myriad of intertwining, complex and historically rooted variables. I will refer to the foundations of the patient safety movement, the growing influence of medical bureaucracy, flaws in research design and the assessment of progress, and an overemphasis on leveraging change at the individual rather than the organisational level.

Foundations of patient safety

First, we need to look back to fully understand the causal factors which led to the focus on safety in healthcare. While the question of why healthcare is still not safe could also be examined through the lens of injury to practitioners in their work, the focus of this article is on the end user of healthcare work – the patient. Patient safety is a healthcare discipline that emerged with the evolving complexity of health care systems and the resulting rise of patient harm in health care facilities. As a discipline, it aims to prevent and reduce the risks, errors and harm that occur to patients during the provision of health care. A cornerstone of the discipline is continuous improvement based on learning from errors and adverse events.

Day-to-day adverse events create a burden of harm which can compromise the safety of our patients. These include medication errors, diagnostic errors and healthcare-associated infections.

The issue of patient safety was first raised in 1847 by Semmelweis, who documented the difference in rates of medical harm across two wards of care. Together with scattered research over the following decades, a seminal study by Moser in the 1960s, ‘Diseases of Medical Progress’, as well as the Harvard Medical Practice Study, first began to succinctly identify instances of safety concern in healthcare. This was the beginning of the research movement, founded with input from the safety sciences. A successful application of this research, considered a gold-standard example to this day, was the application of human factors and safety science research to the discipline of anaesthesia by the researcher Jeffrey Cooper. Large-scale reductions in patient harm were recorded from this initial movement.

Alongside this stream of work, a parallel movement was forming, motivated by milestone cases of medical error. Amplified by several variables, including the media, these cases created political uproar, as each played on a different public fear: Libby Zion, who died as a result of poor supervision and overworked staff; Willie King, who had the wrong leg amputated; Josie King, who died in a prestigious and highly rated hospital as a result of dehydration and medication error; Elaine Bromiley, who went in for a routine nasal operation and died soon after. Growing hostility towards the healthcare professions ensued, with much of the blame placed on individuals within systems. There was a genuine desire to prevent such cases from ever happening again, leading to successful events such as the Annenberg I and II conferences, which brought together a broad range of stakeholders under the impetus to make healthcare safe at the turn of the century.

A third movement emerged around this time, fuelled in part by both aforementioned movements, but also by the publication of significant reports such as ‘To Err is Human’, which estimated that 44,000–98,000 US citizens die every year due to medical error. Political will to address these issues by ‘shaking up healthcare’ resulted, leading to the application of managerialist interventions. Medical practitioners became managers to deal with issues of safety, leading to the formation of an in-group thinktank for patient safety assurance. Industrialisation of medicine ensued, creating a corporate elite of healthcare staff who used superficial principles of evidence-based approaches from the safety sciences, in conjunction with Taylorist principles of scientific management, to create faux solutions branded as scientific interventions. This has become the dominant movement, and despite these efforts, a 2016 Johns Hopkins study concluded that little progress has been made and that medical error is the third leading cause of death in the United States.

Research design

A discourse has been ongoing in academic institutions since the biomedicalisation of patient safety research. A twenty-year review using retrospective chart analysis found little shift in the metrics previously used to define safety in healthcare – whether harm, adverse events, or medical error. The interchangeable use of these terms has made the research heterogeneous and often poorly understood. The use of inappropriate research methods to make sweeping generalisations and conclusions has become a routine problem in the discipline, further compounding the difficulty of truly identifying the interventions that make healthcare safer.

Along the journey since the publication of ‘To Err is Human’, we have lost one of the pivotal pieces of the research puzzle: involving the experts of the safety sciences – the psychologists, anthropologists and human factors engineers – as our methodological experts. Instead, we have biomedicalised the discipline, broadening its parameters to include aligned disciplines such as infection control, which has led to an ever-tightening squeeze on opportunity for the non-epidemiological disciplines. This positivist approach has meant that many disciplines have been effectively silenced. A fundamental realignment, recognising that medicine needs these disciplines to make evidence-based interventions, is important.

Organisational behaviourist Dr Kathleen Sutcliffe has conducted extensive research on healthcare safety and concludes that healthcare is a mindless and vulnerable system. She identifies a series of confounding variables influencing healthcare issues – ranging from micro issues such as individual performance, through meso issues such as interpersonal skills, to macro issues such as organisational design and culture. These manifest themselves through behaviours which lead to individualism, siloed ways of working, hierarchical governance, and an inability to meet growing public service needs. The growing focus on performance science among psychologists in recent decades has also identified a link between staff wellbeing and patient safety. While healthcare workers have always worked above and beyond their vocational requirements, the compounding effect of COVID-19 on levels of burnout in the healthcare workforce is likely to significantly influence healthcare safety efforts.

Culture

One of the growing issues around patient safety is the risk of a poor patient safety culture. When individuals work together in teams, rather than as groups of individuals with personal interests, a richer and fuller understanding of phenomena can be achieved. Unfortunately, healthcare has a long way to go in achieving this vision. Ethnographic research by Dr Sutcliffe found that healthcare workers continue to perform well at the individual level, but fail to form the level of team cognition that would allow them to communicate at a system level, and thus to meet the higher, system-level requirement of patient safety assurance.

Leadership should be the focus going forward, at all levels, with reduced emphasis on quality reporting as the sole metric of safety evaluation and increased effort on building system resilience capability. Supporting such strategic endeavours is the need to build a culture of patient safety which is honest, reflective, constructive and ethically driven. We need to reach the point in healthcare where, as with the black box in the airline industry, adverse events are treated as opportunities to learn about causal effects rather than as triggers for regulatory oversight. To achieve that, we first need to build psychological safety in our teams, where blame culture is removed and individuals are recognised as just one cog in a complex machine that attempts to make the system as hazard-free as possible. For too long, blame has fallen on scapegoating the individual as the cause of an egregious error; instead, system ownership is needed to build an authentic safety culture, away from biomedicalisation and managerialism.

The Solution: A highly-reliable system

So how do we build such desired states? We have two approaches to safety. Safety-I focuses on accident causation in risk management. It identifies root causes which differ from normal work, and it is managed through regulation. It is biomedically driven and bureaucratically positivist, and includes linear, sequential-chain models such as Heinrich’s pyramid and the Swiss cheese model. This approach alone is problematic, as it assumes relationships between causes but ultimately blames people for making healthcare unsafe.

Safety-II, on the other hand, draws on accident theory and high reliability organisation theory. It is interpretative in approach, and focuses on making real risks and consequences more apparent so that they can be reversed. This resilience engineering also builds people’s capability to adapt and respond dynamically. It recognises that people are performing exceptionally well despite trying conditions. Instead of focusing on causal outcomes, accidents and errors are understood to happen naturally in complex systems. A high reliability organisation (HRO) is one which can reduce such failures where they would normally be expected. Serious accidents in high-risk, hazardous operations can be prevented through a combination of organisational design, culture, management, and human choice. High reliability organisations seek to organise in ways that increase the quality of attention across the organisation, enhancing people’s alertness and awareness of details so that they can detect the subtle ways in which contexts vary and call for collective mindfulness. Particular behaviours are needed to build an HRO:

  • Treat anomalies as symptoms of a problem with the system, building anticipation and learning opportunities.
  • Build situational awareness.
  • Refuse to simplify complex problems.
  • Commit to resilience – develop the capability to detect, contain, and recover from errors. Errors will happen, but we are not paralysed by them.
  • Defer to expertise – follow the typical communication hierarchy during routine operations, but defer to the person with the expertise to solve the problem during upset conditions.

These behaviours are particularly important because safety is a dynamic non-event: dynamic because it is continually preserved by human adjustment, and a non-event because successful outcomes produce nothing visible.

Conclusion

To conclude, I rephrase the question from ‘why is healthcare still not safe?’ to ‘how do complex healthcare organisations achieve safe, reliable, resilient performance under trying conditions?’. This allows us to focus on the organisational and behavioural levers we need to pull to effect change. A fundamental rethinking of the patient safety discipline is needed: a focus not on scrutinising the people within the system as the cause of safety issues, but on the system itself. The “medicalisation” of the patient safety movement has placed it under the hegemony of bureaucratic, industrialised medicine and left the movement with little hope of truly understanding the basis (social and psychological, as opposed to medical) of accidents in healthcare delivery. If we want to make healthcare safe, we need to:

  1. Achieve greater consilience and reduce scientific-bureaucratic technocracy, making human factors specialists the subject-matter experts and healthcare workers the content experts.
  2. Focus on new behaviours and culture – rejecting approaches in which rule-bound organisations are considered safe, and instead focusing on safety culture. Reduce hubristic behaviours and build the capability to prepare for the unknown. Safety is not an objective quality, but an activity created and destroyed every minute.
  3. Shift the research back towards the evidence base rather than cherry-picking the parts of patient safety we like, such as checklists. Redefine our understanding of what is, and is not, patient safety research. Similarly, create greater understanding of distinct latent constructs such as medical harm and medical error, and understand the implications of these terminologies for media dissemination. Foster the energy of the patient safety movement and engage in meaningful patient engagement when designing studies.
  4. Start by understanding the dynamic nature of people–system interaction and its influence on safety. There is a need for supra-organisational interventions which transcend both the individual-behavioural and the organisational levers of patient safety.
  5. Be driven by science, not by popular movement. Safety in healthcare has a fluid and fractured intellectual history. Vested interests, cognitive dissonance, and oversimplification of complex systems do nothing, in the end, to make healthcare safer.

