It is a sign of the times that I recently found myself undertaking a short training in terrorism prevention. I was involved in a public event during a time of heightened alert, and the organisers needed some volunteers to position themselves around the building and look out for anyone behaving in an unusual way. I found one of our instructions particularly helpful: ‘You aren’t looking out for an incident that might happen’, the trainer told us. ‘You are actively hunting for an incident that will happen at some point. It may be here and today.’ This simple statement changed my frame of mind completely. I had expected to apply a kind of general curiosity about people approaching the building. Instead, I found myself exercising an active vigilance that was quite new for me.
The instruction also set off a quite different train of thought: about detecting errors in medicine. I started to wonder how we might go about our work as doctors if we regarded adverse events as inevitable in everything we carried out, and considered it our duty to hunt for them in the same way I was being invited to look for trouble as a volunteer. What might we learn if we did this not only in relation to the big mistakes that sometimes lead to disability and death, but also hunted just as diligently for the smaller omissions we commit all the time in terms of communication, record-keeping and so on, and that result in suboptimal patient management, patient dissatisfaction and complaints?
Mistakes are very common in medicine, and have been studied a great deal. We know, for example, how cognitive biases often lead us to misjudge information or cling on to unjustified conclusions.1 We also know how minor failings can lead to serious harm, through the so-called ‘Swiss cheese’ effect,2 when small errors in communication by different individuals all line up to cause disastrous effects. Most practitioners are now aware of the duty of candour to patients when things go wrong,3 and the need to use our mistakes as material for learning.4 In spite of this, we still often treat error as a surprise or an aberration, not as something to be expected and actively sought. Drawing on my brief training in surveillance – combined with experience not long afterwards as a hospital in-patient and exposed to problems with co-ordination of care5 – I would like to propose that we need two significant changes in our attitude to error, as shown in box 1.
Hunting for medical errors: two proposals
We should accept that medicine is an error-generating system, expect that failings will continually occur, and seek them actively.
We should ask patients and carers directly about failings in care, whether small or large, individual or collective.
Medicine is an error-generating system
From a theoretical perspective, medicine is a complex adaptive system. Healthcare generally takes place within a large network of interacting stakeholders – patients, families, carers, professionals and teams. Such numbers of people simply cannot share all their thoughts, motivations and reasoning processes with each other sufficiently to make sure that they are always working in concert. Failures, omissions and mistakes will always arise in the spaces between people’s actions, like weeds in a flower bed. According to complexity science, this tendency is inherent within all complex adaptive systems, and cannot be addressed by rules and regulations alone.6 It requires active monitoring on the ground, and continuous correction as each specific problem arises. We should not wait for it: we should go out there to seek it.
At a practical level, the assumption that healthcare processes continually generate errors is implicit in much that doctors and teams already do in order to avoid risk. In most operating theatres, for example, someone is delegated to read aloud the WHO surgical safety checklist before every procedure. This 19-item questionnaire covers the most common surgical and anaesthetic threats to safety.7 Its routine use has reduced serious complications and deaths by over a third.8 Similar checklists are also now available for medical ward rounds, covering such items as the patient’s identity, hand hygiene, prophylaxis of venous thromboembolism, reviewing any decision about resuscitation, along with cannula and catheter checks, and other standard precautions.9
Although such checklists may seem cumbersome at first, they are entirely proportionate to the number of clinical tasks that need to be reviewed for most patients. Using them is also part of a wider mindset that the medical ethnographer Rick Iedema has aptly described as ‘chaos limitation’ (R Iedema, personal communication, 2017). This means having an explicit focus on continuity, coordination and quality, and paying extreme care to each patient’s management, progress and care decisions. Given the nature of the systems in which we work, there is a case for applying ‘chaos limitation’ as a governing concept in all healthcare.
Ask for negative feedback
My other proposal, perhaps more controversially, is that we should identify failings by continually seeking negative feedback about care from the people who observe it most closely: patients and carers. Somewhat provocatively, I am tempted to suggest that we should ask every one of them: ‘What have we got wrong in your care today?’ In reality, it would probably be fairer and better for our learning if we framed the question in a more balanced way: ‘What is going well for you today? Is there anything that hasn’t gone so well?’ A great deal would depend on the doctor using the right tone of voice and body language to signal that the inquiry was not a mere ritual, designed only to elicit compliments or banalities. However the question is framed, we should ask for feedback in the expectation that there will have been minor errors in communication and the process of care in most instances, that patients are very likely to be aware of these, and that we can benefit from learning about them in order to prevent further harm.
Experience suggests that transparency of this kind, and a demonstrative openness to criticism as well as gratitude, would lessen dissatisfaction rather than provoke it. What will also matter is whether doctors are willing to hear feedback that sheds unflattering light on themselves and their teams and does not always fit their idea of ‘constructive criticism.’ As a non-medical colleague pointed out recently after a consultation that was interrupted three times by nurses entering the room without an apology, he had neither the duty nor inclination to be constructive – simply the wish to point out unprofessional behaviour that had prevented him talking openly about his fears of cancer.10
Many doctors may feel uncomfortable in seeking negative feedback so directly. Equally, there will no doubt be occasions when patients and carers withhold it out of fear that they may be penalised if they speak truth to power. It may take time for relationships between doctors and patients to evolve to the point where it feels quite natural to have such conversations, so that they become routine. Nevertheless, that is the point we need to reach. If we seriously want to learn from our errors and failings, we should start hunting for them actively, with vigilance and real curiosity.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.