Successful AI implementation is impossible without clinician buy-in – clinicians need to be engaged and on side if AI adoption is to reach its full potential.
The good news is that clinicians’ attitudes towards AI are changing – research suggests that there is growing positivity around the opportunities that AI opens up. However, understandable reservations still persist, and it is important that both healthcare leaders and AI vendors pay attention to these concerns and take steps to resolve them.
Time constraints are a barrier to AI take-up
The NHS’s chronic workforce shortage, coupled with an ongoing Covid-19 backlog, means that clinicians are facing greater demands on their time than ever before.
Clinicians find themselves in a difficult position: they desperately need new technologies to ease their unsustainable workloads, yet they are so pressed for time that engaging with and implementing those technologies often feels like an impossible task. It’s not surprising that many are resistant to change.
For AI technologies to be successful, they need to put frontline clinical staff front and centre – tools should be designed with an understanding of how clinicians work and the challenges they face, while also giving clinicians the opportunity to direct the future of the technology.
AI as a collaborator, not a replacement
Although it is increasingly appreciated that AI assists rather than replaces humans, concern remains that AI could render certain clinical tasks obsolete, and perhaps even replace clinicians in some circumstances.
While understandable, this is far from the truth.
Although humans can’t compete with machines on the scale or speed of analysis, there are clear limits to AI’s capabilities. For example, AI struggles with information that falls outside the patterns it has been trained to recognise, and in most scenarios it cannot take on direct patient interaction.
In screening, AI is already being used to direct clinicians towards the cases that need closer attention. It gives clinicians more information, supporting highly accurate decision making while increasing productivity, and frees them to spend more time on higher-value work and in direct patient care.
A recent piece of ground-breaking research at Moorfields Eye Hospital showed that AI can match world-leading experts in detecting serious eye conditions: the system made the correct referral decision for over 50 eye diseases with 94% accuracy. This shows promise for using the technology to help clinicians pick up potentially sight-threatening conditions earlier in the disease process, and to prioritise patients who urgently require treatment.
Machines and health equity
There are understandable concerns that AI will inadvertently perpetuate, or even accentuate, unconscious bias in healthcare, which many believe is driving poorer access and outcomes for disadvantaged groups.
Bias can certainly emerge from the data sets used to train AI models: if the training data is skewed, the resulting model will be skewed too, meaning the system may not work as effectively for under-represented groups. It’s crucial, therefore, that the data behind AI systems reflects the diversity of the populations they serve – only then can they improve health equity rather than deepen inequality.
It’s worth acknowledging that purely human-run clinical practice is also biased. The difference, however, is that in AI systems, bias can be detected and corrected much more easily.
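As a loose illustration of how that detection can work in practice, the sketch below (in Python, using the pandas library; the patient records, group labels and column names are invented for the example) compares a screening model’s sensitivity across two demographic groups. A consistent gap of this kind is exactly the measurable signal a skewed training set tends to produce – and exactly what an audit like this can expose.

```python
# A minimal sketch (not from any specific NHS system): auditing a screening
# model's performance across demographic subgroups. The patient records,
# group labels and column names below are invented for illustration.
import pandas as pd

# Hypothetical evaluation results: one row per patient, with the model's
# prediction (1 = refer), the confirmed diagnosis, and a recorded group.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   0,   1,   0,   1,   0,   0,   0],
    "diagnosis":  [1,   0,   1,   1,   1,   1,   1,   0],
})

def sensitivity(df: pd.DataFrame) -> float:
    """Proportion of genuinely positive cases the model correctly flags."""
    positives = df[df["diagnosis"] == 1]
    if positives.empty:
        return float("nan")
    return (positives["prediction"] == 1).mean()

# A large, consistent gap between groups points to a model that works less
# well for one population than another.
for group_name, group_df in results.groupby("group"):
    print(group_name, round(sensitivity(group_df), 2))
# Output for the toy data above:
# A 0.67
# B 0.33
```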
Closing the care gap
The unprecedented pressures on the NHS are threatening its ability to provide care to the communities it serves.
AI holds significant potential to close this gap by increasing clinician productivity and widening access to care. But that potential cannot be realised without first understanding and addressing the concerns of clinicians – the frontline users of this technology.