
Looking beyond the headlines of success and reflecting on our current medical system
Writer: Sophie Maho Chan
Editor: Ebani Dhawan
Artist: Kellerine Quah
Accurate diagnoses. Efficient workflow. Personalised care. The potential benefits of implementing artificial intelligence (AI) in healthcare seem indisputable — at least on paper.
As outlined in my previous articles in this series, we are already seeing the beginnings of a healthcare revolution. Supported by tech giants and regulators, algorithms are not only making strides in research and clinical settings but also entering our daily lives, as exemplified by FDA-approved apps that diagnose cancer and wearable sensors that issue heart attack warnings.
However, introducing AI to healthcare also brings enormous trade-offs that often go undiscussed. There are growing concerns that patients, clinicians and regulators are not adequately made aware of the wider consequences at stake. Even Dr Eric Topol, a well-known proponent of the movement, points out that “the state of AI hype has far exceeded the state of AI science”. As growing numbers of algorithms are approved and brought to market, we must ask ourselves some fundamental questions. What challenges, limitations and risks await? Can our current health systems cope with the changes ahead?
Media overhype — what tech companies and studies are not telling us
Despite all the promising stories, there is a reason why AI is yet to become commonplace in medicine. Most of the studies making headlines have only been tested on computers, never on real patients, where there could be real consequences. Tech start-ups often forgo publishing their findings in peer-reviewed journals, which takes additional time and money, and are instead left to their own devices when presenting data at promotional events and press conferences. Furthermore, for all the credibility an FDA approval implies, the ‘510(k) clearance’ used for algorithms requires neither peer review nor clinical testing. Companies deemed to meet a “culture of quality and organizational excellence” can even be pre-certified to release new algorithms and software updates without additional data review.
More alarming are claims that AI can outperform doctors. In a systematic review evaluating 81 peer-reviewed studies comparing algorithms to clinicians, over 60 stated in their abstracts that the algorithms were on par with, or better than, clinicians. However, only two of the studies were randomised clinical trials, and all but nine were retrospective, leaving them highly vulnerable to bias. Amplifying the misinformation is the publicity such studies attract, with claims that certain medical professionals will soon be replaced or no longer need to be trained. Yet how can the handful of doctors in a single study represent an entire medical field? While an algorithm may diagnose a single disease with greater accuracy, doctors are also responsible for distinguishing it from hundreds of other possible diseases and guiding treatment plans accordingly.
For AI to work, health systems must change
Even if methodological gaps and over-interpretations were ironed out, many more hurdles must be overcome before algorithms featured in research can be realised in clinical practice — the biggest being data availability. While the common narrative is that AI simply makes diagnoses and decision-making more accurate, what it actually does is replicate patterns found in data. Its output is only as good as its input; in the words of Jeff Gorke, “garbage in, garbage out”. For AI to genuinely improve health outcomes, the data we train algorithms on must be robust, abundant, unbiased and continuously updated — which is easier said than done.
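To make “garbage in, garbage out” concrete, consider a deliberately simplified sketch (written in Python with scikit-learn; the data is entirely synthetic and invented for illustration). The same model is trained twice, once on clean records and once on records where half of the positive diagnoses were never entered, mimicking the kind of incomplete notes described below, and then evaluated on the same held-out patients:

```python
# A toy illustration of "garbage in, garbage out" using synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Corrupt the training labels: half of the true positive diagnoses are
# dropped from the records, as if they were never entered.
rng = np.random.default_rng(0)
corrupted = y_train.copy()
positives = np.flatnonzero(corrupted == 1)
corrupted[rng.choice(positives, size=len(positives) // 2, replace=False)] = 0

# Identical algorithm, identical test patients; only the input data differs.
for labels, name in [(y_train, "clean records"), (corrupted, "corrupted records")]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    print(f"{name}: test accuracy {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The algorithm itself never changes; on a typical run, only the quality of the records it learns from decides how well it performs.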
Healthcare data is fragmented. No country possesses a unified system that integrates, for example, electronic health records (EHRs), pharmacy reports, genetic information and insurance records. Worse, these systems are often mutually irreconcilable; each speaks its own language. Every time a patient switches clinics or insurance providers, their data history is fractured. The situation is especially dire in the UK, where, for example, local GPs and specialist providers use different software. But even in countries with established EHR systems, like the US, clinician satisfaction is notoriously low and data quality is compromised; mistakes, incomplete entries and copy-and-pasted notes are frequently propagated.
To harness the power of AI, the healthcare community must unite and re-evaluate the rights of ownership, access and exchange of patient information. Ultimately, as argued in an editorial for Nature, all health data would need to be stored on a centralised ‘cloud’. This is the most cost-effective way of democratising and coalescing Big Data in medicine, and in turn not only streamlining the patient experience but also helping tech companies and researchers train algorithms. According to one survey, over 84% of health organisations support the use of cloud computing. With the likes of Google and Apple getting involved, ‘cloud data migration’ is already taking shape. However, this only raises further questions for the future of healthcare.
The overlooked dangers and problems of implementing AI
From the patient’s perspective, perhaps the biggest concern with digitising and automating medicine is data privacy. Health information is highly sensitive; records of genetic testing, overdoses, mental health issues and HIV status, for example, can be particularly damaging if they fall into the wrong hands. Under regulations like the EU’s General Data Protection Regulation (GDPR), anonymised data can be exchanged freely between health organisations and AI developers as long as it is intended for the public good. This is met with great scepticism, especially as medical data like retinal images can easily be used to re-identify patients.
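Retinal images are one route; a simpler, classic illustration is the linkage attack, in which an ‘anonymised’ release (names removed) is joined to a public register on quasi-identifiers such as postcode, birth date and sex, which together are often unique to one person. Here is a toy sketch in Python with pandas; every name, date and diagnosis is invented:

```python
# A toy linkage attack: re-attaching names to an "anonymised" medical release
# by joining it to a public register on quasi-identifiers. Invented data only.
import pandas as pd

anonymised = pd.DataFrame({
    "postcode":   ["N1 9GU", "SW1A 1AA", "N1 9GU"],
    "birth_date": ["1984-03-02", "1990-07-19", "1962-11-30"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["HIV", "depression", "overdose"],
})

public_register = pd.DataFrame({
    "name":       ["Alice Smith", "Bob Jones", "Carol White"],
    "postcode":   ["N1 9GU", "SW1A 1AA", "N1 9GU"],
    "birth_date": ["1984-03-02", "1990-07-19", "1962-11-30"],
    "sex":        ["F", "M", "F"],
})

# Postcode + birth date + sex is frequently unique, so the join restores names.
reidentified = anonymised.merge(public_register, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

No sophisticated technology is needed; a single table join undoes the ‘anonymisation’.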
Complicating this matter is data security. The health sector is notoriously susceptible to data theft and breaches. In 2019 alone, over two-thirds of health organisations in the UK reported some form of cyber attack, nearly half of which were virus or malware infections. Concerningly, at the root of many cases were healthcare staff sharing information with unauthorised people, failing to follow protocol and falling for malicious links, strongly indicating a lack of data security training. Other breaches are unintentional. In 2018, for instance, a coding error in software used by GPs led the NHS to share the confidential information of 150,000 patients without their consent.
Indeed, consent is a key issue that is often overlooked. There will always be a trade-off between making data available for AI development and securing patients’ consent. Where should we draw the line? Should we be concerned if data collected without consent leads to medical breakthroughs that end up benefiting those very patients? Regulators must address such questions urgently, especially in the face of developments that seek to use passively collected data, such as scrolling patterns on mobile phones, for medical diagnoses.
Returning to the medical impacts, AI may ironically do more harm than good — especially for certain demographic groups. Algorithms strictly adhere to their data; this means biased data trains biased algorithms, even when no bias is explicitly programmed in. We already know that health data is biased: certain groups are frequently underrepresented in biomedical research, and disparities in health literacy or native language prevent many people from contributing their health information. AI will only make this long-standing issue harder to dismantle, because health disparities have systemic roots that medical professionals and AI developers are often unaware of; yet these are the very people we rely on to correct the biases in the data. Making care dependent on digital access and literacy may also widen socioeconomic health gaps.
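A deliberately simplified sketch of how underrepresentation plays out, again in Python with scikit-learn and two invented patient groups whose outcomes follow different patterns: when group B makes up only 5% of the training data, the model learns group A’s pattern and should perform little better than chance on group B.

```python
# Biased data trains biased algorithms: a synthetic, illustrative example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(group, n):
    X = rng.normal(size=(n, 2))
    # In group A the outcome tracks feature 0; in group B, feature 1.
    y = (X[:, 0] > 0) if group == "A" else (X[:, 1] > 0)
    return X, y.astype(int)

# Group B is heavily underrepresented in the training set (5%).
XA, yA = sample("A", 1900)
XB, yB = sample("B", 100)
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluated on fresh, equal-sized samples, the accuracy gap is stark.
for group in ["A", "B"]:
    Xt, yt = sample(group, 5000)
    print(f"group {group}: accuracy {accuracy_score(yt, model.predict(Xt)):.2f}")
```

Nothing in the code singles out group B; the disparity emerges purely from who is, and is not, in the training data.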
Equally, while AI should in theory pave the way to more ‘accurate’ medicine, when mistakes are made, the injuries will be greater. An error in one algorithm has far more pervasive consequences than an error made by a single doctor. There is also a greater ethical concern over who should be held accountable for mistakes. One well-cited case is IBM’s Watson supercomputer, which consistently recommended ‘unsafe and incorrect’ treatments to cancer patients; only later was it revealed that the algorithm, supposedly trained on real patient data, had mainly been trained on hypothetical cases synthesised by a few doctors. Transparency between AI developers, health professionals and patients is vital.
Where does this leave us and how can we prepare?
Like it or not, AI is only going to continue advancing in both its value and use. There is a real potential for this to unlock a health revolution, but it is in no way a one-stop solution for all the problems we are currently facing and comes with many risks of its own. Implementing AI is not enough; health systems must adapt.
In addition to reorganising data, it is paramount to remember that the aim is not to replace doctors but to augment their work. By offloading the ironically robotic parts of their routines to algorithms, overworked doctors can have more time to de-stress, engage in higher-order decision-making and, most importantly, care for their patients. This is what sets humans apart from algorithms. As Dr Eric Topol argues in his book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, we face an opportunity to reinstate the fundamental emphasis on delivering ‘care’ in healthcare.
This will require a thorough rethink of how doctors are selected and trained. Perhaps we should place less priority on academic performance and instead nurture empathy and social skills. Likewise, rather than being trained for quick diagnosis, doctors could be encouraged to critically engage with the patient’s personal context and with algorithmic output, which in turn will require a stronger background in data science. Medical expertise must not be neglected, however; that is a real concern if doctors become overly dependent on algorithms. Ultimately, humans should retain a degree of oversight over, and responsibility for, the algorithms.
The potential of AI in healthcare can only be realised by forging relationships between patients, doctors, developers and regulators.