The second article of this three-part series looks at the scope of current applications of AI in healthcare
Writer: Sophie Maho Chan
Editor: Ebani Dhawan
Artist: Lucia Gourmet
The global movement towards harnessing artificial intelligence (AI) in healthcare is undeniable. The AI healthcare market is estimated to reach a value of $6.6 billion by 2021 and to grow at an annual rate of 41.5% between 2019 and 2025. COVID-19 is only adding momentum to this trend; telemedicine is being encouraged to reduce hospital visits, and AI-driven data processing is proving vital for keeping up to date with research as well as for contact tracing.
As introduced in my last article, the potential roles of AI in healthcare are varied. Driven by the need for more accurate, cost-efficient and personalised medicine, it seems that implementing AI in healthcare is not a matter of if, but when. Everyone from tech giants to start-ups, academics to politicians is racing to produce, test and promote algorithms that can assist medical professionals, if not ‘replace’ them (an issue to be discussed further in the final article of this series). It is true that, currently, most algorithms are narrow in application and only a handful of medical fields actually benefit from AI in clinical practice. Experts predict, however, that this will not be the case for much longer; Dr Eric Topol even warns in his book Deep Medicine that “eventually no type of clinician will be spared”. While this seems drastic, the developments taking place are eye-opening.
Here is a glimpse into some of the frontiers of innovation.
AI: The Radiologist’s Assistant
Radiology is at the forefront of AI implementation. The US Food and Drug Administration (FDA) has been particularly supportive of this movement, compiling a list of several dozen approved radiology algorithms and even conducting public workshops on the topic. While our current medical system relies on professionals to draw on experience to visually diagnose medical scans, AI offers the potential for more quantitative, consistent and objective analyses.
AIDOC, an Israeli company, has five FDA-approved and CE-marked (i.e. certified in Europe) algorithms that accurately diagnose intracranial haemorrhages, C-spine fractures, large vessel occlusions, pulmonary embolisms and intra-abdominal free gas. Their algorithms are already used at 300 medical institutions globally, each designed to quickly identify abnormalities in medical scans, immediately alert clinicians and reduce the workload of radiologists. The last point is significant, as radiology currently suffers from rising demand but a shrinking supply of trained personnel; studies have found that radiologists are squeezed into interpreting up to one scan every 3-4 seconds, with error rates as high as 5%, attributed to fatigue and subjectivity. In comparison, AI only becomes more accurate as imaging data becomes increasingly available. A systematic review that screened over 30,000 studies comparing the accuracy of algorithms to that of expert radiologists found that the two are relatively matched. While many of these studies reportedly suffer from methodological issues, the trajectory is clear; experts believe AI will become more autonomous, even producing first drafts of radiology reports in the future.
With so much attention surrounding AI’s image-recognition capabilities, it is no surprise that many are keen to implement AI in oncology. Cancer is one of the leading causes of death worldwide and early detection is critical; 90% of breast cancer patients diagnosed early survive for over five years, compared to 15% of those diagnosed later. Already, an algorithm developed by researchers from Google and Imperial College London reportedly ‘outperformed’ six radiologists in reading mammograms. Just last week, Israeli start-up Zebra Medical Vision also gained FDA approval for its mammography screening AI algorithm; this could help counter COVID-19-related delays and cancellations in screening. Early detection is equally important in skin cancer. Not only are studies yet again finding that AI can outperform dermatologists, but some companies are bringing these algorithms into our pockets. SkinScan and SkinVision are two CE-marked apps that can supposedly detect skin cancer risk and provide corresponding medical advice purely based on smartphone pictures; SkinVision was even integrated into the NHS in 2019 and prides itself on having helped to detect over 40,000 cancer cases. While it is vital to note that these apps have yet to achieve the accuracy needed to be relied upon alone, they can still assist early detection.
AI is also making strides in less pattern-centric fields, most notably cardiology. A study using data from 295,000 patients found that its deep neural network could integrate 30 different risk factors to predict 7.6% more cardiovascular events than the established American College of Cardiology/American Heart Association method. Based on the sample size, this is equivalent to 355 additional patients who would otherwise have been missed. More recently, algorithms have reportedly processed multivariable information such as demographics, symptoms, medical histories, electrocardiography and echocardiography to help predict heart failure.
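For the curious, the basic logic of combining many risk factors into a single prediction can be sketched in a few lines. The sketch below is purely illustrative: the factors, weights and bias are invented for this example, whereas the study’s deep neural network learned its parameters from hundreds of thousands of patient records.

```python
import math

# Illustrative sketch only: a logistic model combining several risk factors
# into one event probability. The weights and bias below are invented for
# demonstration, not taken from any published cardiovascular model.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.9, "bmi": 0.05}
BIAS = -8.0

def cardiovascular_risk(patient):
    """Return a probability in [0, 1] from a weighted sum of risk factors."""
    score = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic (sigmoid) link

low = {"age": 35, "systolic_bp": 115, "smoker": 0, "bmi": 22}
high = {"age": 68, "systolic_bp": 160, "smoker": 1, "bmi": 31}
print(f"{cardiovascular_risk(low):.2f} vs {cardiovascular_risk(high):.2f}")
```

A real clinical model would use far more inputs and validated, learned weights; the point here is only that each factor nudges a single score that is then squashed into a probability.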
While it is exciting to see such developments in research, what most of us are more interested in is how AI is put into practice. Cardiology is presenting the first signs of publicly-oriented healthcare AI. The Apple Watch Series 4 has an FDA-cleared in-built system that monitors electrocardiograms and alerts users to atrial fibrillation, a fast, irregular heart rhythm that underlies stroke and heart failure. Start-up AliveCor takes this a step further; by simply holding a few fingers down on its KardiaMobile device, anyone can produce an electrocardiogram report within 30 seconds on their iOS or Android phones. In addition to atrial fibrillation, it detects bradycardia (abnormally low heart rate) and tachycardia (abnormally high heart rate). By subscribing to KardiaCare, customers can also receive reviews by certified cardiologists and get monthly reports that can be readily shared with physicians. These developments are empowering, as they encourage people to be better informed about their personal health. For doctors, they facilitate the collection of real-time, real-world data about patients, paving the way for remote monitoring as well as benefiting diagnosis and decision-making beyond cardiology.
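The bradycardia/tachycardia part of such detection is conceptually simple, and can be sketched as follows. This is a toy illustration using the standard clinical cut-offs (below 60 beats per minute and above 100), not AliveCor’s or Apple’s actual algorithm, which must first extract heartbeats from a noisy electrical signal.

```python
# Toy sketch: classify heart rhythm from R-R intervals (the gaps between
# successive heartbeats, in milliseconds). Real devices must first detect
# those beats from raw ECG voltage, which is the hard part.

def classify_heart_rate(rr_intervals_ms):
    """Estimate heart rate from R-R intervals and label the rhythm."""
    if not rr_intervals_ms:
        raise ValueError("need at least one R-R interval")
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    bpm = 60_000 / mean_rr  # 60,000 ms per minute
    if bpm < 60:
        label = "bradycardia"
    elif bpm > 100:
        label = "tachycardia"
    else:
        label = "normal"
    return round(bpm), label

print(classify_heart_rate([800, 820, 790]))  # roughly 75 bpm, normal
print(classify_heart_rate([1200, 1150]))     # roughly 51 bpm, bradycardia
```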
AI to Tackle the Global Mental Health Crisis
It seems deeply counterintuitive to use AI in understanding mental health — until we consider the wealth of emotional, cognitive, social and behavioural data that is stored on our phones. This is especially relevant as mental health problems disproportionately affect teenagers and young adults, who are most attached to devices. Digital phenotyping refers to the collection of ‘moment-to-moment’ digital data about an individual to quantify their state of health. Typing speed, scrolling latency, voice analyses and social media activity are just some examples of data that can be readily collected through our phones and used by AI to diagnose mental health conditions. In one study, an algorithm applied to 43,950 Instagram photos from 166 people was able to distinguish between people with and without a history of depression purely based on markers such as colours, brightness, facial recognition, comments and likes on posts. Others have successfully predicted marital discord from analyses of voice pitch, volume and prosody, as well as psychosis from muddled speech and word choice.
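To make digital phenotyping concrete, here is a minimal, hypothetical sketch of extracting one behavioural marker, typing rhythm, from keystroke timestamps. The feature names and the idea of comparing “steady” with “erratic” typing are illustrative assumptions; real systems combine many such signals before any diagnosis is attempted.

```python
import statistics

# Hypothetical digital-phenotyping sketch: derive simple behavioural markers
# from keystroke timestamps (in seconds). Illustrative only; real platforms
# fuse many signals (typing, scrolling, voice, activity) over weeks of data.

def keystroke_features(timestamps):
    """Return the mean inter-key latency and its variability."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_latency": statistics.mean(gaps),
        "latency_sd": statistics.stdev(gaps) if len(gaps) > 1 else 0.0,
    }

steady = keystroke_features([0.0, 0.2, 0.4, 0.6, 0.8])
erratic = keystroke_features([0.0, 0.1, 0.9, 1.0, 2.5])
print(steady)
print(erratic)
```

Features like these would then be fed, alongside many others, into a trained classifier; no single marker is diagnostic on its own.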
Startup Mindstrong is putting all this into practice. Their app remotely monitors mood changes and depressive symptoms by passively collecting data such as typing, swiping and scrolling patterns. To further this, they virtually connect users to therapists and psychiatrists. In the age of COVID-19, which has seen numerous mental health consequences, the company secured funds of $100 million. Another company utilising digital phenotyping is Facebook, which has its own dedicated AI algorithm that analyses posts to issue suicide alerts.
The next step is to coalesce these algorithms, each attuned to specific ‘biomarkers’, into a single platform for all mental health needs. There is also great interest in integrating data from sleep- and fitness-tracking wearables, given the widely accepted psychological effects of sleep and exercise. Other companies are looking to use AI to treat mental health conditions by digitalising Cognitive Behavioural Therapy (CBT); Woebot, a chatbot app developed by Stanford psychologists and AI experts, is one example, with total funding of $8 million. In the face of the current pandemic, the use of CBT apps is soaring.
While still in the early stages of application, and given the current shortage of psychiatrists, AI may prove revolutionary for mental health.
Thinking Beyond Doctors
When we think of AI in healthcare, we intuitively think only of how it affects the work of doctors. However, healthcare is so much more than diagnosis and treatment, and we are already seeing AI infiltrate other areas.
Drug discovery is critical to advancing medicine, yet it currently takes over a decade and roughly $2.6 billion to develop a single drug for the market. By mining biomedical literature, screening molecular libraries, predicting toxicity and analysing cellular assays, AI can dramatically enhance the drug discovery process. Many pharmaceutical companies, desperate for faster and cheaper routes, are already embracing this movement; by 2018, 16 pharmaceutical companies and over 60 start-ups were using AI for drug discovery. London-based BenevolentAI is one notable example. Considering that 95% of all discovered molecules fail in clinical trials, the start-up innovatively uses AI to tap into otherwise discarded data. By integrating information from clinical trials with peer-reviewed biomedical research, BenevolentAI practices “unbiased hypothesis” research, flexibly exploring the millions of relationships between genes, targets, diseases, proteins and chemicals to discover new applications for even once-failed drugs. So far, their work has focused on ALS, Alzheimer’s and, of course, COVID-19. Other companies are focusing on using AI-based simulations to replace clinical trials and animal testing altogether.
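The idea of exploring relationships between drugs, targets and diseases can be pictured as walking a knowledge graph. Below is a deliberately tiny, invented example (the entities and relations are made up, and this is not BenevolentAI’s actual system): a drug becomes a repurposing candidate for a disease if it acts on a protein implicated in that disease.

```python
# Toy knowledge graph for drug repurposing. All entities and relations here
# are invented for illustration; real graphs hold millions of such edges
# mined from literature and trial data.
EDGES = [
    ("drug_A", "inhibits", "protein_X"),
    ("drug_B", "inhibits", "protein_Y"),
    ("protein_X", "implicated_in", "disease_1"),
    ("protein_Y", "implicated_in", "disease_2"),
    ("protein_X", "implicated_in", "disease_2"),
]

def repurposing_candidates(disease):
    """Drugs that inhibit any protein implicated in the given disease."""
    targets = {s for s, r, o in EDGES if r == "implicated_in" and o == disease}
    return sorted(s for s, r, o in EDGES if r == "inhibits" and o in targets)

print(repurposing_candidates("disease_2"))  # both drugs act on a relevant target
```

The real systems rank such candidates with learned models rather than a simple set intersection, but the graph-walking intuition is the same.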
We can also reimagine how hospital systems work through AI. Hospital inefficiencies are almost universal; currently, a considerable amount of capital is spent on manually performing repetitive, error-prone tasks. Olive is one company tackling this through AI. Used in over 500 hospitals in the US alone, its algorithm can work 24/7 across departments such as supply chain, clinical administration, human resources and revenue cycle. Instead of blindly performing tasks, however, Olive’s AI can evaluate patterns in past performance to guide decision-making and produce predictive models of hospital operations. Qventus is another leader in this field, focusing on patient flow; from emergency rooms to hospital beds, its AI conducts real-time analytics of who is where in the hospital to ensure that lengths of stay and congestion are minimised and that staffing meets patient demand. Amidst COVID-19-induced bottlenecks, both Qventus and Olive have received new publicity, funding and partnerships.
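The bookkeeping behind patient flow can be sketched very simply. The class below is a hypothetical illustration of the kind of occupancy tracking such platforms automate (the names, 90% congestion threshold and queueing rule are assumptions, not any vendor’s logic); the commercial systems layer predictive analytics on top of this bookkeeping.

```python
from collections import deque

# Hypothetical sketch of patient-flow bookkeeping: track bed occupancy,
# queue patients when the ward is full, and flag congestion above a
# utilisation threshold. Illustrative only, not any vendor's algorithm.
class WardTracker:
    def __init__(self, beds, alert_at=0.9):
        self.beds = beds
        self.alert_at = alert_at
        self.occupied = set()
        self.waiting = deque()

    def admit(self, patient_id):
        if len(self.occupied) < self.beds:
            self.occupied.add(patient_id)
        else:
            self.waiting.append(patient_id)  # no free bed: queue in the ED

    def discharge(self, patient_id):
        self.occupied.discard(patient_id)
        if self.waiting:  # backfill the freed bed from the queue
            self.occupied.add(self.waiting.popleft())

    def congested(self):
        return len(self.occupied) / self.beds >= self.alert_at
```

A usage example: with a two-bed ward, admitting a third patient queues them, and a discharge automatically backfills the freed bed from the queue.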
So What Now?
The application of AI in healthcare is seemingly limitless, and we are only beginning to scratch the surface of what it has to offer. While it is difficult to predict when we will see wider acceptance of AI in healthcare, it seems only a matter of time. Yet key questions remain. What does this mean for doctors? What are the ethical implications of relying on machines? Must we compromise the privacy and security of patients? Like all developments, AI in healthcare comes with a dark side, which will be discussed in the next article.