
AI is revolutionising therapeutic treatment. What are the implications?
Writer: Tasha Kleeman
Editor: Dan Jacobson
Artist: Elena Kayayan
Ellie is a therapist who works with US veterans. She talks to her patients about their military experiences, providing a therapeutic space for them to explore their trauma, whilst detecting signs of PTSD. However, Ellie is no ordinary therapist: she’s a humanoid robot, designed by researchers at the University of Southern California’s Institute for Creative Technologies. Using sophisticated MultiSense processing and video monitoring, Ellie can detect verbal and behavioural indicators of psychological stress, and can respond in real time with her own computer-generated speech and gestures.
Ellie represents the most recent leap in Artificial Intelligence (AI), but she isn’t the first of her kind. Cognitive behavioural therapy (CBT) chatbots like Tess, Wysa and Woebot have been offering virtual therapy to online users for several years now, with promising results. Initial studies saw a decrease in symptoms of depression in Woebot users compared to a self-help control group, while another study found reduced symptoms of depression and anxiety in users of Tess.
Embodied AI, like Ellie, represents the next phase of this blossoming technology. Already, animal-like robots are being developed to assist patients with dementia, while robots like Kaspar and Nao have been designed to help children with Autistic Spectrum Disorders (ASDs) practise social interaction. With the development of Emotion AI, enabling machines to detect and respond to the nuances of human emotion, these interventions are likely to become increasingly sophisticated and could eventually enter mainstream therapeutic practice.
With mental health services vastly oversubscribed and under-resourced, AI-powered psychiatric care could provide an efficient solution, overcoming the financial and geographical barriers that prevent many from accessing therapy. In some cases, robot therapy might even prove more effective: several studies have found that participants open up more quickly to robot therapists, given the reduced risk of social judgement.
Yet the prospect of AI therapy isn’t without its concerns. The first comprehensive study into the use of embodied AI in the treatment of mental illness was conducted this year at the Technical University of Munich. While researchers drew attention to the enormous potential benefits of AI in this field, they also raised some substantial ethical considerations.
AI is still in its infancy, and we don’t yet know the full effects of human interaction with robots. What little we do know is enough to raise significant concerns, given the vulnerability of those who would be engaging with robot therapists. Humans can, for instance, form strong attachments to robots, and the ‘uncanny valley’ phenomenon means that androids which closely resemble humans are perceived as deeply unsettling. Beyond this, the widespread use of AI therapy would pose significant challenges for data protection and privacy, while requiring complex and stringent procedures to ensure that machines are programmed without the biases inherent in human interaction.
On a more practical level, it seems difficult to conceive of a robot that could fully match the capabilities of a human therapist. Computerised therapy may work for CBT-based treatments or for basic diagnostic purposes, but could a machine learn the complex mechanisms of psychoanalysis? Such questions go to the heart of our hopes and fears about AI. Could we ever create AI that successfully emulates humanity’s capacity for empathy, social interaction and autonomous thought? And, perhaps most unsettlingly, what would it mean for our species if we did?