If you were having a bad day and your friends were busy, would you connect with a digital buddy? Maybe you do already?

If you noticed warning signs that your mental health was deteriorating – perhaps you’re sleeping badly and finding it hard to get motivated to do anything – would you dial up a virtual therapist?

If you found therapy with a human therapist helpful, but then needed more sessions and that therapist wasn’t available, would you consider some “top up” sessions with a digital therapist?

If you are worried about a colleague’s mental health at work, would you ask a digital psychologist for advice on how to support them?


With a global rise in demand for mental health and wellbeing support, the challenge of how best to respond to this increase has become more urgent. While demand has risen, increasing the capacity of skilled mental health professionals to meet it has proven more difficult – it takes time, resources and the right people to grow the mental health workforce.

To bridge the gap between demand and supply, digital mental health solutions have been welcomed, alongside significant investment. These include mental health apps, giving you tips on maintaining wellbeing and managing distress; trackers to monitor mood and sleep; online support groups and virtual networks; and phone or internet access to human therapists in real time. Virtual reality therapy is also advancing. These digital innovations have shown that healthcare can be delivered in a non-intrusive, scalable, accessible, and cost-effective way. Whilst these innovations are important and hopeful, concerns have also been raised about their effectiveness – do they deliver what they promise, and are they actually helpful for people? It is reassuring, therefore, to see researchers identifying the key factors that make some solutions more effective than others.

And now, artificial intelligence (AI) is showing promise as another potential solution, with this quote from a 2023 New Yorker article summarising it succinctly:

The worlds of psychiatry, therapy, computer science, and consumer technology are converging: increasingly, we soothe ourselves with our devices, while programmers, psychiatrists, and startup founders design A.I. systems that analyze medical records and therapy sessions in hopes of diagnosing, treating, and even predicting mental illness.

From what we know at this point in time, what are the advantages and potential downsides of using AI to improve mental health?


Advantages: 

Accessibility. AI-based mental health treatments are available to anyone (with an internet connection), from anywhere in the world, at any time. This can be especially useful for people who live in remote areas, or who may not have access to traditional mental health services.

With you all the time. Similarly, AI support can follow us around through our days, and we can use it as often as we wish with no limits on use. From our experience as psychologists, we know that mental health treatments are most effective when they are delivered in small quantities many times over. Most of us don’t have access to a human therapist “on tap” but we can talk with an AI one every hour if we want to. 

Tailored to you. Because AI tools learn from real-time feedback, they can easily and quickly tailor support to individual needs, preferences and characteristics. This tailoring is likely to improve effectiveness and increase future engagement with the support suggestions the AI tool makes.

Early detection of warning signs. AI is being trialled in various research studies to identify early warning signs of distress, such as changes in behaviour, mood or sleep patterns. This can allow for faster support responses and potential treatments, possibly preventing more severe mental health problems from developing, or at least putting specific support in place to reduce the severity of the distress.

These advantages sound good, don’t they? So, what are the potential downsides we need to hold in mind?


Disadvantages:

Lack of human connection. AI-based mental health treatments lack the authentic human connection that we know from psychological research is often the key ingredient of effective therapy. Feeling a lack of connection may make it more difficult for some people to engage with treatment, or to feel heard and understood.

Limited effectiveness. While mental health apps can be effective for some people, they do not work for everyone, and that is also likely to be the case as AI-based “therapy” starts being offered. Some people may require more wrap-around support, such as live-in care, or, as noted above, may respond better to connection with a human health professional.

Privacy concerns. Many of the AI systems that are live today in the mental health space are collecting very sensitive personal information. This raises privacy concerns that need to be managed well. There is also the risk of data breaches, which could expose that information.

Ethical considerations. Along with privacy concerns come significant ethical considerations. The World Health Organization released a guidance document outlining six key principles for the ethical use of AI in health. The report is the first consensus on AI ethics within a healthcare environment and concludes: “With their first and second principles – protecting autonomy and promoting human safety – the WHO emphasises that AI should never be the provider of health care.”

Algorithmic bias. AI interventions are only as good as the data they are based on (via the machine-learning programmes on which they are built). As with any learning, machine learning can embed bias related to race, gender, age, rurality, income and other factors we may not be aware of yet, making algorithms less effective for more diverse groups of people. Again, the WHO’s fifth ethical principle for the use of AI applies: “companies should hire employees from diverse backgrounds, cultures and disciplines to develop, monitor and deploy AI”. This helps to make sure AI interventions are as relevant to as many different people as possible.


What next?

In summary, then, there are some very promising advantages to using AI to support mental health, and some concerning caveats. “Clear as mud”, as we sometimes say when reviewing best-practice research!

Some AI-based mental health interventions have been shown to be effective in research studies, but most have not yet been extensively studied. So, while we wait for more data from research studies to guide our use of AI, we offer these recommendations for judging safety and effectiveness.

  1. Use with caution. As we do before setting up any human mental health support, we need to make sure all AI tools are used with caution and responsibility. Data privacy protections must be transparent and effective and, if there are doubts, it’s better to find a different tool.
  2. Monitor effectiveness. Again, as we do with human support interventions, it’s essential that we encourage anyone using AI tools to assess whether, and how much, the support is improving their symptoms or alleviating their distress. Over a few days or a week, is this tool helping or hindering this person’s mental health? If it’s helping, then it’s all good to continue. However, if there’s any concern that it’s hindering, we would advocate discontinuing and finding a better alternative.
  3. Discontinue right away if symptoms worsen. We are repeating ourselves deliberately here, as this point cannot be overstated. Just as we hold the “do no harm” bottom line with human mental health support, we must ensure the same for any AI interventions. We recommend stopping use of the tool right away and tagging in human psychological support.

AI will inevitably become a cornerstone of the next digital revolution and, because of this, at Umbrella we embrace the benefits of technology to enhance our human support offerings. We suggest that AI is unlikely to replace human support, but it’s important to be aware of what technology can do differently – engaging people in ways, and at times, that clinicians can’t.

We know from carrying out Umbrella research that the three key factors in effective digital mental health solutions are that they be integrated, guided and social. Drawing on this research, we use technology to enhance our human support offerings, ensuring they are engaging and effective in promoting behaviour change.

Contact us today so we can help you weigh up the best digital and human support services to provide for your people.