Of the 7.5 billion people in the world, 2 billion will experience mental health issues. In developed countries, only 50% of those who need help will receive it, and this figure drops sharply in developing countries. A mental health treatment gap therefore exists in developed and developing countries alike.
This is problematic for achieving the 2030 Agenda. Speaking at an event highlighting the potential role of artificial intelligence (AI) and technology tools in addressing the mental health treatment gap, Mr. Thomas Gass (Assistant Secretary General for Policy Coordination and Inter-Agency Affairs, UNDESA) reiterated that, under the 2030 Agenda, development is not sustainable if any group is left behind.
It is for this reason that the 2030 Agenda and the 17 Sustainable Development Goals (SDGs) are people-centric. In a historic move, mental health and well-being were explicitly addressed in SDG 3. According to Dr. Caleb Otto (Public Health Physician and Mental Health Advocate), if we address the treatment gap, we will achieve the SDGs.
Can AI help to close this gap? And if so, what are the messages for policymakers regarding the advantages and the risks?
Dr. David Luxton (University of Washington School of Medicine) described an AI care provider as a machine that performs activities normally requiring human intelligence, such as tracking human emotions and facial expressions and making health assessments. Such AI care providers could help address the treatment gap because they can adapt to patient needs and can be tailored to different cultures and genders. According to Dr. Luxton, they may also help overcome the stigma attached to seeking mental health support, with some patients preferring to talk to an AI rather than to a human care provider.
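To make the idea of automated assessment concrete, the sketch below shows a deliberately simplistic, keyword-based triage routine in Python. It is not how the systems Dr. Luxton describes actually work (real AI care providers use trained models over speech, text and facial cues); every keyword list, threshold and label here is a hypothetical placeholder, offered only to illustrate the shape of the task.

```python
# Purely illustrative sketch of automated triage, not a real AI care provider.
# All keyword lists, thresholds and labels are invented for illustration.

RISK_KEYWORDS = {"hopeless", "worthless", "self-harm", "suicide"}
LOW_MOOD_KEYWORDS = {"sad", "tired", "anxious", "lonely", "depressed"}

def assess_message(message: str) -> str:
    """Return a coarse triage label for a single patient message."""
    words = set(message.lower().split())
    if words & RISK_KEYWORDS:
        # A real system would have to escalate to a human clinician here;
        # who is liable for a missed escalation is an open policy question.
        return "escalate-to-human"
    if len(words & LOW_MOOD_KEYWORDS) >= 2:
        return "follow-up-suggested"
    return "routine"

if __name__ == "__main__":
    print(assess_message("I feel sad and lonely every day"))  # follow-up-suggested
    print(assess_message("Everything feels hopeless"))        # escalate-to-human
```

Even this toy example surfaces the policy questions raised at the event: the escalation branch is exactly where questions of liability and human oversight arise.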
Lebanon has pioneered the integration of technology into mental health prevention, promotion and treatment. Its Electronic Health (e-Health) programme is an online tool that people can use on their own to help cope with depression, targeting those who cannot afford to see a doctor and those who are too embarrassed to ask for help.
However, such AI care providers have limitations. Speech processing is not yet perfect, and large samples of users are needed to make improvements. If a patient indicates they are going to harm themselves, who is liable? There are also cases of people forming emotional attachments to AI: what happens if the connection is lost?
Ethical issues must also be considered. Are AI care providers culturally informed and culturally competent? There are also major concerns surrounding data privacy and security, as well as accessibility. Will AI care providers be available in regions of the world where people do not have access to technology? This consideration was emphasised by Dr. Astrid Hurley (Division for Social Policy and Development, UNDESA), who stated that if we do not address accessibility, the treatment gap will only widen. Ensuring access has to be at the centre of discussions on technology and mental health.
Source: UNDESA DSPD