AI To Reduce Mental Health Stigma
But is insufficient training data hindering access to tech in Egypt?
What do ChatGPT and Amina Khalil have in common?
Both can play any role, both can entertain an audience, but most importantly, both have the power to spark important conversations about mental health in Egypt and the greater MENA region.
Attitudes towards mental illness in Egypt are often tangled in misconceptions and cultural taboos. Mental illnesses are frequently dismissed as a sign of weak character or lack of discipline, with some even denying their existence entirely. For those who do acknowledge mental health struggles, shame and embarrassment often follow close behind. These deep-seated views add to the already formidable barriers to accessing professional mental healthcare, leaving many to suffer in silence. Studies suggest that around 17 million people across Egypt struggle with mental health disorders. Yet, despite these challenges, there are promising shifts taking place in the region. From public health campaigns to popular media and television to digital mental health services, various initiatives are sparking important conversations and aiming to shift the narratives surrounding mental illness.
Across the world, researchers are harnessing the various capabilities of Artificial Intelligence (AI) to tackle problems throughout the mental healthcare field, including problems with stigma and accessibility. As AI expands its reach into the MENA region, could it become a useful tool for shifting the complex attitudes around mental health? Or do the drawbacks outweigh the benefits?
When assessing the potential consequences of mental health technology in Egypt, it's important to first understand the current climate. Across both the public and private sectors, mental health is becoming an important topic. In a recent announcement, the Minister of Health, Dr Khaled Abdel Ghaffar, drew attention to the rising rates of mental illness in Egypt over recent years and emphasized the importance of changing our attitudes towards these illnesses. Government-supported initiatives, such as a new committee to streamline licensing for mental health practitioners and a new website from the General Secretariat of Mental Health and Addiction, aim to expand access to mental health resources.
In the private sector, many initiatives hope to use digital solutions to increase accessibility and encourage the prioritization of mental health; companies like Shezlong and O7 Therapy provide quality online therapy in Arabic, hoping to bridge the physical barriers to access in Egypt. Additionally, popular media like the hit Ramadan TV show “Khaly Balak men Zizi” and the recent “Hala Khassa” both featured a main character with a mental illness. In a country where Ramadan TV shows, thanks to their exceptionally high viewership, likely raise more awareness than any public health campaign could ever dream of, these shows represented big steps towards cultural awareness and acceptance. Similarly, the podcast ‘Asrar Al Nafs’, founded by EMPWR House, discusses various mental health topics and hopes to normalize mental health struggles for its Egyptian listeners. Clearly, conversations are starting - the question now is, how do we get to the next step?
Now you may be wondering - what on earth does AI have to do with any of this? That would be a very understandable question. Given AI’s roots in mathematics and logic, it often seems far-fetched that it could have implications for such a sensitive and uniquely human field. However, many see AI as a potential answer to the mental health crisis. After all, across its various forms, AI possesses many characteristics that position it as a useful tool for mental health services. Focusing on the issue of stigma and awareness, one widely implemented use case is conversational large language models (LLMs), like ChatGPT. Because LLMs like GPT are trained on enormous amounts of data from all over the internet, they gather a wealth of information about a vast range of topics - including mental health. This alone makes them a valuable resource for information. Coupled with conversational abilities, ChatGPT functions as an encyclopedia that takes follow-up questions, much as a knowledgeable friend would in conversation. It can also sometimes connect you directly with resources - like helplines or emergency services - although this depends on the region.
Furthermore, these conversational agents are often entirely private and anonymous, giving users a space to ask questions free from judgment. In this way, conversational LLMs create a safe, private space for individuals to ask and learn about mental health from the comfort of their own homes. In fact, a quick conversation I had with ChatGPT about mental health disorders (in Egyptian Arabic!) produced an informative response about what mental health disorders are and the steps towards getting help if you or a loved one are struggling. ChatGPT even added an encouraging message about how reaching out for help is a sign of bravery, showing a nuanced understanding of attitudes towards mental health in Egypt and the MENA region.
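For the technically curious, a conversation like this takes only a few lines to set up programmatically. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and prompt wording are illustrative choices on my part, not a deployed product:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt: steer towards plain-language psychoeducation.
SYSTEM_PROMPT = (
    "You are a psychoeducation assistant for Egyptian users. "
    "Answer in Egyptian Arabic, explain mental health concepts in plain "
    "language, encourage seeking professional help, and never diagnose "
    "or prescribe treatment."
)

def ask(question: str) -> str:
    """Send one user question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# "What are mental health disorders?"
print(ask("إيه هي الاضطرابات النفسية؟"))
```

The system prompt is doing the safety-relevant work here: it nudges the model towards plain-language education and away from diagnosis or prescription, which is exactly the role such a tool should stay within.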
However, this kind of technology comes with many potential risks and drawbacks - some of which are unique to the MENA region. Because LLMs depend entirely on the data they are trained on, an LLM’s strength in a given language largely tracks how much data exists in that language. Unfortunately, far more content exists on the internet in English than in any other language, including Arabic: sources show that, as of 2024, more than 50% of internet content is in English, while only 0.6% is in Arabic. Consequently, LLMs often perform much worse in Arabic than in English. Some companies, like Jais AI, are working to create LLMs trained specifically on Arabic data that perform better in Arabic, but these companies often focus on enterprise solutions and region-specific applications, and their models are not accessible to the public through web-based conversational interfaces like ChatGPT.
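Until Arabic-centric models reach the public, one partial workaround for developers is language-aware routing: detect Arabic input and send it to whichever Arabic-tuned model a service can access. A rough sketch, using the real langdetect package but placeholder model names of my own invention:

```python
from langdetect import detect  # pip install langdetect

# Placeholder model identifiers - swap in whatever is actually available.
ARABIC_TUNED_MODEL = "arabic-tuned-llm"
DEFAULT_MODEL = "general-purpose-llm"

def choose_model(user_message: str) -> str:
    """Route Arabic messages to a model assumed to be stronger in Arabic."""
    try:
        language = detect(user_message)
    except Exception:  # langdetect raises on empty or ambiguous input
        language = "unknown"
    return ARABIC_TUNED_MODEL if language == "ar" else DEFAULT_MODEL

print(choose_model("إزاي أساعد صاحبي اللي بيعاني؟"))  # -> arabic-tuned-llm
```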
The data problem becomes even more dire in the field of mental health. Because mental healthcare is such a sensitive field, AI implementations must be trained to properly support individuals in crisis. Although safety ‘guard rails’ are almost always implemented to prevent harm within these LLMs, some cases may fall through the cracks - especially in a language the LLM is weaker in. Although my Arabic conversation with ChatGPT about mental health began fruitfully, it wasn’t long before it provided me with the wrong suicide hotline for Egypt, a critical mistake that could be fatal.
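Mistakes like this argue for keeping safety-critical facts out of the model’s hands entirely. A common guardrail pattern, sketched below with placeholder numbers and an illustrative keyword list, is to store hotlines in a human-verified table and attach them in code, rather than trusting whatever number the model generates:

```python
# Hotline entries must come from an authoritative, human-verified source;
# the values below are placeholders, and the keyword list is illustrative.
VERIFIED_HOTLINES = {
    "EG": "<verified Egyptian mental health hotline>",
}

CRISIS_KEYWORDS = ["suicide", "hotline", "انتحار", "خط نجدة"]

def attach_verified_hotline(model_reply: str, country_code: str) -> str:
    """Append the curated hotline whenever a reply touches on crisis topics."""
    reply_lower = model_reply.lower()
    if any(keyword in reply_lower for keyword in CRISIS_KEYWORDS):
        hotline = VERIFIED_HOTLINES.get(country_code)
        if hotline:
            return f"{model_reply}\n\nVerified hotline ({country_code}): {hotline}"
    return model_reply
```

The design choice matters: the model can still explain and reassure in its own words, but the number a person in crisis actually dials never depends on the model getting a fact right.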
Another potentially harmful consequence of this implementation is its potential to exacerbate existing gaps in access to resources, especially mental health resources. Access to a psychologist or therapist is already difficult in Egypt, and most private practices and mental health hospitals charge large fees, usually beyond what the average Egyptian can afford. As such, quality mental healthcare is already a privilege available only to those who can pay for it. Since most LLMs are easily reachable through an internet browser, they are far more accessible to the wider population; it is believed that around 70% of the population of urban Cairo has access to a computer. However, this excludes the very large population of Egyptians living in rural areas, and does not account for the fact that even those who have access to a computer may not have a private one where they feel comfortable engaging in sensitive conversations. Additionally, LLMs are a resource that many Egyptians are unaware of and unfamiliar with, as AI literacy in Egypt remains low. Consequently, it remains likely that the only people who are both willing and able to use these AI tools are those already familiar with AI, which creates an even bigger socioeconomic gap in knowledge and access to resources.
Lastly, the implementation of AI in mental health poses many general risks. It is important to note that in their current state, LLMs should not replace therapists or psychiatrists, only supplement them or provide basic information. An AI chatbot is not trained to care for humans - especially not humans in crisis - nor is it trained to prescribe treatments or medication the way a professional is. It is important that users understand how they should and should not be using such technology. Issues can also arise with data and privacy, especially in the case of sensitive mental health data. Finally, in cases of severe harm or misinformation, questions often arise about accountability and blame. However, all of these issues can be addressed with the correct safety frameworks and AI education.
Although a number of barriers currently stand in the way, I believe that the future of AI in the region is bright. From national projects like the ‘Hayah Karima’ digital literacy initiative to smaller companies like Synapse Analytics’ bilingual AI literacy efforts, groups are taking big leaps to educate the public on everything AI - from its advantages to its dangers. Big socioeconomic gaps still exist in knowledge and resources, but the conversations about AI have already started, and it won’t be long before these tools become further integrated into society. As for AI in mental health, implementations of this kind of technology need to be highly regulated with safety mechanisms, and efforts should go towards creating more Arabic data to train them on. Additionally, AI education could be an incredibly valuable tool in schools, so that younger generations are aware of the tools available to them. At this moment, ChatGPT is publicly available and can be used as a psychoeducation tool; however, before this use is encouraged and implemented widely, the above guidelines should be put in place to ensure equitable, safe, and healthy access to mental health resources for those in need.