
Why safer GenAI starts with solutions made by youth, for youth

Person using generative AI (GenAI) on a laptop.

“Youth have a much deeper understanding of the phenomena they and their friends are experiencing... If we want to help youth, we have to listen to them.” 

Dr. Simona Gandrabur, Head
Mila AI Safety Studio 

The rise of generative artificial intelligence (GenAI) has brought tremendous promise to society, with applications that can improve productivity, enhance learning and spark creativity. But it also brings risk, especially for youth.1 Finding ways to mitigate that risk – and ensure GenAI can benefit everybody – is the mission behind Mila, a Montreal-based institute recognized globally for its leadership in AI research.


Mila’s mandate 

Mila brings together more than 1,400 experts from across Québec who specialize in machine learning and GenAI modelling. Since its inception, the institute has focused its research on areas like health, climate change and AI ethics. It also works with more than 120 industry partners, including Bell, on applied research to develop AI solutions tailored to specific business needs. 

Mila’s goal is to deliver advances in AI that benefit all of society. That also means making AI safer for users. That’s why, to mark World Mental Health Day in 2025, Mila launched the AI Safety Studio. Through this initiative, academic research is being transformed into tangible safeguards for people facing mental health challenges caused or exacerbated by interacting with GenAI chatbots. 


The new focus for GenAI: therapy and companionship 

Dr. Simona Gandrabur heads up the Mila AI Safety Studio. She says its mission is especially critical in light of recent findings from Harvard Business Review. Research shows that therapy and companionship are now the top use cases for GenAI, surpassing learning, research and other productivity-related applications. People are using GenAI to self-diagnose mental health challenges and to ask for relationship or life advice; some are even developing romantic relationships with chatbots. 

“This use case is extremely troublesome,” says Gandrabur. “There are serious risks that need to be tackled with a combination of technological guardrails, policy, education and input from users.” 


Why GenAI therapists? 

Gandrabur says it’s not surprising that many people – and youth in particular – are turning to GenAI for companionship and mental health support. For one thing, it’s extremely accessible. Unlike a human therapist, a GenAI chatbot has no wait times – and it’s often free to use. You can interact with it discreetly from your home or anywhere else. And there’s no fear of rejection, judgment or criticism. 

“These chatbots are always available; they can validate your beliefs and make you feel better – which creates an illusion of trust. But that’s exactly the problem.” 


Mental health risks 

Although a chatbot might seem to have all the answers, the reality is that it doesn’t actually “know” anything. All it can do is predict which response is most likely to come next, based on the data it was trained on. For today’s GenAI models, that training data effectively includes significant swaths of the internet. That gives a chatbot a massive range of viewpoints and information from which to formulate its responses – some helpful, others less so. 
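
To make this concrete, here is a minimal, purely illustrative sketch of the prediction loop at the heart of a GenAI chatbot (toy data and probabilities invented for this example, not any real model): at each step, the system simply samples a likely continuation from patterns learned in its training data, with no check on whether the result is true or safe.

```python
import random

# Toy "language model": continuation probabilities learned from training
# data, not facts the system knows to be true. (Illustrative numbers only.)
NEXT_WORD_PROBS = {
    ("you", "should"): {"talk": 0.4, "try": 0.35, "ignore": 0.25},
    ("should", "talk"): {"to": 1.0},
    ("talk", "to"): {"someone": 0.6, "a": 0.4},
}

def generate(prompt_words, steps=4):
    """Sample likely continuations; the model never verifies truth or safety."""
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = NEXT_WORD_PROBS.get(context)
        if dist is None:  # no learned pattern for this context
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["you", "should"]))  # e.g. "you should talk to someone"
```

Real models work with billions of learned patterns rather than a handful, but the principle is the same: the output is a statistical continuation, not verified knowledge.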

In many cases, this leads to useful advice. But Gandrabur warns there’s potential risk if a chatbot hasn’t been trained on datasets specific to mental health best practices – and if its advice hasn’t been vetted by a professional. 


Engagement as a goal 

She says the way GenAI chatbots are typically developed contributes to this risk. While providing accurate and helpful information is a goal of most chatbots, it’s often secondary to their primary purpose: engagement. Developers want people to interact with their chatbots as long and as often as possible. To achieve that, chatbots prioritize delivering answers their users will like and want to hear.  
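
As a rough sketch of that incentive (assumed names and numbers only, not any vendor’s actual training code), the snippet below shows how an objective that scores replies solely on predicted user approval will favour an agreeable answer over a more accurate but challenging one:

```python
# Toy illustration: candidate replies with hypothetical signal scores.
candidates = {
    "You're absolutely right; everyone else is the problem.": {"approval": 0.9, "accuracy": 0.2},
    "It might help to consider the other person's perspective.": {"approval": 0.4, "accuracy": 0.8},
}

def engagement_score(signals):
    # Accuracy never enters the objective – only predicted approval counts.
    return signals["approval"]

best = max(candidates, key=lambda reply: engagement_score(candidates[reply]))
print(best)  # the validating reply wins
```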

Gandrabur says this can produce an “individualized echo chamber” where a user’s thoughts are never challenged. Over time, users come to rely more and more on this positive feedback and easy, always-available interaction – instead of turning to friends, family or mental health professionals. This can lead to further isolation and amplify psychological distress. 



How Mila is making GenAI safer 

Gandrabur defines “safer” GenAI as models designed to avoid harm, provide reliable information, protect users from fraud and be secure enough that they can’t be used maliciously. The development and adoption of GenAI models that meet these criteria is at the heart of her work with the AI Safety Studio. In particular, Mila is working toward safer GenAI from four angles: technology, policy, education and input from people with lived experience of mental health issues. 

Technology 

From a technological standpoint, Mila is looking at how to develop guardrail models that can identify and control risky patterns of use. That could include filters that detect and block harmful content, such as content that encourages or assists suicide. It could also include mechanisms that make it possible to objectively assess the safety of a technology, or certifications that let users know a technology is considered safe. 
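
As a rough illustration of the kind of guardrail model described here, the sketch below wraps a chatbot with a hypothetical safety classifier that screens both the user’s message and the model’s draft reply. All names, thresholds and messages are assumptions for illustration – this is not Mila’s implementation.

```python
# A minimal, hypothetical guardrail wrapper around a chatbot.
# check_safety() stands in for a trained classifier (ideally one tuned on
# mental-health-specific data); the keyword stub below is a placeholder.

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "Please reach out to a crisis line or a mental health professional."
)

def check_safety(text: str) -> float:
    """Return a risk score in [0, 1]. A real guardrail would use a model,
    not keyword matching; this stub only illustrates the interface."""
    risky_terms = ("hurt myself", "end my life", "suicide")
    return 1.0 if any(term in text.lower() for term in risky_terms) else 0.0

def guarded_reply(user_message: str, chatbot) -> str:
    # Screen the user's message before the chatbot ever sees it.
    if check_safety(user_message) > 0.5:
        return CRISIS_MESSAGE
    draft = chatbot(user_message)
    # Screen the model's draft reply before it reaches the user.
    if check_safety(draft) > 0.5:
        return CRISIS_MESSAGE
    return draft

# Usage with a stand-in chatbot:
print(guarded_reply("I want to hurt myself", lambda msg: "..."))
```

Screening both directions matters: a filter on user input alone cannot catch a harmful reply the model generates on its own.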

Technological safety also means creating GenAI solutions that don’t only account for the needs of the majority. “LGBTQ2S+, Indigenous, Francophone, neurodiverse and other communities have distinct mental health needs that aren’t always accounted for in some large-scale models,” says Gandrabur. “For safer GenAI solutions, these communities have to be considered in development.”  

Policy 

Mila advocates for rules and laws that define acceptable structures for governing GenAI use, much as movies are rated and alcohol sales are regulated. That likely includes age restrictions or other strategies to ensure children and youth are served age-appropriate AI-generated content. For these structures to be effective, they must come with real, enforceable consequences for non-compliance. 

Education 

For Mila, education needs to span both the public and mental health professionals. Improving individual understanding of what GenAI is (and is not) can help people interact with it in healthier, more effective ways. Fostering professional understanding ensures clinicians are better equipped to respond when a client appears to prioritize an AI-generated “diagnosis” over the advice of their mental health provider or feedback from their peers and family. To address this, Mila is advocating to governments for better AI education in classrooms. But large-scale change can take time, so Mila is also actively developing curricula that could be incorporated sooner into professional medical education. 

Lived experience 

Finally, Mila prioritizes working with those at greatest risk of AI-related harm to create solutions that meet their needs. Gandrabur says it’s especially important to include youth. 

“We, as adults, often think we have the answers because we know more or have more experience. But we’re often wrong,” she explains. “Youth have a much deeper understanding of the phenomena they and their friends are experiencing, like loneliness and digital solitude. So, their suggestions carry enormous value. If we want to help youth, we have to listen to them.” 

One way Mila is involving young adults in GenAI development is by hosting a hackathon through a groundbreaking Canadian collaboration with Bell, Kids Help Phone, and BUZZ HPC. This event aims to build innovative GenAI solutions that will empower young Canadians to interact safely with GenAI tools. Youth will come together with AI researchers, data scientists, mental health professionals and others to test the limits of conversational chatbots. They’ll assess chatbot performance and help develop systems and guidance to improve them.  



Safer GenAI for mental health and beyond 

While the work of the Mila AI Safety Studio is focused primarily on making GenAI safer for mental health, Gandrabur says its principles and learnings are far more broadly applicable. 

“The same methodologies we can use to measure chatbot performance and install guardrails to protect people from harmful behaviours in the mental health space can be applied just as well in other domains,” she says. “Why not also prevent a chatbot from giving bad financial tips or unqualified medical advice? The solutions we’re developing can help reduce those risks, too.”  

By advancing and advocating for solutions that protect youth and other vulnerable populations, Bell and Mila are working to make AI safer for all. That way, GenAI can be a useful tool instead of a harmful influence. 


Source:
1. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026#2.3.2.