OpenAI said it is making a series of changes to the way its popular chatbot interacts with users, following a lawsuit filed by the parents of a 16-year-old who hanged himself.
The parents of Adam Raine allege that ChatGPT coached their son on methods of self-harm, eventually leading him to take his own life on April 11. They also allege that the company knowingly put profit above safety when it launched the GPT-4o version of its artificial intelligence chatbot last year.
OpenAI enhances mental health safeguards
Sam Altman’s company has now published a blog post on its website, saying “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now”. The post details the ways OpenAI is trying to address the situation.
“Our goal is for our tools to be as helpful as possible to people – and as a part of this, we’re continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the post added.
The company said it will update ChatGPT to better recognise and respond to the different ways people may express mental distress – for example, by explaining the dangers of sleep deprivation and suggesting rest if a user mentions feeling invincible after being up for two nights. It also said it would strengthen safeguards around conversations about suicide and shore up some of the guardrails that can break down during lengthy conversations.
“We are continuously improving how our models respond in sensitive interactions and are currently working on targeted safety improvements across several areas, including emotional reliance, mental health emergencies, and sycophancy,” the post said.
OpenAI will also soon introduce parental controls that give parents more insight into, and a way to shape, how their teens use ChatGPT. Also in the works is an option for teens (with parental oversight) to designate a trusted emergency contact, so that in moments of acute distress ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.
OpenAI will offer more localised help for people who express the intent to harm themselves. “We’ve begun localising resources in the US and Europe, and we plan to expand to other global markets. We’ll also increase accessibility with one-click access to emergency services,” the company said.
“We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals that people could reach directly through ChatGPT. This will take time and careful work to get right.”
On the Raine lawsuit, a company spokesperson said: “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”
According to a Bloomberg report, the Raine lawsuit adds to a growing number of reports of heavy chatbot users engaging in dangerous behaviour. More than 40 state attorneys general have warned a dozen top AI companies that they are legally obligated to protect children from sexually inappropriate interactions with chatbots.