Article

27 Aug 2025

Author:
Robert Booth, The Guardian (UK)

USA: OpenAI to change how ChatGPT responds to users in mental distress after legal action over suicide of teen

Glenn Carstens-Peters via Unsplash

“Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims”, 27 August 2025

The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18…

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman…

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life...

It also offered to help him write a suicide note to his parents.

A spokesperson for OpenAI said the company was “deeply saddened by Mr Raine’s passing”, extended its “deepest sympathies to the Raine family during this difficult time” and said it was reviewing the court filing…

In a blogpost, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims…

OpenAI said it would be “strengthening safeguards in long conversations”.

“As the back and forth grows, parts of the model’s safety training may degrade,” it said. “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”…
