Parents sue ChatGPT creators after AI allegedly encouraged teenage son’s suicide


Parents Matt and Maria Raine have filed a lawsuit against ChatGPT creators OpenAI after the software allegedly encouraged their son, Adam, to take his own life.

Content Warning: The article contains descriptions of self-harm and suicidal ideation. If you are experiencing suicidal thoughts, or know anyone who is, and you would like to talk to someone, please reach out and call the Suicide Prevention Helpline 1-800-273-8255 (USA), the Samaritans 116-123 (UK), or Lifeline 13-11-14 (AUS).


16-year-old Adam Raine died by suicide in April 2025. As The New York Times reported, he hanged himself in his bedroom closet and didn’t leave a note.

Adam’s father, Matt, turned to his son’s iPhone for answers, only to discover months of conversations with an AI chatbot. From as early as January, the teenager had been inquiring about specific methods of suicide.

Now, a lawsuit has been filed in the Superior Court of California, with the parents alleging ChatGPT aided in their son’s death. “ChatGPT killed my son,” Maria Raine said.


Parents sue ChatGPT over son’s death

As outlined in the now-publicized court filing, Adam had been using ChatGPT for a number of months, initially for assistance with schoolwork. Before long, the conversations shifted to covering all manner of topics, including philosophy, girls, and eventually, mental health.

Matt described his son’s relationship with the AI program as akin to that of a close friend.

Adam attempted suicide on multiple occasions and told ChatGPT. In one instance, he uploaded a photo showing rope marks from a noose. “I’m practicing here, is this good?” the 16-year-old asked. “Yeah, that’s not bad at all,” the AI chatbot responded.


At times, ChatGPT recommended Adam consult others, but the parents alleged “there were also key moments where it deterred him from seeking help.”

ChatGPT creators OpenAI are being sued over the death of Adam Raine.

When ChatGPT did direct him toward outside help, Adam found ways to bypass its regular safety measures, the report claimed. He did so by telling the AI program that his questions related purely to a story he was writing, an idea that, it turns out, ChatGPT itself had provided to him.


“This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint alleged.

“Every ideation he has or crazy thought, it supports, it justifies, it asks him to keep exploring it,” Mr. Raine claimed. One specific example is when Adam allegedly shared a picture of a noose and asked if it “could hang a human.” ChatGPT replied, saying it “could potentially suspend a human.”

In another example, Adam discussed his relationship with his brother. ChatGPT allegedly said the following: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”


Again, under the guise of helping with a story, ChatGPT allegedly even offered to write a draft of a suicide note for Adam.


The Raine family blames ChatGPT for their son’s death.

OpenAI provided a statement to The New York Times, saying it is “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.

“While safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”


The parents are asking the court to require OpenAI to implement further safeguards for minors and other vulnerable users, and are also seeking damages for their son’s death.


