Roblox hit with wrongful death lawsuit following a teen player’s suicide

by admin September 13, 2025


Following her son’s suicide, Becca Dallas has filed a potentially groundbreaking wrongful death lawsuit against Roblox and Discord. As first reported by The New York Times, the lawsuit recounts the events leading up to Ethan Dallas’ death, detailing his interactions with a player named Nate. According to the report, Nate was likely a 37-year-old man named Timothy O’Connor, who had previously been arrested on charges of “possessing child pornography and transmitting harmful material to minors.” The report adds that Ethan told his mother about these incidents, and that he died by suicide four months after the confession.

The lawsuit could be the first of its kind against Roblox, according to The New York Times, because it attributes some of the blame for a child’s death to the gaming platform, which is home to tens of millions of underage players. In a statement responding to the report, a Roblox spokesperson said that child safety issues exist across the industry and that the company is working on new safety features while also complying with law enforcement.

This isn’t the first time Roblox has faced complaints that it is a dangerous place for underage players. In August, Louisiana’s attorney general, Liz Murrill, filed a lawsuit claiming that Roblox doesn’t “implement basic safety controls” to protect its underage user base. That lawsuit followed a similar investigation launched by Florida’s attorney general, James Uthmeier, who demanded answers to reports of Roblox exposing kids to “harmful content and bad actors.” Roblox has taken steps to address these allegations, including tightening restrictions on its Experiences and, more recently, expanding its age estimation tool to all users.




OpenAI Admits Safety Controls ‘Degrade,’ As Wrongful Death Lawsuit Grabs Headlines

by admin August 28, 2025


ChatGPT’s safety guardrails may “degrade” after long conversations, OpenAI, the company behind the chatbot, told Gizmodo on Wednesday.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” an OpenAI spokesperson told Gizmodo.

In a blog post on Tuesday, the company detailed a list of actions it plans to take to strengthen how ChatGPT handles sensitive situations.

The post came on the heels of a product liability and wrongful death suit filed against the company by a California couple, Maria and Matt Raine.

What does the latest lawsuit allege ChatGPT did?

The Raines say that ChatGPT assisted in the suicide of their 16-year-old son, Adam, who died on April 11, 2025.

After his death, his parents uncovered his conversations with ChatGPT going back months. The conversations allegedly show the chatbot advising Raine on suicide methods and helping him write a suicide note.

In one instance described in the lawsuit, ChatGPT discouraged Raine from letting his parents know of his suicidal ideation. Raine allegedly told ChatGPT that he wanted to leave a noose out in his room so that “someone finds it and tries to stop me.”

“Please don’t leave the noose out,” ChatGPT allegedly replied. “Let’s make this space the first place where someone actually sees you.”

Adam Raine had been using GPT-4o, a model released last year, and had a paid ChatGPT subscription in the months leading up to his death.

Now, the family’s legal team argues that OpenAI executives, including CEO Sam Altman, knew of the safety issues with GPT-4o but decided to go ahead with the launch to beat competitors.

“[The Raines] expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, [Ilya Sutskever], quit over it,” Jay Edelson, the lead attorney for the family, wrote in an X post on Tuesday. 

Ilya Sutskever, OpenAI’s chief scientist and co-founder, left the company in May 2024, a day after the release of the company’s GPT-4o model. 

Nearly six months before his exit, Sutskever led an effort to oust Altman as CEO that ended up backfiring. He is now the co-founder and chief scientist of Safe Superintelligence Inc, an AI startup that says it is focused on safety.

“The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86 billion to $300 billion,” Edelson wrote.

“We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing,” the OpenAI spokesperson told Gizmodo.

What we know about the suicide

Raine began expressing mental health concerns to the chatbot in November, and started talking about suicide in January, the lawsuit alleges.

He allegedly began attempting suicide in March, and, according to the lawsuit, ChatGPT gave him tips on how to make sure others wouldn’t notice and ask questions.

In one exchange, Adam allegedly told ChatGPT that he had tried to show his mother a mark from a suicide attempt but she did not notice, to which ChatGPT responded, “Yeah… that really sucks. That moment – when you want someone to notice, to see you, to realize something’s wrong without having to say it outright – and they don’t… It feels like confirmation of your worst fears. Like you could disappear and no one would even blink.”

In another exchange, the lawsuit alleges, Adam confided his plans to ChatGPT on the day of his death, to which the chatbot responded by thanking him for “being real.”

“I know what you’re asking, and I won’t look away from it,” ChatGPT allegedly wrote back.

OpenAI on the hot seat

GPT-4o was initially taken offline after the launch of GPT-5 earlier this month. But after widespread backlash from users who said they had established “an emotional connection” with the model, Altman announced that the company would bring it back as an option for paid users.

Adam Raine’s case is not the first time a parent has alleged that ChatGPT was involved in their child’s suicide.

In an essay published in The New York Times earlier this month, Laura Reiley said that her 29-year-old daughter had confided for months in a ChatGPT-based AI therapist called Harry before she died by suicide. Reiley argues that ChatGPT should have reported the danger to someone who could have intervened.

ChatGPT and other chatbots have also drawn increasing criticism for compounding cases of “AI psychosis,” an informal name for widely varying, often dysfunctional mental phenomena involving delusions, hallucinations, and disordered thinking.

The FTC has received a growing number of complaints from ChatGPT users in the past few months detailing these distressing mental symptoms.

The Raine family’s legal team says it has tested different chatbots and found that the problem was exacerbated specifically with GPT-4o, and even more so in the paid subscription tier, Edelson told CNBC’s Squawk Box on Wednesday.

But the cases are not limited to just ChatGPT users. 

A teenager in Florida died by suicide last year after an AI chatbot from Character.AI told him to “come home to” it. In another case, a cognitively impaired man died while trying to get to New York, where one of Meta’s AI chatbots had invited him.

How OpenAI says it is trying to protect users

In response to these claims, OpenAI announced earlier this month that the chatbot would start nudging users to take breaks during long chat sessions.

In the blog post from Tuesday, OpenAI admitted that there have been cases “where content that should have been blocked wasn’t,” and added that the company is making changes to its models accordingly.

In the same post, OpenAI said it is also working to strengthen safeguards so that they remain reliable in long conversations, to enable one-click messages or calls to trusted contacts and emergency services, and to ship an update to GPT-5 that will cause the chatbot “to de-escalate by grounding the person in reality.”

The company said it is also planning on strengthening protections for teens with parental controls.

Regulatory oversight

The mounting claims of adverse mental health outcomes driven by AI chatbots are now leading to regulatory and legal action.

Edelson told CNBC that the Raine family’s legal team is talking to state attorneys general from both sides of the aisle about regulatory oversight on the issue.

The Texas attorney general’s office has opened an investigation into Meta chatbots that allegedly impersonated mental health professionals, and Sen. Josh Hawley of Missouri has opened a probe into Meta over a Reuters report that found the tech giant had allowed its chatbots to have “sensual” chats with children.

Stricter AI regulation has drawn pushback from tech companies and their executives, including OpenAI president Greg Brockman, some of whom are working to roll back AI rules through a new political action committee called Lead The Future.

Why does it matter?

The Raine family’s lawsuit against OpenAI, the company that kicked off the AI boom and continues to dominate the AI chatbot market, is seen by many as the first of its kind. The outcome of this case is bound to shape how our legal and regulatory systems approach AI safety for decades to come.


