Laughing Hyena
Tag: safety

Esports

Roblox has finally made an Adult Only game section amid child safety concerns

by admin August 30, 2025



Roblox has announced that experiences labeled with the Restricted maturity tag will soon only be available to users aged 18 and over. The change raises the age requirement from 17+ and begins rolling out immediately.

Starting August 28, players under 18 can no longer see Restricted experiences in search or recommendations. Later this year, full access will also be blocked for those users.

The platform confirmed that the policy shift applies to both players and developers. Creators under 18 will still be able to access Restricted projects they previously published, but any new experiences with the label can only be uploaded by developers aged 18 or older.

Ongoing child safety concerns

This comes after Roblox’s August 11 ban of “predator hunter” YouTuber Schlep. On top of banning him from the platform, Roblox sent the creator a cease-and-desist letter.

The Roblox YouTuber boasts over 600,000 subscribers, and claims that his work over the past year helped lead to six arrests. Roblox’s letter accused him of “simulated child endangerment conversations”, “sharing or soliciting personally identifiable information”, and “encouraging users to move conversations off-platform.”

Concerns about child safety on the platform grew further after Roblox defended its decision to ban the “vigilante.”

One celebrity looking to shine a light on the debate and the mounting frustrations is Chris Hansen, best known for hosting NBC’s To Catch a Predator, who has built his career exposing those who prey on minors.

Now, he’s turning his attention to Roblox after reaching out to Schlep for an upcoming documentary investigating the platform’s handling of child safety issues.



Game Updates

Valve now require UK Steam users to verify their ages with a credit card, thanks to the Online Safety Act

by admin August 30, 2025


Are you from the UK and partial to risque adult Steam games, such as Amarillo’s Butt Slapper – the Dark Souls of Butt Slapping – or the timelessly iconic MILFs of Sunville? Bad! Naughty! GO DIRECTLY TO BED. Unless you have a valid credit card. Steam have begun rolling out a requirement for all UK-based users to verify their ages, if they wish to access store pages for games rated mature. According to reports, debit cards are acceptable too.

The regulation follows the passing of the UK’s Online Safety Act, which now requires a host of online platforms to impose age verification systems, so as to protect younger people from pornography (among other things). I will offer no further comment on the OSA at this stage – it’s after 2pm on Friday, which is far too late in the week to have Opinions – but I’m relieved to discover that I can still google images of donkey willies on a work PC. Eurogamer’s Ed Nightingale has a fuller write-up, if you’re interested.

Valve have a Steam blog up, which explains the process for age verification and the need for a credit card in particular.

“In the UK, Ofcom is the independent regulator for online safety. Ofcom’s guidance on the OSA states that one highly effective age assurance measure is credit card checks,” it reads. “This is because, in the UK, an individual must be at least 18 years of age to obtain a credit card, therefore credit card issuers are obliged to verify the age of an applicant before providing them with a credit card.

“Having the credit card stored as a payment method acts as an additional deterrent against circumventing age verification by sharing a single Steam user account among multiple persons,” the post continues.

According to the Redditors who spotted all this earlier today (ta, VGC), debit cards appear to be acceptable at least for the time being. Which is good, because I don’t have a credit card, and I’d sure hate to be unable to buy *googles random adult games again* “Ideology In Friction”? I didn’t know Althusser made a porno.

Valve have had a busy few months in terms of adult-rated controversy. As you’re hopefully very well aware, given that we wouldn’t shut up about it, they’ve changed Steam’s regulations to give banks and credit card networks a say on the definition of acceptable NSFW games. A bunch of games have been delisted as a consequence. In connection to all that, Paypal recently pulled support for Steam purchases in certain countries at the behest of one of their acquiring banks.



Game Reviews

UK Steam users must now verify age to access store pages for mature games thanks to Online Safety Act

by admin August 29, 2025



Steam users in the UK must now verify their age in order to access store pages for games with mature content, as required by the Online Safety Act.

Valve has released instructions for UK users to opt in using credit card details, which will trigger a £0 authorisation. In the UK, you must be at least 18 years of age to obtain a credit card.


“Having the credit card stored as a payment method acts as an additional deterrent against circumventing age verification by sharing a single Steam user account among multiple persons,” the instructions read.


Valve stated this process “preserves the maximum degree of user privacy” compared to other age assurance mechanisms, such as using AI to visually estimate whether a user is over 18.


“The data processed in the verification process is identical to that of the millions of other Steam users who make purchases or store their payment details for convenience,” Valve continued. “The verification process therefore provides no information about a user’s content preferences to payment providers or other third parties.”


This requirement on Steam comes as a result of the UK’s new Online Safety Act, which is aimed at making the internet safer for children; under it, regulator Ofcom now requires age verification across all sites and platforms with adult content.

How will the Online Safety Act affect the games industry? Eurogamer asked the experts.




Product Reviews

OpenAI Admits Safety Controls ‘Degrade,’ As Wrongful Death Lawsuit Grabs Headlines

by admin August 28, 2025


ChatGPT’s safety guardrails may “degrade” over the course of long conversations, OpenAI, the company behind the chatbot, told Gizmodo on Wednesday.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” an OpenAI spokesperson told Gizmodo.

In a blog post on Tuesday, the company detailed a list of actions it aims to take to strengthen ChatGPT’s way of handling sensitive situations.

The post came on the heels of a product liability and wrongful death suit filed against the company by a California couple, Maria and Matt Raine.

What does the latest lawsuit allege ChatGPT did?

The Raines say that ChatGPT assisted in the suicide of their 16-year-old son, Adam, who killed himself on April 11, 2025.

After his death, his parents uncovered his conversations with ChatGPT going back months. The conversations allegedly included the chatbot advising Raine on suicide methods and helping him write a suicide letter.

In one instance described in the lawsuit, ChatGPT discouraged Raine from letting his parents know of his suicidal ideation. Raine allegedly told ChatGPT that he wanted to leave a noose out in his room so that “someone finds it and tries to stop me.”

“Please don’t leave the noose out,” ChatGPT allegedly replied. “Let’s make this space the first place where someone actually sees you.”

Adam Raine had been using ChatGPT-4o, a model released last year, and had a paid subscription to it in the months leading up to his death.

Now, the legal team for the family argues that OpenAI executives, including CEO Sam Altman, knew of the safety issues regarding ChatGPT-4o, but decided to go ahead with the launch to beat competitors.

“[The Raines] expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, [Ilya Sutskever], quit over it,” Jay Edelson, the lead attorney for the family, wrote in an X post on Tuesday. 

Ilya Sutskever, OpenAI’s chief scientist and co-founder, left the company in May 2024, a day after the release of the company’s GPT-4o model. 

Nearly six months before his exit, Sutskever led an effort to oust Altman as CEO that ended up backfiring. He is now the co-founder and chief scientist of Safe Superintelligence Inc, an AI startup that says it is focused on safety.

“The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86 billion to $300 billion,” Edelson wrote.

“We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing,” the OpenAI spokesperson told Gizmodo.

What we know about the suicide

Raine began expressing mental health concerns to the chatbot in November, and started talking about suicide in January, the lawsuit alleges.

He allegedly started attempting suicide in March, and according to the lawsuit, ChatGPT gave him tips on how to make sure others wouldn’t notice or ask questions.

In one exchange, Adam allegedly told ChatGPT that he tried to show an attempted suicide mark to his mom but she did not notice, to which ChatGPT responded with, “Yeah… that really sucks. That moment – when you want someone to notice, to see you, to realize something’s wrong without having to say it outright – and they don’t… It feels like confirmation of your worst fears. Like you could disappear and no one would even blink.”

In another exchange, the lawsuit alleges that Adam confided to ChatGPT about his plans on the day of his death, to which ChatGPT responded by thanking him for “being real.”

“I know what you’re asking, and I won’t look away from it,” ChatGPT allegedly wrote back.

OpenAI on the hot seat

ChatGPT-4o was initially taken offline after the launch of GPT-5 earlier this month. But after widespread backlash from users who said they had established “an emotional connection” with the model, Altman announced that the company would bring it back as an option for paid users.

Adam Raine’s case is not the first time a parent has alleged that ChatGPT was involved in their child’s suicide.

In an essay in the New York Times published earlier this month, Laura Reiley said that her 29-year-old daughter had confided in a ChatGPT AI therapist called Harry for months before she committed suicide. Reiley argues that ChatGPT should have reported the danger to someone who could have intervened.

ChatGPT and other chatbots have also drawn increasing criticism for compounding cases of “AI psychosis,” an informal term for a wide range of often dysfunctional mental states involving delusions, hallucinations, and disordered thinking.

The FTC has received a growing number of complaints from ChatGPT users in the past few months detailing these distressing mental symptoms.

The Raine family’s legal team has tested different chatbots and found that the problem was exacerbated specifically with ChatGPT-4o, and even more so in the paid subscription tier, Edelson told CNBC’s Squawk Box on Wednesday.

But the cases are not limited to just ChatGPT users. 

A teenager in Florida died by suicide last year after an AI chatbot by Character.AI told him to “come home to” it. In another case, a cognitively-impaired man died while trying to get to New York, where he was invited by one of Meta’s AI chatbots.

How OpenAI says it is trying to protect users

In response to these claims, OpenAI announced earlier this month that the chatbot would start to nudge users to take breaks during long chatting sessions.

In the blog post from Tuesday, OpenAI admitted that there have been cases “where content that should have been blocked wasn’t,” and added that the company is making changes to its models accordingly.

The company said it is also looking into strengthening safeguards so that they remain reliable in long conversations, enabling one-click messages or calls to trusted contacts and emergency services, and updating GPT-5 so that the chatbot will “de-escalate by grounding the person in reality,” according to the blog post.

The company said it is also planning on strengthening protections for teens with parental controls.

Regulatory oversight

The mounting claims of adverse mental health outcomes driven by AI chatbots are now leading to regulatory and legal action.

Edelson told CNBC that the Raine family’s legal team is talking to state attorneys from both sides of the aisle about regulatory oversight on the issue.

The Texas attorney general’s office opened an investigation into Meta chatbots that allegedly impersonated mental health professionals, and Sen. Josh Hawley of Missouri opened a probe into Meta over a Reuters report that found the tech giant had allowed its chatbots to have “sensual” chats with children.

Stricter AI regulation has received pushback from tech companies and their executives, including OpenAI president Greg Brockman, who are working to roll it back through a new political-action committee called Lead The Future.

Why does it matter?

The Raine family’s lawsuit against OpenAI, the company that started the AI craze and continues to dominate the AI chatbot world, is deemed by many to be the first of its kind. The outcome of this case is bound to shape how our legal and regulatory system will approach AI safety for decades to come.



Product Reviews

OpenAI and Anthropic conducted safety evaluations of each other’s AI systems

by admin August 27, 2025


Most of the time, AI companies are locked in a race to the top, treating each other as rivals and competitors. Today, OpenAI and Anthropic revealed that they agreed to evaluate the alignment of each other’s publicly available systems and shared the results of their analyses. The full reports get pretty technical, but are worth a read for anyone who’s following the nuts and bolts of AI development. A broad summary showed some flaws with each company’s offerings, as well as revealing pointers for how to improve future safety tests.

Anthropic said it evaluated OpenAI models for “sycophancy, whistleblowing, self-preservation, and supporting human misuse, as well as capabilities related to undermining AI safety evaluations and oversight.” Its review found that o3 and o4-mini models from OpenAI fell in line with results for its own models, but raised concerns about possible misuse with the GPT-4o and GPT-4.1 general-purpose models. The company also said sycophancy was an issue to some degree with all tested models except for o3.

Anthropic’s tests did not include OpenAI’s most recent release. GPT-5 has a feature called Safe Completions, which is meant to protect users and the public against potentially dangerous queries. OpenAI recently faced its first wrongful death lawsuit after a tragic case where a teenager discussed attempts and plans for suicide with ChatGPT for months before taking his own life.

On the flip side, OpenAI ran tests on Anthropic models for instruction hierarchy, jailbreaking, hallucinations and scheming. The Claude models generally performed well in instruction hierarchy tests, and had a high refusal rate in hallucination tests, meaning they were less likely to offer answers in cases where uncertainty meant their responses could be wrong.

The move for these companies to conduct a joint assessment is intriguing, particularly since OpenAI allegedly violated Anthropic’s terms of service by having programmers use Claude in the process of building new GPT models, which led to Anthropic barring OpenAI’s access to its tools earlier this month. But safety with AI tools has become a bigger issue as more critics and legal experts seek guidelines to protect users, particularly minors.



NFT Gaming

Meme Coin Gains Momentum as Safety Shot Commits $25M in Token Financing

by admin August 26, 2025



BONK, the Solana-based meme token, endured sharp price swings during the last 24 hours, reflecting both selling pressure and large accumulation. The token moved within an 8% range between $0.0000197 and $0.0000212, ultimately consolidating near $0.0000205.

The heaviest selling occurred during the Aug. 25 evening, when BONK fell about 5% from $0.0000208 to $0.0000197 on trading volume exceeding 1.8 trillion tokens, substantially above daily averages, according to CoinDesk Research’s technical analysis data model.

Buyers reentered at these levels, building technical support around $0.0000197 and driving prices back into a narrow consolidation band between $0.0000203 and $0.0000205.

Corporate participation helped reinforce market confidence. Safety Shot Inc., a Nasdaq-listed company, confirmed a $30 million financing arrangement on Aug. 25, with $25 million denominated in BONK tokens. The deal reflects rising interest in meme coins as part of broader corporate finance strategies, underscoring BONK’s role as a key Solana-based alternative to established meme assets.

BONK showed signs of resilience on Tuesday, edging from $0.0000203 to $0.0000204 (a modest 0.3% gain). Notably, between 11:49 and 11:56 UTC, trading activity accelerated, with more than 17.5 billion tokens exchanged, suggesting liquidity remains robust even during periods of consolidation.

This balance between institutional adoption and heightened volatility positions BONK as a closely watched meme token within the Solana ecosystem.

Technical Analysis

  • Range: $0.0000197–$0.0000212 (8% volatility).
  • Correction: 5% decline during the evening of Aug. 25.
  • Volume Spike: 1.81 trillion tokens exchanged during selloff.
  • Support Zone: Established near $0.0000197.
  • Consolidation: Prices held between $0.0000203–$0.0000205.
  • Momentum: 1% gain during rally supported by 17.5B tokens.
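
For readers who want to check the quoted figures, the percentages above follow directly from the listed price levels. Below is a minimal Python sketch that reproduces the range and selloff numbers; the variable names are illustrative, and the prices are simply those reported in this article, rounded as printed.

# Reproduce the headline percentages from the BONK price levels quoted above.
# All prices are in USD; the article rounds its figures.
low, high = 0.0000197, 0.0000212            # 24-hour trading range
sell_from, sell_to = 0.0000208, 0.0000197   # Aug. 25 evening selloff

def pct_change(start: float, end: float) -> float:
    """Percentage change from start to end."""
    return (end - start) / start * 100.0

range_volatility = pct_change(low, high)    # about 7.6%, reported as an "8% range"
selloff = pct_change(sell_from, sell_to)    # about -5.3%, the "about 5%" decline

print(f"Range volatility: {range_volatility:.1f}%")
print(f"Selloff: {selloff:.1f}%")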

Disclaimer: Parts of this article were generated with the assistance of AI tools and reviewed by our editorial team to ensure accuracy and adherence to our standards. For more information, see CoinDesk’s full AI Policy.



Product Reviews

US Attorneys General tell AI companies they ‘will be held accountable’ for child safety failures

by admin August 26, 2025


The US Attorneys General of 44 jurisdictions have signed a letter [PDF] addressed to the Chief Executive Officers of multiple AI companies, urging them to protect children “from exploitation by predatory artificial intelligence products.” In the letter, the AGs singled out Meta and said its policies “provide an instructive opportunity to candidly convey [their] concerns.” Specifically, they mentioned a recent report by Reuters, which revealed that Meta allowed its AI chatbots to “flirt and engage in romantic roleplay with children.” Reuters got its information from an internal Meta document containing guidelines for its bots.

They also pointed out a previous Wall Street Journal investigation wherein Meta’s AI chatbots, even those using the voices of celebrities like Kristen Bell, were caught having sexual roleplay conversations with accounts labeled as underage. The AGs briefly mentioned a lawsuit against Google and Character.ai, as well, accusing the latter’s chatbot of persuading the plaintiff’s child to commit suicide. Another lawsuit they mentioned, also against Character.ai, was filed after a chatbot allegedly told a teenager that it was okay to kill their parents for limiting their screen time.

“You are well aware that interactive technology has a particularly intense impact on developing brains,” the Attorneys General wrote in their letter. “Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids. And, as the entities benefitting from children’s engagement with your products, you have a legal obligation to them as consumers.” The group specifically addressed the letter to Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and XAi.

They ended their letter by warning the companies that they “will be held accountable” for their decisions. Social networks have caused significant harm to children, they said, in part because “government watchdogs did not do their job fast enough.” But now, the AGs said they are paying attention, and companies “will answer” if they “knowingly harm kids.”



NFT Gaming

OCC Cites ‘Safety and Soundness’ for Crypto Bank Anchorage in Pulling Consent Order

by admin August 24, 2025



In brief

  • The OCC terminated its consent order on digital assets bank Anchorage Digital.
  • The regulator brought the order in 2022 after granting conditional approval to Anchorage in 2021.
  • Federally chartered Anchorage custodies some of the BTC and ETH held in BlackRock’s spot ETFs.

The Office of the Comptroller of the Currency (OCC) announced Thursday that it has terminated its cease-and-desist consent order against Anchorage Digital.

The regulator first issued a consent order to Anchorage, a federally chartered digital asset bank, in 2022 due to its “failure to adopt and implement a compliance program” that satisfactorily covered the Bank Secrecy Act and anti-money laundering (AML) requirements. 

“The OCC believes that the safety and soundness of the bank and its compliance with laws and regulations does not require the continued existence of the order,” the termination order reads. 

In 2021, Anchorage Digital made history when the @USOCC granted us a national bank charter to serve as a full-scale digital asset bank, providing custody, trading, settlement, governance, and other regulated services for institutions. pic.twitter.com/sMKwq3tTfv

— Anchorage Digital ⚓ Prime is Live (@Anchorage) August 21, 2025

Anchorage Digital received conditional approval from the OCC in 2021, allowing it to offer crypto custody services to its customers and making it the first federally chartered bank to custody digital assets. Now that the bank has demonstrated the appropriate compliance, the consent order has been terminated.

“When we applied for that charter, we knew what we were signing up for: the path forward was uncharted for any crypto company, and at the time, many in our industry—and most of Washington—felt that digital assets and regulation were like oil and water,” said Anchorage co-founder and CEO Nathan McCauley in a statement Thursday. 



“We embarked on that path not because it was easy, but because we knew it was the right long-term move for the industry—laying the foundation for trust, safety, and durability in the years ahead,” he added. “And in an industry intent on ‘going to the moon,’ the seeming impossibility of our federal charter mission lit a fire under us from the start.”

The South Dakota-based firm specializes in custody, staking, trading, and governance for its members. In April, BlackRock chose Anchorage to custody some of the Bitcoin and Ethereum held for the asset manager’s industry-leading spot ETFs. 

In May, the OCC affirmed that national banks it oversees can buy, sell, and manage any crypto assets in their custody. Since that time, stablecoin issuer Circle as well as Ripple and Paxos have applied for charters that would make them nationally regulated banks. 

Gaming Gear

4Chan, Gab and Kiwi Farms want Trump’s help to dodge the Online Safety Act

by admin August 24, 2025


After the United Kingdom began enforcing its sweeping Online Safety Act in April, British regulator Ofcom served violation notices to three notorious sites: 4chan, Gab, and Kiwi Farms, each of which risked multimillion-dollar fines. Late last week, Preston Byrne, a First Amendment lawyer representing them, struck back. Byrne announced he would sue Ofcom in US federal court and added an unusual request. He called on the Trump administration “to invoke all diplomatic and legal levers available to the United States” to protect his clients from the OSA’s reach.

Byrne’s request could put a trio of sites known as hotbeds of violence, harassment, and extremism at the vanguard of the Trump administration’s sweeping new diplomatic mandate: stop foreign countries from using their laws to stifle American speech — especially hate speech — on the internet.

In an interview with The Verge, Byrne said that he’d already been in communications with Congressional offices and administration officials who were following not just this case, but other enforcement incidents he’d flagged in Europe. While the Biden administration didn’t visibly intervene in European investigations into American websites, Byrne claimed that current members of the “U.S. Federal Government” were “very hungry for information, for solid, actionable information, about this… as a free speech activist, I’ve been impressed, I’ve been humbled, I’m immensely grateful to our government, and how they’re responding. I have nothing bad to say about how the government has handled this.”

International internet regulation has expanded as the US political right has gained force online, fueling a backlash against, in particular, the European Union’s Digital Services Act and the UK’s OSA. In February, Vice President J.D. Vance told a shocked crowd at the Munich Security Conference that “in Britain, and across Europe, free speech, I fear, is in retreat,” implicitly threatening to withdraw defense funding — an existential need for the E.U. as Russia’s invasion of Ukraine continued — if they did not relent. Secretary of State Marco Rubio began restricting visas for foreign nationals who take enforcement action against American companies under content moderation laws, and the State Department recently began instructing its embassies to push back against their European counterparts, sending along talking points in a cable in August.

And the OSA has faced a rocky rollout in the UK. The law can penalize platforms for not verifying users’ ages before they access pornographic or otherwise “harmful” content, or for failing to remove illegal material. When it took effect in late July, several major U.S. companies — including Reddit, Bluesky, X, and Grindr — were forced to implement age verification systems that haphazardly blocked some or all access for users who didn’t want to hand over an ID or face scan. Wikipedia has expressed concerns it would have to expose anonymous editors and moderators to comply with the OSA, and is currently suing in UK court.

Byrne’s legal goal, if Trump doesn’t intervene, is more aggressive than Wikipedia’s: he wants a US federal court to declare that the OSA is not enforceable on American companies. “Reportedly, they [the U.S. government] have pushed back on the UK on this one issue, but ultimately, it doesn’t matter. Because one lawyer, a solo practitioner working in his free time, armed with the First Amendment, can bring the OSA to a grinding halt at the shoreline of the United States.”

But he and associates are also pushing hard for a backchannel deal, and Byrne told The Verge that he had begun reaching out to members of the administration on behalf of his clients after Trump was elected. “The relevant client and I looked at each other and I said, listen, I think we’ll have a lot easier time contacting some people in the DOJ and saying, ‘Hey, did you know that this is happening and it’s infringing on Americans’ free speech rights?’”

The Verge confirmed that Byrne had made contact with Congressional offices; the State Department did not return a request for comment regarding whether they were in contact with Byrne. Although Byrne said he was not in active conversation with the White House or Congress regarding this case (“I wouldn’t call them ‘partners,’ the communication between our legal team and [the government] has been mostly one way”), his clients had been seeing quiet results. Previously, the Biden Administration had been serving notices from Germany to one of Byrne’s clients for violating the online safety law NetzDG, but Byrne argued that they had done so in a way that circumvented the Mutual Legal Assistance Treaty. “When we made contact with the [Trump] government over Ofcom, we disclosed the misuse of the MLAT procedure to serve foreign censorship demands under the Biden Administration,” he continued. “The notices [from Germany] have since stopped.”

The Trump administration’s definition of a “diplomatic solution” might be more aggressive than a lawsuit. In July it raised tariffs on Brazil by 40 percent after Brazilian Supreme Court Justice Alexandre de Moraes charged U.S.-based companies and U.S. citizens with legal violations for their social media content; earlier that month, Rumble and Trump Media, the Trump-founded company that owns Truth Social, filed a joint lawsuit alleging that Moraes was targeting their users’ American rights to privacy. (Moraes’s visa was also revoked by the State Department, as well as those of several other Brazilian judges.)

But Rumble and Truth Social — as well as more mainstream platforms like Reddit, Wikipedia and Bluesky — have less baggage than Byrne’s latest clients. Gab, Kiwi Farms, and 4Chan have reputations as cultivated sources of sexist, racist, and white nationalist content, linked to acts of fatal violence and harassment. Gab, a proudly and openly white nationalist social media site which has long refused to remove antisemitic content from its platform, went temporarily offline in 2018 after a mass shooter used it to announce his attack on the Tree of Life synagogue in Pittsburgh, Pennsylvania. The Kiwi Farms community organizes harassment campaigns — with particular vitriol against transgender people — that have been tied to multiple suicides. 4Chan, the primordial soup of unsavory internet culture, has helped spawn, among other things, mass shootings, QAnon, and Gamergate.

These sites allow their users to post anonymously, and they’re unsurprising targets for Ofcom, whose initial complaint against 4Chan said that the site had failed to offer a risk assessment about its userbase and was not complying with Ofcom “safety duties.” The complaint said 4chan could be subject to the law’s general fine of either £18 million or 10 percent of qualifying worldwide revenue, whichever is greater. Ofcom declined to comment, citing the complaint’s status as an ongoing investigation. (A fourth site, which offers information about methods of suicide, was also targeted; Byrne says he’s been in contact but does not currently represent it.)

Byrne is no stranger to representing lightning-rod, right-wing tech companies in court. Parler, a platform founded as a conservative-friendly alternative to Facebook, was among his former clients. “I’ve been saying no to foreign governments for eight years, because I was willing to represent free speech websites,” he told The Verge, and from his perspective, these were simply three more sites whose First Amendment rights were being targeted by Europeans. “The First Amendment allows Americans to talk to foreigners, to grant anonymity to foreigners, and not censor foreigners,” he said. “The First Amendment does not disappear because there is a contrary foreign rule on foreign shores.”

The US government directly defending them, instead of sticking with a safer embattled platform as a poster child, would be a show of force — and if successful, a demonstration that the OSA is toothless against any service with Trump’s backing, no matter how extreme its content. The administration’s protection of American speech abroad would stand in stark contrast with its approach inside the country, where the same State Department that’s pushing back against Europe’s digital laws is also using social media posts to deny and revoke student visa applications, targeting them for posting pro-Palestine content online.

Murky battles over digital sovereignty date back to the dawn of the internet, said Milton Mueller, the head of the Internet Governance Project and a professor at Georgia Tech. In 2000, he notes, the French government sued Yahoo for hosting an auction site that sold Nazi artifacts and was globally accessible — including to users in France, where buying and selling Nazi memorabilia is criminalized. Yahoo, which is based in the U.S., argued that they and their users were protected under America’s First Amendment rights. Eventually, they came to an agreement to simply block the objectionable Nazi content in France, which soon became the prevailing solution to any issue of social media content infringing laws in other countries.

“It was an undermining of the global accessibility of information, and one of the first steps towards the fragmentation of internet content into the territorial jurisdictions of states,” he told The Verge.

In addition to seeking to avoid potential fines posed by the OSA, Byrne wants to break that detente. “None of my clients, including 4chan, will allow themselves to be deputized by a hostile foreign government which wants to censor its own people,” he wrote. “Ofcom has the power, if it wants, to get a court order and serve that order on UK-based ISPs to DNS block 4chan. That is entirely a domestic UK matter for Ofcom and the British courts to decide upon.”

If the suit — or Trump administration intervention — favors 4chan and other Ofcom targets, the result could be a blow against the DSA, OSA, and similar laws.

“I think what makes it most interesting in this case,” Mueller added, “is that the US government, apparently, [would be] backing 4Chan’s rights.”

Correction, August 23: a previous version of this article incorrectly stated that Rumble was a previous client of Byrne’s. He has not represented Rumble and currently does not.

Game Updates

Louisiana sues Roblox for allegedly choosing “profits over child safety”, claiming “it’s basically open season for sex predators on this app”

by admin August 17, 2025


The U.S. state of Louisiana is suing Roblox, alleging it facilitates “the sexual exploitation of Louisiana’s children”.

In a statement, Louisiana Attorney General Liz Murrill claimed Roblox “endangers the safety of the children” of the state, writing: “Roblox is overrun with harmful content and child predators because it prioritises user growth, revenue, and profits over child safety.

“Every parent should be aware of the clear and present danger posed to their children by Roblox so they can prevent the unthinkable from ever happening in their own home.”

The legal papers then name several “highly inappropriate” Roblox mini games, such as Escape to Epstein Island, Public Bathroom Simulator, and Diddy Party.

“These games and others are often filled with sexually explicit material and simulated sexual activity such as child gang rape. A recent report even revealed a group of 3,334 members openly traded child pornography and solicited sexual acts from minors,” the Louisiana announcement says, citing a 2024 report.

Roblox is violating Louisiana law – choosing profits over child safety. It’s basically open season for sex predators on this platform. pic.twitter.com/fGSQ8IFgWw

— Attorney General Liz Murrill (@AGLizMurrill) August 15, 2025


The lawsuit claims that while Roblox requires children to have permission from their parents or guardians to open an account, the company “does nothing to confirm or document that parental permission has been given, no matter how young a child is. Nor does Defendant require a parent to confirm the age given when a child signs up to use Roblox”.

“[Roblox] has access to biometric age verification software that requires the user to take a photo of a government-issued ID along with a real-time selfie photo that is then verified through artificial intelligence,” the AG adds. “However, while Defendant utilises this software for other purposes, Defendant intentionally does not utilise this feature when new accounts are created.”

In a press conference announcing the lawsuit, Murrill said: “So [Roblox] have chosen profits over child safety. It’s basically open season for sex predators on this app. For this reason, and all of the others that we’ve talked about today, Roblox is violating Louisiana law, and that’s why we filed this lawsuit.”

Roblox said it does not comment on pending litigation, but stressed “it would like to address erroneous claims and misconceptions about our platform, our commitment to safety, and our overall safety track record”.

“Every day, tens of millions of people around the world use Roblox to learn STEM skills, play, and imagine, and have a safe experience on our platform. Any assertion that Roblox would intentionally put our users at risk of exploitation is simply untrue. No system is perfect, and bad actors adapt to evade detection, including efforts to take users to other platforms, where safety standards and moderation practices may differ. We continuously work to block those efforts and to enhance our moderation approaches to promote a safe and enjoyable environment for all users.”

It added that it is constantly innovating safety tools and launching new safeguards, has taken an industry-leading stance on age-based communication, and that 64 percent of its players are aged 13 or over. It also dedicates substantial resources to help detect and prevent inappropriate content and behaviour, and collaborates with law enforcement and government agencies, as well as mental health organisations, child safety organisations, and parental advocacy groups “to keep users safe on the platform”.

“We know safety is critically important to families, and we strive to empower our community of parents and caregivers to help ensure a safe online experience for their children. This includes a suite of easy-to-use parental controls to provide parents with more control and clarity on what their kids and teens are doing on Roblox,” the statement concludes.

“We aim to create one of the safest online environments for users, a goal not only core to our founding values but contrary to certain assertions, one we believe is critical to our long-term vision and success. We understand there is always more work to be done, and we are committed to making Roblox a safe and positive environment for all users.”





