Laughing Hyena
  • Home
  • Hyena Games
  • Esports
  • NFT Gaming
  • Crypto Trends
  • Game Reviews
  • Game Updates
  • GameFi Guides
  • Shop
Tag: Anthropic

Product Reviews

Anthropic reaches a settlement over authors’ class-action piracy lawsuit

by admin August 26, 2025


Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The settlement means the company avoids a potentially far costlier ruling had the case over its use of copyrighted materials to train artificial intelligence tools moved forward.

In June, Judge William Alsup handed down a mixed result in the case, ruling that Anthropic’s training of LLMs on copyrighted materials constituted fair use. However, he found that the company’s unauthorized, unpaid acquisition of those copyrighted materials left the authors free to pursue a piracy claim. With statutory damages for piracy starting at $750 per infringed work and a library of pirated works estimated to number about 7 million, Anthropic could have been on the hook for billions of dollars.

Litigation around AI and copyright is still shaking out, with no clear precedents emerging yet. This also isn’t Anthropic’s first negotiation with creatives after using their work; it was sued by members of the music industry in 2023 and reached a partial resolution earlier this year. And the details of Anthropic’s settlement have yet to be revealed. Depending on the number of authors who make a claim and the amount Anthropic agreed to pay out, either side could wind up feeling like the winner after the dust settles.



Gaming Gear

Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

by admin August 26, 2025


Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what may have been a financially devastating outcome in court.

The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment.

In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic largely siding with Anthropic, finding that the company’s usage of the books was “fair use,” and thus legal.

But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called “shadow libraries,” including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action suit for pirating their works; the legal showdown was slated to begin this December.

Statutory damages for this kind of piracy start at $750 per infringed work under US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately seven million works, the AI company was potentially facing court-imposed penalties amounting to billions of dollars, or even more than $1 trillion.
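The figures above are easy to sanity-check. The $750-per-work floor comes from the article; the $150,000-per-work ceiling for willful infringement is an added assumption here, drawn from the statutory damages range in US copyright law (17 U.S.C. § 504(c)), and is what puts the worst case over $1 trillion:

```python
# Back-of-the-envelope statutory damages for ~7 million pirated works.
# $750 floor is cited in the article; the $150,000 willful-infringement
# ceiling is an assumption based on 17 U.S.C. § 504(c).
works = 7_000_000
minimum = works * 750        # statutory floor
maximum = works * 150_000    # willful-infringement ceiling

print(f"minimum: ${minimum:,}")  # minimum: $5,250,000,000
print(f"maximum: ${maximum:,}")  # maximum: $1,050,000,000,000
```

Even at the statutory floor, the exposure exceeds $5 billion, which is why commentators describe the potential award in "doomsday" terms.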

“It’s a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case. And the company recently hired a new trial team,” says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. “But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in ‘doomsday’ amounts.”

Most authors who may have been part of the class action lawsuit were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a “list of affected works” to the court on September 1. This means that many of these writers were not privy to the negotiations that took place.

“The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled,” says James Grimmelmann, a professor of digital and internet law at Cornell University. “That will be a very important barometer of where copyright owner sentiment stands.”

Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally.

Settlements don’t set legal precedent, but the details of this case will likely still be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.



Gaming Gear

Anthropic will nuke your attempt to use AI to build a nuke

by admin August 23, 2025



  • Anthropic has developed an AI-powered tool that detects and blocks attempts to ask AI chatbots for nuclear weapons design
  • The company worked with the U.S. Department of Energy to ensure the AI could identify such attempts
  • Anthropic claims it spots dangerous nuclear-related prompts with 96% accuracy and has already proven effective on Claude

If you’re the type of person who asks Claude how to make a sandwich, you’re fine. If you’re the type of person who asks the AI chatbot how to build a nuclear bomb, you’ll not only fail to get any blueprints, you might also face some pointed questions of your own. That’s thanks to Anthropic’s newly deployed detector of problematic nuclear prompts.

Like other systems for spotting queries Claude shouldn’t respond to, the new classifier scans user conversations, in this case flagging any that veer into “how to build a nuclear weapon” territory. Anthropic built the classification feature in a partnership with the U.S. Department of Energy’s National Nuclear Security Administration (NNSA), giving it all the information it needs to determine whether someone is just asking about how such bombs work or if they’re looking for blueprints. It’s performed with 96% accuracy in tests.

Though it might seem over the top, Anthropic sees the issue as more than hypothetical. Federal security agencies worry that powerful AI models may have absorbed sensitive technical documents and could pass along a guide to building something like a nuclear bomb. Even if Claude and other AI chatbots block the most obvious attempts, innocent-seeming questions could in fact be veiled attempts at crowdsourcing weapons design, and new generations of chatbots might help with that even when their developers never intend it.



The classifier works by drawing a distinction between benign nuclear content (asking about nuclear propulsion, for instance) and the kind of content that could be turned to malicious use. Human moderators might struggle to keep up with the gray areas at the scale AI chatbots operate, but with proper training, Anthropic and the NNSA believe the AI can police itself. Anthropic claims its classifier is already catching real-world misuse attempts in conversations with Claude.

Nuclear AI safety

Nuclear weapons in particular represent a uniquely tricky problem, according to Anthropic and its partners at the DoE. The same foundational knowledge that powers legitimate reactor science can, if slightly twisted, provide the blueprint for annihilation. The arrangement between Anthropic and the NNSA could catch deliberate and accidental disclosures, and set up a standard to prevent AI from being used to help make other weapons, too. Anthropic plans to share its approach with the Frontier Model Forum AI safety consortium.

The narrowly tailored filter is aimed at making sure users can still learn about nuclear science and related topics. You still get to ask about how nuclear medicine works, or whether thorium is a safer fuel than uranium.

What the classifier aims to block are attempts to turn your home into a bomb lab with a few clever prompts. Normally it would be questionable whether an AI company could thread that needle, but the NNSA’s expertise should set the classifier apart from a generic content moderation system. It understands the difference between “explain fission” and “give me a step-by-step plan for uranium enrichment using garage supplies.”


This doesn’t mean Claude was previously helping users design bombs. But it could help forestall any attempt to do so. Stick to asking about the way radiation can cure diseases or ask for creative sandwich ideas, not bomb blueprints.




