Federal Judge: Anthropic Acted Legally With AI Book Training


A federal judge has ruled for the first time that it was legal for the $61.5 billion AI startup Anthropic to train its AI model on copyrighted books without compensating or crediting the authors.

U.S. District Judge William Alsup of San Francisco stated in a ruling filed on Monday that Anthropic’s use of copyrighted, published books to train its AI model was “fair use” under U.S. copyright law because it was “exceedingly transformative.” Alsup compared the situation to a human reader who learns to be a writer by reading books in order to create new work.

“Like any reader aspiring to be a writer, Anthropic’s [AI] trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Alsup wrote.

According to the ruling, although Anthropic’s use of copyrighted books as training material for Claude was fair use, the court will hold a trial on the pirated books Anthropic used to build its central library and determine the resulting damages.

Related: ‘Extraordinarily Expensive’: Getty Images Is Pouring Millions of Dollars Into One AI Lawsuit, CEO Says

The ruling marks the first time a federal judge has sided with a tech company over creatives in an AI copyright lawsuit, setting a precedent for courts to favor AI companies over individuals in such disputes.

These copyright lawsuits hinge on how a judge interprets the fair use doctrine, a concept in copyright law that permits the use of copyrighted material without the copyright holder's permission. Fair use rulings depend on how different the end work is from the original, what it is being used for, and whether it is being reproduced for commercial gain.

The plaintiffs in the class action case, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, are all authors who allege that Anthropic used their work to train its chatbot without their permission. They filed the initial complaint, Bartz v. Anthropic, in August 2024, alleging that Anthropic had violated copyright law by pirating books and replicating them to train its AI chatbot.

The ruling details that Anthropic downloaded millions of copyrighted books for free from pirate sites. The startup also bought print copies of copyrighted books, some of which it already had in its pirated library. Employees tore off the bindings of these books, cut down the pages, scanned them, and stored them in digital files to add to a central digital library.

From this central library, Anthropic selected different groupings of digitized books to train its AI chatbot, Claude, the company’s primary revenue driver.

Related: ‘Bottomless Pit of Plagiarism’: Disney, Universal File the First Major Hollywood Lawsuit Against an AI Startup

The judge ruled that because Claude’s output was “transformative,” Anthropic was permitted to use the copyrighted works under the fair use doctrine. However, Anthropic still has to go to trial over the books it pirated.

“Anthropic had no entitlement to use pirated copies for its central library,” the ruling reads.

Claude has proven lucrative. According to the ruling, Anthropic generated over $1 billion in annual revenue last year from corporate clients and individual subscribers to the AI chatbot. Paid subscriptions for Claude range from $20 to $100 per month.

Anthropic faces a separate lawsuit from Reddit. In a complaint filed earlier this month in a Northern California court, Reddit claimed that Anthropic used content from its site as AI training material without permission.

