AI Copyright Rulings: Fair Use Upheld, Piracy on Trial

Llama AI by Meta.

US Courts Deliver Mixed Rulings in Landmark AI Copyright Battles: Meta Wins Dismissal, Anthropic’s Fair Use Upheld but Piracy Claims Persist

San Francisco, CA – June 30, 2025 – In a pivotal week for the burgeoning artificial intelligence industry and for intellectual property law, two separate but closely watched lawsuits against tech giants Meta Platforms and Anthropic (developer of the Claude AI) have yielded complex rulings from US federal courts. While both companies secured partial victories, the decisions underscore how unsettled the legal landscape surrounding AI training on copyrighted materials remains, leaving ample room for future litigation while drawing a clearer distinction between “transformative use” and illicit data acquisition.

Meta’s Dismissal: A Win on Technicality, Not Total Vindication

On Wednesday, June 25, US District Judge Vince Chhabria dismissed a copyright infringement lawsuit brought against Meta by a group of 13 prominent authors, including comedian Sarah Silverman and acclaimed writer Ta-Nehisi Coates. The authors had alleged that Meta unlawfully used their copyrighted books to train its advanced large language models (LLMs), such as Llama.

Judge Chhabria’s decision, however, was not a blanket endorsement of Meta’s practices. Instead, it largely hinged on the plaintiffs’ failure to sufficiently prove their case, particularly concerning “market harm” or “market dilution.” The judge stated that the authors “made the wrong arguments” and did not present compelling evidence that Meta’s AI outputs directly competed with or diminished the value of their original works. Arguments from the plaintiffs that Meta’s AI could reproduce exact snippets or hurt their ability to license their books to AI companies were deemed “clear losers.”

Crucially, Judge Chhabria emphasized that the ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.” He suggested that the use of copyrighted content for AI training could be unlawful “in many circumstances” and pointedly remarked that if using such works is necessary for AI products projected to generate “billions, even trillions of dollars,” then companies “will figure out a way to compensate copyright holders for it.” This narrow victory for Meta serves more as a procedural dismissal than a definitive legal precedent absolving the company of all future copyright challenges related to its AI training data. The ruling applies only to the 13 authors involved and does not prevent other creators from pursuing similar claims with stronger evidentiary arguments.

Claude AI.

Anthropic’s Fair Use Defense Upheld, But Piracy Remains a Hurdle

Just two days prior, on Monday, June 23, in a distinct but equally significant ruling, US District Judge William Alsup found largely in favor of Anthropic in a lawsuit brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. Judge Alsup ruled that Anthropic’s act of training its Claude chatbot on millions of copyrighted books constituted “fair use” under US copyright law because it was “quintessentially transformative.” He reasoned that the process of an AI system distilling information from vast quantities of text to generate new passages was akin to a human learning from books to create original works.

This aspect of the ruling marks a significant win for the AI industry, as it provides a judicial endorsement of the “transformative use” argument often put forth by AI developers. However, Judge Alsup drew a stark line when it came to the method of acquiring the training data. He explicitly stated that while the training itself might be fair use, Anthropic must still face a trial regarding its alleged acquisition of copyrighted books from “pirate websites” or “shadow libraries.”

Judge Alsup unequivocally stated, “Anthropic had no entitlement to use pirated copies for its central library” and that “Creating a permanent, general-purpose library was not itself a fair use excusing Anthropic’s piracy.” This means that despite the fair use finding for the training process, Anthropic could still face substantial damages—up to $150,000 per pirated work—for how it sourced some of its training material. This trial is reportedly scheduled for December.

Broader Implications: A Shifting Legal Landscape

These back-to-back rulings from the San Francisco federal court are among the first substantive judicial decisions on how copyright law, particularly the fair use doctrine, applies to the training of generative AI models.

Taken together, the rulings suggest a nuanced legal framework is beginning to emerge:

Transformative Use is Key: Training AI models on copyrighted works can indeed be deemed “transformative,” potentially qualifying for fair use protection.

Lawful Acquisition is Paramount: However, this fair use defense does not extend to the unlawful acquisition of copyrighted material. Judges appear firm on the principle that how data is obtained matters, and piracy will not be excused.

Market Harm Remains a Battleground: Future plaintiffs seeking to sue AI companies will need to present robust evidence of direct market harm or dilution caused by AI-generated outputs. Speculation or underdeveloped arguments will likely be insufficient.

Invitation for Future Cases: Both judges, while ruling in favor of the AI companies on certain points, subtly invited future, better-argued cases, especially those that can clearly demonstrate economic damage to creators.

The decisions set a precedent that could influence dozens of similar lawsuits currently pending against other major AI companies like OpenAI and Stability AI. Legal experts anticipate that appeals are virtually certain, indicating that the complex interplay between artificial intelligence innovation and long-standing copyright protections will continue to be a defining legal battleground in the years to come.
