In June 2025, two federal judges delivered rulings that established AI training on copyrighted content as fair use — meaning AI companies can use creators’ work without permission or payment. Judge Chhabria reluctantly granted Meta summary judgment, warning this approach would likely be illegal “in most cases.” Judge Alsup found Anthropic’s training was transformative fair use, though he separately ruled their use of pirated books was copyright infringement. The core issue: billions of dollars’ worth of creative work now powers AI systems without creators receiving compensation.

The June rulings: fair use without payment

Sarah Silverman had reasonable grounds to expect success: a successful comedian and author suing a tech company that had used her copyrighted work to train AI without permission or payment.

“Your life’s work powers a system worth $90 billion, but that’s considered acceptable.”

On June 25, 2025, Judge Vince Chhabria granted Meta summary judgment in Kadrey v. Meta, ruling that using copyrighted books to train AI constituted fair use. The core issue wasn’t piracy — it was whether AI companies could use any copyrighted content without compensating creators. The court said yes, they could.

But Judge Chhabria was openly critical, stating that the plaintiffs “made the wrong arguments.” More significantly, he warned that when courts are asked whether AI training on copyrighted works is illegal, “in most cases the answer will likely be yes.” He emphasised: “this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

Two days earlier, Judge William Alsup delivered a similar ruling in Bartz v. Anthropic. The primary question: could Anthropic use copyrighted books to train AI without paying authors? The answer: yes, if the use was transformative enough. Judge Alsup found the AI training constituted fair use.

However, Alsup separately addressed Anthropic’s sourcing methods. The company had used millions of pirated books, which the judge ruled was copyright infringement. The distinction is telling: whilst courts might accept AI training as fair use, Anthropic’s reliance on pirated libraries showed how little concern the company had for how it acquired the material in the first place.

Both rulings established the same core principle: AI training on copyrighted content can constitute fair use, meaning creators don’t get paid even when their work directly powers billion-dollar AI systems. The piracy issues were secondary legal problems, but they revealed how little AI companies worried about content sourcing when building their training datasets.

The economics of free content

The numbers make the situation clear. OpenAI’s valuation hit $90 billion. Microsoft gained over a trillion dollars in market capitalisation from AI investments. All built on copyrighted content from creators who receive nothing under these fair use rulings.

“When paying creators would affect profits, they simply took the content for free.”

Court documents revealed that Meta actually tried licensing books but abandoned the effort when publishers wanted terms that weren’t “scalable.” This wasn’t about legal versus illegal sources — it was about whether to pay creators at all. When payment would affect profits, they took the content for free and argued fair use.

Who gets paid and who doesn’t

The situation becomes complex when examining who actually receives compensation. Individual creators get nothing under fair use rulings, but major publishers are securing substantial licensing agreements. News Corp reportedly has a deal worth tens of millions per year with OpenAI. The Associated Press, Financial Times, and Reuters have all signed agreements.

“If you can’t bundle millions of pieces of content, you’re simply not worth AI companies’ time to negotiate with.”

The difference comes down to scale, leverage, and legal resources. Major publishers control millions of articles and can afford years of legal battles. Individual creators have none of these advantages. The system creates two classes: large publishers who negotiate payment, and everyone else whose work gets used without compensation under fair use doctrine.

Why fair use feels unfair to creators

“The legal system that’s supposed to protect intellectual property has shifted away from protecting individual creators.”

These fair use rulings expose the complete imbalance of power between individual creators and AI companies. Individual authors face billion-dollar companies with unlimited legal budgets. The courts prioritise technological innovation over creator compensation, even when that technology directly profits from creators’ work.

Authors Guild CEO Mary Rasenberger captured the frustration: “We disagree with the decision that using pirated or scanned books for training large language models is fair use.” But disagreement doesn’t change the reality. The legal precedent now allows AI companies to use copyrighted content without payment, calling it transformative fair use.

Some people are still fighting back

Not every story ends in defeat. The New York Times took a different approach entirely. Instead of arguing about whether AI training was fair use, they focused on what ChatGPT actually produces. They demonstrated that the AI reproduces “near-verbatim excerpts” of their articles. Judge Sidney Stein allowed their case to proceed.

The one case where creators actually won

In February 2025, something unusual happened. A creator actually beat an AI company in court. Thomson Reuters defeated Ross Intelligence because Ross used Westlaw’s legal headnotes to build a direct competitor.

“Courts still recognise direct competition as problematic, even if they’re comfortable letting AI companies use content for broader purposes.”

Judge Stephanos Bibas ruled this wasn’t transformative. “Ross took the headnotes to make it easier to develop a competing legal research tool,” he wrote. The distinction matters. Ross wasn’t building general-purpose AI — they were using Thomson Reuters’ work to compete directly with Thomson Reuters.

Professional organisations continue the fight

Professional organisations continue fighting collective battles. The Authors Guild advocates for writers’ rights despite multiple court losses. Music industry groups have filed lawsuits against AI music generators. Visual artists organise through groups like the Concept Art Association.

Some AI companies are beginning to acknowledge concerns. Smaller firms that can’t afford massive legal battles are pursuing licensing deals voluntarily. Some implement opt-out systems allowing creators to exclude their work from training data.

European opt-out rights offer real protection for creators

Whilst US courts embrace AI-friendly interpretations of fair use, European jurisdictions offer meaningful protection for content creators. The EU AI Act requires AI companies to implement opt-out technologies and honour creator requests to exclude their work from training.

“European legislation provides proactive protection. You don’t need to sue after the fact; you can prevent unauthorised use before it happens.”

Under Article 53, AI companies must establish copyright policies and use technology to honour opt-out requests. By August 2025, they must provide detailed summaries of training data sources. The penalties for non-compliance are substantial, making it economically sensible to respect opt-out requests.

How the opt-out system works

Creators can protect their work using several technical standards:

  • Robots.txt files: Website-level blocks preventing AI crawlers from accessing content
  • TDM Reservation Protocol: JSON files allowing specific restrictions on AI training activities
  • HTML metadata tags: Instructions embedded in content prohibiting AI training
  • Do-Not-Train registries: Central databases where creators register works for exclusion
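As a rough illustration of how the first three signals look in practice, a site owner might combine them as below. The crawler names (GPTBot, CCBot, Google-Extended) are published AI crawler user agents, and the file layout follows the TDM Reservation Protocol draft, but support for each signal varies by crawler, so treat this as a sketch rather than a guaranteed block:

```
# robots.txt (served at the site root): asks known AI crawlers not to fetch content
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# /.well-known/tdmrep.json: TDM Reservation Protocol file reserving
# text-and-data-mining rights for the whole site
[
  { "location": "/", "tdm-reservation": 1 }
]

# Per-page HTML metadata tag carrying the same reservation
<meta name="tdm-reservation" content="1">
```

None of these signals physically prevents scraping; they are machine-readable reservations that compliant crawlers — and, under the EU rules discussed above, legally obligated ones — are expected to honour.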

The UK government is moving towards similar protections. A December 2024 consultation proposing AI training exceptions unless rights holders opt out received over 11,500 responses.

A potential solution: Real-time source attribution

One technical solution that could address creator recognition involves implementing real-time source attribution in AI responses. Instead of fighting about training data legality, AI systems could cite specific sources that informed each output, similar to academic papers.

“Instead of fighting legal battles about training data, creators could focus on producing high-quality content that AI systems want to cite.”

Perplexity AI demonstrates how this works, providing specific citations for each claim. MIT’s Data Provenance Initiative is developing tools that trace dataset lineage, allowing practitioners to identify which sources contributed to outputs.
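A minimal sketch of what per-response attribution could look like, in Python. Everything here (the `Source` type, the `attach_citations` helper) is hypothetical and for illustration only; real systems like Perplexity resolve sources at retrieval time rather than appending them afterwards:

```python
# Illustrative sketch of per-response source attribution (hypothetical API).
from dataclasses import dataclass


@dataclass
class Source:
    """A creator's work that informed a generated answer."""
    title: str
    author: str
    url: str


def attach_citations(answer: str, sources: list[Source]) -> str:
    """Append numbered, academic-style citations to a generated answer."""
    if not sources:
        return answer
    refs = "\n".join(
        f"[{i}] {s.title}, {s.author} ({s.url})"
        for i, s in enumerate(sources, start=1)
    )
    return f"{answer}\n\nSources:\n{refs}"


result = attach_citations(
    "Fair use rulings in 2025 split on training versus sourcing.",
    [Source("Bartz v. Anthropic ruling summary", "Example Author",
            "https://example.com/bartz")],
)
print(result)
```

The interesting design problem is upstream of this formatting step: deciding *which* sources actually influenced an output, which is what dataset-lineage tools aim to make tractable.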

For creators, this offers recognition. When their work influences an AI response, they get credited. Users can discover original creators. This creates pathways for audience building and monetisation through recognition rather than direct licensing payments.

The challenge lies in developing attribution systems that credit creators without compromising AI companies’ business models. Industry groups support transparency but emphasise protecting trade secrets. EU transparency requirements might make attribution systems more attractive than broader training restrictions.

What this means moving forward

The 2025 rulings established that AI training on copyrighted content can constitute fair use without creator compensation. Individual creators have limited recourse when their work powers billion-dollar AI systems, but alternative strategies are emerging that could provide recognition and value.

The resistance continues. The New York Times case could establish new precedents about AI outputs. Professional organisations are building collective bargaining power. European legislation provides proactive protection. Attribution technologies could create new pathways for creator recognition.

The reality for creators

“We’re experiencing a substantial transfer of creative value. Billions of dollars’ worth of human creativity now powers AI systems that generate enormous profits for a handful of companies.”

We’re experiencing a substantial transfer of creative value under the banner of fair use. Billions of dollars’ worth of human creativity now powers AI systems that generate enormous profits for a handful of companies, whilst original creators receive nothing. The legal system has endorsed this approach, prioritising innovation over creator compensation.

For most individual creators, the situation feels discouraging. You can’t afford to sue trillion-dollar companies. You can’t prove the specific type of market harm courts require. You can’t compete with publishers who bundle millions of articles and have leverage to negotiate actual licensing deals.

“Individual creators may feel powerless, but they’re not alone. Sometimes persistence is all you have.”

But resistance continues. Professional organisations fight collective battles. Some AI companies choose voluntary licensing over fair use arguments. International pressure builds. Alternative legal strategies emerge. Individual creators may feel powerless against fair use rulings, but they’re not alone. Sometimes persistence is all you have: keep working, keep creating, and keep supporting the organisations that fight for your rights.

By Ben
