Last week, Anthropic, the AI company behind the Claude chatbot, settled a landmark class-action lawsuit for $1.5 billion. The sum is enormous by the standards of copyright litigation, yet it amounts to just a fraction of Anthropic’s estimated $183 billion valuation.
Authors and publishers, led by writers Andrea Bartz and Charles Graeber, accused Anthropic of violating copyright law by downloading millions of pirated books from shadow libraries like Library Genesis to train Claude. The settlement will compensate authors and publishers for roughly 500,000 affected works at about $3,000 per work. While Anthropic did not admit liability, it agreed to destroy the pirated files and pay the rights holders, avoiding a trial. The Authors Guild hailed the outcome as a precedent for licensing content in AI development.
This case raises questions about property rights in the age of Large Language Models (LLMs). Courts have so far held that training models on lawfully acquired texts to generate new outputs qualifies as fair use, but the Anthropic lawsuit hinged on the piracy itself, not the training process. What should the law say about compensating authors whose works indirectly fuel AI innovation? The answer could shape not just fairness but the future quality of AI.
The term “AI slop” increasingly describes low-quality, machine-generated text produced with minimal human oversight. If human writing ceases to be a viable career due to inadequate compensation, will LLMs lose access to fresh, high-quality training data? Could this create a feedback loop where AI models, trained on degraded outputs, stagnate? This dilemma mirrors the classic “access versus incentives” debate in intellectual property law: Access to a rich corpus of human-written text today enables entrepreneurs to build powerful, affordable LLMs. But without incentives for human authors to keep producing, the well of quality training data could run dry.
This case also blurs the traditional divide between copyright and patents. Copyrighted material, once seen as static, now drives “follow-on” innovation derived from the original work. That is, copyright protection now shapes AI-generated content built on the copyrighted material, much as patent protection has long shaped new technology built on earlier technical inventions. Thus, “access versus incentives” theory applies to copyright as much as it has traditionally applied to patents. The Anthropic settlement signals that intellectual property law, lagging behind AI’s rapid evolution, must adapt. Authors may deserve compensation, but halting AI progress to resolve legal disputes risks stifling innovation.
At $1.5 billion, the settlement’s size sends a clear message: bypassing legal channels could be costly. This could deter smaller AI firms from entering the market, especially as similar lawsuits loom against other companies. The precedent may push developers toward licensing deals or public domain data, raising costs and potentially concentrating the AI industry among deep-pocketed players like Anthropic, backed by billions in funding. Smaller startups, unable to afford licensing or litigation, may struggle. This would become a case of regulatory barriers favoring incumbents. Could Anthropic’s willingness to pay such a hefty sum reflect a strategic move to fortify a moat around well-capitalized AI firms, discouraging upstarts?
In a 2024 post, I speculated that AI companies, flush with cash, might strategically hire writers to replenish the commons of high-quality text. In that post, I wrote:
“AI companies have money. Could we be headed toward a world where OpenAI has some paid writers on staff? Replenishing the commons is relatively cheap if done strategically, in relation to the money being raised for AI companies.”
The Anthropic settlement partly validates this idea. In an AI arms race in which Mark Zuckerberg spends millions luring engineers from OpenAI, $1.5 billion looks like a modest price for a shot at establishing AI dominance.
For now, the Anthropic case marks a pivotal moment. It underscores the need for a balanced approach and sets the stage for how AI and intellectual property law will coexist in an era of unprecedented technological change.
Then again, LLMs might eventually reach a take-off point where they are so intelligent and agentic that they no longer need new input from humans. That is a horizon beyond which I cannot see.
Joy Buchanan is an associate professor of economics at Samford University. She blogs at Economist Writing Every Day.