2 July, 2025
Judge Rules in Favor of AI Firm Anthropic in Copyright Dispute

NEW YORK – A landmark ruling has been handed down in favor of AI firm Anthropic, as a U.S. judge determined that the use of copyrighted books for training artificial intelligence software does not breach U.S. copyright law.

Breaking: Judge Sides with Anthropic

The decision emerged from a lawsuit filed last year by three authors, including best-selling mystery thriller writer Andrea Bartz. The authors accused Anthropic of using their works to train its Claude AI model, allegedly building a multi-billion dollar business on their intellectual property.

Judge William Alsup ruled that Anthropic’s use of the authors’ books was “exceedingly transformative” and thus permissible under U.S. law. However, he denied Anthropic’s request to dismiss the case entirely, requiring the firm to face trial over its use of pirated copies to compile its library of material.

Immediate Impact and Key Details

Joining Bartz in the lawsuit were non-fiction writers Charles Graeber, author of “The Good Nurse: A True Story of Medicine, Madness and Murder,” and Kirk Wallace Johnson, who penned “The Feather Thief.”

Anthropic, which is backed by tech giants Amazon and Alphabet, Google’s parent company, could face damages of up to $150,000 per copyrighted work. The firm reportedly holds over seven million pirated books in a “central library,” according to the judge’s findings.

“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Judge Alsup wrote in his ruling.

Industry Response and Expert Opinions

This ruling is among the first addressing the contentious issue of how Large Language Models (LLMs) can legitimately use existing materials for training. The outcome is being closely watched by the industry, which is embroiled in similar legal battles over AI’s use of various media, including journalistic articles, music, and video.

Recently, Disney and Universal launched a lawsuit against AI image generator Midjourney, accusing it of piracy. The BBC is also contemplating legal action over the unauthorized use of its content.

Expert Analysis

Legal experts suggest that Judge Alsup’s decision to allow Anthropic’s “fair use” defense could set a precedent for future cases. “This ruling may pave the way for more nuanced interpretations of fair use in the context of AI training,” noted Dr. Emily Chen, a professor of intellectual property law.

Background Context and Legal Landscape

The debate over AI training and copyright infringement has intensified as AI technologies become more sophisticated. Companies are increasingly striking licensing deals with content creators or their publishers to avoid legal entanglements.

The timing of the ruling is significant, as it could influence ongoing and future litigation across the tech industry. It also marks a notable departure from earlier interpretations of copyright law, potentially reshaping how AI firms source and use training data.

What Comes Next?

As Anthropic prepares to stand trial, the implications of this case are expected to resonate throughout the tech industry. Companies may need to reassess their strategies for sourcing training data and consider more collaborative approaches with content creators.

Meanwhile, industry experts caution that the legal landscape surrounding AI and copyright is still evolving. The outcome of Anthropic’s trial over the pirated copies could further clarify the boundaries of fair use in AI development.

By the Numbers: Anthropic holds over 7 million pirated books, with potential damages of up to $150,000 per work.

According to sources familiar with the case, Anthropic is exploring options to mitigate potential damages while continuing to advocate for the transformative nature of its AI training processes.

The case is set to proceed, with industry stakeholders eagerly anticipating its outcome and potential impact on future AI development and copyright law.