New York Times Reporter Challenges Big Tech Over AI Training and Copyright Use

A fresh legal battle is unfolding at the intersection of journalism, publishing, and artificial intelligence, as a prominent New York Times reporter has taken some of the world’s most powerful technology companies to court. The lawsuit raises a fundamental question that has been quietly troubling writers and creators for years: where does innovation end and unauthorized use begin?

John Carreyrou, an investigative journalist widely respected for uncovering the Theranos scandal and author of the bestselling book Bad Blood, has filed a lawsuit in a California federal court against several major artificial intelligence players. The defendants include Google, OpenAI, Meta Platforms, Anthropic, Perplexity, and Elon Musk’s xAI. At the heart of the case is an allegation that these companies used copyrighted books without permission to train the large language models that now power popular AI chatbots.

Carreyrou is not alone in this legal fight. Five other writers have joined him, all accusing the companies of systematically copying and ingesting their books into AI systems. According to the complaint, these works were not licensed, paid for, or authorized in any meaningful way. Instead, the writers argue, their intellectual property was treated as raw material for commercial technologies that now generate massive profits.

For many authors, this case feels deeply personal. Writing a book is often the result of years of reporting, research, and lived experience. Carreyrou’s own career is an example of this investment. His reporting on Theranos did not just expose corporate fraud; it also reshaped public understanding of Silicon Valley’s culture of hype and unchecked ambition. Seeing such work allegedly absorbed into AI systems without consent has sparked anger and concern far beyond a single courtroom.

The lawsuit is part of a growing wave of copyright challenges facing the tech industry as AI tools rapidly expand. Writers, artists, musicians, and news organizations have increasingly questioned whether training AI on copyrighted material amounts to fair use or outright infringement. What makes this case stand out is its scope and strategy. It is the first lawsuit to name xAI as a defendant, and it deliberately avoids becoming a class action.


The writers involved argue that class action lawsuits often end up benefiting defendants more than plaintiffs. By bundling thousands of claims together, companies can negotiate a single settlement that may look large on paper but translates into relatively small payouts for individual creators. The complaint captures this frustration clearly, stating that “LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates.” This language reflects a broader fear that authors’ rights are being diluted in the rush to regulate AI at scale.

This concern is not theoretical. Earlier this year, Anthropic reached what was described as the first major settlement in an AI training copyright dispute. The company agreed to pay $1.5 billion to a class of authors who claimed their books had been pirated and used to train AI models. While the figure sounded substantial, critics quickly pointed out its limitations. According to the new lawsuit, individual class members in that case are expected to receive “a tiny fraction (just 2%) of the Copyright Act’s statutory ceiling of $150,000” per infringed work. For writers who see their books as both creative achievements and financial lifelines, that outcome felt deeply unsatisfying.

Carreyrou has been especially vocal about this issue. In a later court hearing related to the Anthropic case, he described the alleged copying of books as Anthropic’s “original sin” and argued that the settlement failed to reflect the seriousness of the violation. His words resonated with many in the publishing world, where there is growing unease about how easily years of work can be absorbed into machines that generate content in seconds.

The legal drama has also taken on an unusual personal dimension. The lawsuit was filed by attorneys at Freedman Normand Friedland, including Kyle Roche. Roche is not an anonymous legal figure; he was previously profiled by Carreyrou himself in a 2023 New York Times article. This overlap has added another layer of intrigue to the case, blurring the line between professional roles and past reporting relationships.

During a November hearing in the Anthropic class action, U.S. District Judge William Alsup criticized a separate law firm co-founded by Roche for encouraging authors to opt out of the settlement in search of what he described as “a sweeter deal.” Roche declined to comment on the new lawsuit, but the judge’s remarks highlight how contentious and complex these disputes have become. Courts are not just weighing legal definitions of copyright and fair use; they are also navigating competing strategies among plaintiffs and their representatives.

So far, the companies named in the lawsuit have not publicly responded to requests for comment. Such silence is not unusual in the early stages of high-profile litigation, but it underscores the uncertainty surrounding how the tech industry plans to defend its AI training practices. Many companies argue that large language models do not store or reproduce books in a traditional sense, but instead learn statistical patterns from vast amounts of text. Authors counter that this distinction feels abstract when the end result is a system capable of generating prose that closely mirrors the human writing it was trained on.

From a broader perspective, this case reflects a cultural shift in how society views data, creativity, and ownership. Artificial intelligence promises efficiency, accessibility, and new forms of expression, but it also forces a reckoning with the value of human labor behind the data. For journalists like Carreyrou, the issue is not opposition to technology itself, but resistance to a system that appears to reward innovation while sidelining the creators who made that innovation possible.

Public opinion on these lawsuits remains divided. Some see them as necessary checks on powerful corporations that moved too fast and asked for forgiveness later. Others worry that aggressive litigation could slow technological progress or entrench only the largest players who can afford licensing deals. What remains clear is that the legal and ethical rules governing AI training are far from settled.

Kristina Roberts

Kristina Roberts is a reporter and author covering a wide spectrum of stories, from celebrity and influencer culture to business, music, technology, and sports.

Influencer Magazine UK
