
The $1.5 Billion Distraction: Why Bartz v. Anthropic Fails Authors


The Bartz v. Anthropic judgment in September has been on my mind lately. At first pass, it is a clear win for writers whose works were used to train Anthropic’s models. The proposed settlement of $3,000 per copyrighted work used is a figure that gives a lot of authors pause. That said, as I’ve had time to sit with it and move past the initial excitement over the figures, I am haunted by the precedent this ruling establishes.


The initial complaint of the class action lawsuit was twofold. First, that the use of their books to train Anthropic’s LLMs could result in works that compete with and displace demand for the authors’ books. Second, that Anthropic’s unauthorized use had the potential to displace an emerging market for licensing writers’ works as LLM training material.


The court’s ruling focused on the first issue and essentially ignored the second. It was made entirely on the basis of what constitutes “fair use” in the context of the US legal system. At issue were two different types of training material. The first source Anthropic drew on was pirated digital copies of books; the $3,000-per-book judgment relates entirely to those copies. A nice summary of the court’s findings is presented below:

“The court reached a different conclusion with respect to the pirated copies. Because Anthropic never paid for the pirated copies, the court thought it was clear the pirated copies displaced demand for the authors’ works, copy for copy. The fact that pirated copies would later be used for a purpose the court found to be transformative — training LLMs — did not dissuade them against finding no fair use.” (“Landmark Ruling on AI Copyright: Fair Use vs. Infringement in Bartz v. Anthropic,” ArentFox Schiff)

What this shows is that the courts identified a breach of the current fair use doctrine, and that this settlement is meant to redress that violation of copyright law. Importantly, however, the second pathway to digital training material, the scanning of used physical books, was deemed fair use of the copyrighted work. This still seems crazy to me. Open any book to the copyright page and the list of rights is clearly spelled out. The exact text varies, and some newer books even carry explicit prohibitions on use for machine learning training. I pulled the text below from a copy of Inversions by Iain M. Banks, originally published in 1998:

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or any means, without the prior permission in writing of the publisher, nor be otherwise circulated in any form of binding or cover other than that with which it was published and without a similar condition including this condition being imposed upon the subsequent purchaser.

I really do not understand how the courts can look at rights statements like this and decide that scanning a book into a retrieval system, and reproducing it as training data, does not violate the stated rights of the author and publisher. But in doing so, they have opened Pandora’s box. In its current form, the ruling is a slap on the wrist: a company currently valued at over $180 billion is being asked to pay a $1.5 billion settlement. In August 2025, Anthropic announced that its run rate, or forecasted annual revenue, is $5 billion. The settlement is therefore a drop in the bucket.
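To make “drop in the bucket” concrete, here is a quick back-of-the-envelope calculation, a minimal sketch in Python using only the figures cited above (the valuation and run-rate numbers are as publicly reported, not audited):

```python
# Back-of-the-envelope: how big is the settlement relative to Anthropic's
# finances? All figures are the ones cited in this post.
settlement = 1.5e9   # total settlement, USD
valuation = 180e9    # reported company valuation, USD (over $180 billion)
run_rate = 5e9       # announced annual run rate as of August 2025, USD
per_work = 3_000     # proposed payment per copyrighted work, USD

print(f"Share of valuation:          {settlement / valuation:.1%}")   # ~0.8%
print(f"Share of one year's revenue: {settlement / run_rate:.0%}")    # 30%
print(f"Implied works covered:       {settlement / per_work:,.0f}")   # 500,000
```

By these numbers, the payment is less than one percent of the company’s reported valuation: a one-time cost on the order of a few months of revenue.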


More alarmingly, if this judgment is the final word on fair use, then the pathway for future LLM training is clear. If used-book scanning is allowed to generate training data, it will become the default pathway for these companies. Physical books written before 2022 hold a unique value for LLM training: because these works contain no LLM output, they let companies avoid training on data that already incorporates AI-generated text. In a sense, printed books provide rare virgin material inputs to an industry suffering from hallucinations caused by training on data rife with AI input.


This future excludes authors, plain and simple. The judgment payment is a one-time event; the future does not hold $3,000-per-work payments for writers. Instead, authors can expect their work to be ripped from its covers and fed to machine learning algorithms. This ruling is a harbinger of dark times ahead for authors, rather than a bright future of responsible and compassionate stewardship of authors’ rights to their own creations.


As always, I think the most damning part is that savvy people in tech surveyed the book landscape and identified, correctly, that used books are the weak spot in rights management. They found a way to exploit authors’ work without paying for it. If this ruling stands, that exploit goes from an educated guess to a confirmed pathway in the US legal system. The best defense would be legislation extending authors’ rights to cover uses of used copies. In the absence of such legislation, the best thing we can do as readers is buy from sellers that prioritize author support in one way or another.

