The Most Dangerous AI Tool for Libraries Yet
How Class-Shelf Plus v3 quietly turns censorship into an automated workflow and why every librarian should be alarmed.
A few weeks ago, I wrote about how AI is being used in book censorship. I want to let you know about a new investigation that every librarian, educator, and district leader needs to read. Jason Koebler’s article, “AI Is Supercharging the War on Libraries, Education, and Human Knowledge”, exposes how CLCD’s new product, Class-Shelf Plus v3, is being positioned as an AI solution for “managing” classroom and school collections.
Here is the article:
https://www.404media.co/ai-is-supercharging-the-war-on-libraries-education-and-human-knowledge/
And here is the vendor’s press release:
https://librarytechnology.org/pr/31869
As someone who has experienced censorship, lost a job because of it, and writes extensively about AI in education and libraries, I find this trend deeply concerning. AI tools are now being built and marketed in ways that speed up the very processes used to remove books. We need to be clear about what that means.
When one person’s judgment becomes an algorithm
What is considered “questionable” to one person may not be to another. Librarians understand this. Our work is grounded in professional review, community standards, developmentally appropriate practice, and a commitment to intellectual freedom.
AI systems do not understand nuance.
They follow instructions.
They mirror the criteria they are given.
And right now, those criteria are being shaped by political pressure. When you tell an algorithm to flag titles for “sensitive content” or “potentially controversial themes” in today’s climate, it will overwhelmingly identify books about LGBTQ lives, race, immigration, and honest history. That is not speculation. It is the predictable outcome of building a screening tool inside an environment where those topics are already under attack.
Once the criteria exist, the model applies them at scale with no context and no understanding of student needs. A book becomes a pattern match. A life experience becomes a risk signal.
“It’s going to be used to remove books.”
The article includes an interview with Jaime Taylor, discovery and resource management systems coordinator for the W. E. B. Du Bois Library at UMass Amherst. Her warning is one that every librarian should take seriously:
“I am seeing a tool that’s going to be used for censorship because this large language model is ingesting all the titles you have, evaluating them somehow, and then it might spit out an inaccurate evaluation. Or it might spit out an accurate evaluation and then a strapped-for-time librarian or teacher will take whatever it spits out and weed their collections based on it. It’s going to be used to remove books from collections that are about queerness or sexuality or race or history. But institutions are going to buy this product because they have a mandate from state legislatures to do this, or maybe they want to do this.”
Her point is exactly right. Even if an AI system produces inaccurate results, the risk is that someone under pressure or short on time will accept them as legitimate. And if the tool produces accurate results based on biased criteria, the outcome is simply more efficient censorship.
Why this moment should concern all of us
Class-Shelf Plus v3 is being sold as an efficiency tool, with claims that it can reduce manual review workloads by “more than 80 percent.” That framing ignores the bigger issue. When AI provides the list, the burden shifts to the human to prove why a book should stay. That is the reverse of how ethical collection development works.
This is especially dangerous for students who already see their identities challenged in public discourse. When the same stories that are targeted ideologically also become the ones flagged algorithmically, the result is not neutrality. It is amplified harm.
My concern, as both a librarian and someone who writes about AI
This work is personal. I have lived through censorship efforts. I have been the target of organized pressure campaigns. I have lost a job for defending students’ right to read. And I also spend a great deal of time studying AI’s role in education.
From that vantage point, tools like Class-Shelf Plus v3 are not just concerning. They are a direct threat to the foundational values of our profession. They turn political anxieties into automated workflows. They normalize the idea that librarians should follow an algorithm rather than exercise professional judgment. They frame removal as compliance, not censorship.
We need to say this plainly. AI systems built to “flag” books are not neutral.
They replicate the worldview they are built on. And right now, that worldview is hostile to many of the students we are supposed to serve.
A call to action
Read the article. Share it with colleagues. Ask your district if they are considering tools like Class-Shelf Plus v3. Push for policy before procurement, and insist on transparency and safeguards when AI intersects with collection development.
A reader on BlueSky just sent me a tip I need to add here: this is CLCD's contact information, if you choose to let them know how concerned you are about this product:
Help: help@clcd.com
Sales: sales@clcd.com
Phone: (888) 611-2523
Fax: (888) 611-2524
Above all, do not let automated systems replace the careful, human work that librarians do every day. Our students deserve better than an algorithm that decides which stories are safe enough for them to read.



I do not work in a state with legislation banning certain books, but AI is still affecting my collection. I work in a middle school library and have a new administrator. He is using AI to check over my orders before I send them in. We have settled on a process: he sends me the books the AI flagged that he wants to know more about, and I show him the book reviews indicating they are appropriate for our students and relevant to our collection. So far it is working okay, but I do not like the direction things are going. We should all be concerned.