💥 Something big just happened—and not enough people are talking about it.
The U.S. Copyright Office has released a sweeping three-part report series on artificial intelligence, covering everything from deepfakes to AI-generated content to whether AI companies can legally use copyrighted materials to train their models. It’s the most important government action we’ve seen yet on AI and copyright—and it directly affects educators, librarians, students, and creators.
But just one day after the Office released a pre-publication version of that final installment, something alarming happened:
The head of the Copyright Office, Shira Perlmutter, was abruptly fired. No reason was given. The timing couldn’t be more suspicious.
Let’s unpack what this three-part report says—and why this sudden dismissal matters more than ever.
Here is the complete Copyright Office report, compiled into a single document for your convenience. While I would typically link to the Copyright Office website, I've downloaded and included it here, given the current climate.
Part 1: Digital Replicas – AI Can Steal Your Face, Voice, and Identity
The first report tackles AI-generated deepfakes, voice cloning, and digital doubles—tools that can replicate real people with stunning (and disturbing) accuracy.
Key Findings:
There’s no federal law that protects everyday people from having their face or voice used by AI without consent.
The Copyright Office recommends a new federal digital replica law, a right-of-publicity-style protection that applies to everyone, not just celebrities.
That law would require consent before your likeness could be digitally cloned and distributed, and it wouldn't be limited to commercial uses.
Why it matters:
From fake student voice memos to AI-generated political ads, this report shows how anyone can be impersonated using AI. Educators, librarians, and students are all at risk without better protections.
Part 2: Copyrightability – Can AI Be an Author?
The second report asks a big question: Can a work created by AI be copyrighted?
Short answer:
No.
Copyright law only protects human creativity. AI can’t hold a copyright, and you can’t claim one unless you significantly shaped the final output.
Simply typing a prompt into ChatGPT or DALL·E? Not enough.
AI-assisted works can be copyrighted only where a human adds meaningful creative input, such as editing, arranging, or reworking the output, and even then protection covers just those human contributions.
Why it matters:
This is crucial for students, teachers, and content creators using AI. It clarifies what counts as authorship and what doesn’t. AI is a tool—not a creator. And if you rely on it too much, you may not be able to claim ownership of your work.
Part 3: AI Training & Copyright – Can They Use Your Work Without Asking?
The final and most controversial report tackles how AI models are trained—including whether it’s legal to use copyrighted material without permission.
The Copyright Office’s position:
It's complicated. There's no blanket answer: whether it's legal to use copyrighted books, songs, or artwork to train AI models depends on a case-by-case fair use analysis.
Tech companies claim training is fair use, but rights holders (like The New York Times) strongly disagree and are suing.
The Office warns that commercial training on pirated or market-competing works likely goes beyond established fair use boundaries, yet it stops short of calling for new legislation, saying government intervention would be premature while licensing markets develop and the courts work through the pending cases.
Why it matters:
AI tools may have already been trained on your writing, lesson plans, library guides, or curriculum materials. This report raises serious questions about who profits from that work—and whether creators deserve compensation or control.
Final Thoughts: This Isn’t Just Policy—It’s Power
These three reports lay the legal foundation for how AI will shape our future. They’re nuanced, thoughtful, and urgently needed. But the sudden firing of Shira Perlmutter, who led this work, sends a chilling message: When institutions speak truth to power, power sometimes bites back.
If you care about authorship, identity, and the integrity of the creative process, this moment matters.