Stop Policing AI. Start Teaching Literacy.
Student AI guidance from the Modern Language Association (MLA), with classroom-ready routines for schools
Based on guidance from the Modern Language Association of America, publisher of MLA Style.
If you are a librarian or educator trying to “deal with AI,” you are probably hearing three competing messages:
Use it, because students will need it.
Ban it, because it is cheating.
Ignore it, because the hype will pass.
None of those is a plan.
Generative AI is already shaping how students read, write, research, and study. The question is no longer whether students will use AI. They already are.
The real question is whether we will teach them to use it with judgment.
This Special Edition is grounded in two under-shared but high-credibility documents from the Modern Language Association of America and the MLA-CCCC Joint Task Force:
Student Guide to AI Literacy (Modern Language Association of America)
Working Paper 3: Building a Culture for Generative AI Literacy (MLA-CCCC Joint Task Force)
What is striking is how few people seem to have seen these compared to the Brookings report, which was widely reported and shared (and which I wrote about here). That matters because the MLA guidance is not written for headlines. It is written for learning.
What this is (and what it is not)
Let’s be direct.
This is not an argument for banning AI.
This is not an argument for forcing AI into every assignment.
This is an argument for AI literacy, which means schools teach students to:
think critically about AI output
verify claims with real sources
protect their privacy
disclose use honestly
avoid dependence that weakens learning
The Student Guide emphasizes that ethical and effective AI use is an essential skill, but that it also creates risks like academic misconduct and loss of foundational skills in reading, writing, and research.
The AI Literacy Framework Students Actually Need
The Student Guide lays out seven competencies. This is one of the best “at-a-glance” frameworks I’ve seen because it is clear, student-centered, and practical.
1) Understand how generative AI works (basics, not buzzwords)
Students should be able to define generative AI, explain what a large language model is, and understand that these systems rely on prediction and human intervention.
Why librarians care: Students treat AI like a search engine with opinions. It is not.
2) Follow policies and disclose AI use
Students should follow classroom and school rules and properly cite or attribute AI contributions.
The Task Force paper reinforces transparency approaches like footnotes, appendices, or documentation of prompts and outputs.
The shift: Replace “Did you cheat?” with “Show your process.”
3) Prompting is a skill, not a shortcut
The Student Guide frames prompting as something students refine through experimentation and practice.
Classroom reality: Students need to learn that the first answer is rarely the best answer.
4) Evaluate AI output for relevance and accuracy every time
Students must verify outputs against credible sources and recognize when AI is not appropriate for the task.
The Task Force working paper also warns that AI can produce confident falsehoods that sound true.
Library translation: AI can produce language. It cannot produce reliable evidence.
5) Monitor learning and avoid dependence
Students should be able to explain why they used AI and reflect on creativity and growth, especially to avoid overreliance.
The Task Force working paper also describes overreliance as a real learning risk.
What’s at stake: Students can “complete” school while quietly losing skill.
6) Remember: GenAI is not human communication
One of the strongest reminders in the Student Guide is that written communication happens between human writers and readers, even if AI supports the process.
In plain terms: A polished paragraph means nothing if the student cannot explain it.
7) Understand AI harms: environment, labor, bias, privacy
This is where the Student Guide goes further than most school policies. It includes harms and risks that are not academic, but are absolutely ethical: environmental impact, labor impact, social bias, and privacy and data security risks.
The Task Force paper reinforces the importance of addressing bias and privacy concerns and helping students think critically about how their data may be used.
This matters: AI literacy is not only academic integrity. It is information ethics.
What schools should stop doing
Stop pretending AI detection tools are a solution
If your school’s AI strategy is built on detection, you are building a culture of suspicion.
The Task Force paper highlights the limitations of detection tools and flags research showing that detectors can unfairly flag multilingual writers and non-native English speakers.
If our response to AI creates new inequities, we are not solving the problem. We are multiplying it.
What to do instead (3 practical moves that work this week)
You do not need a new program. You need new routines.
Move 1: Teach the “AI Output Audit” (15 minutes)
Goal: Students practice evaluation as a default habit.
Provide a teacher-generated AI paragraph on a class topic.
Students label sentences:
Accuracy: What must be verified?
Relevance: Does it meet the assignment?
Bias: What assumptions are present?
Students must verify two claims with credible sources.
Move 2: Require an “AI Use Statement”
Copy and paste this into any assignment directions:
AI Use Statement (student copy/paste):
I used (tool name) for: ____________________.
I used it at this stage: ____________________.
I verified information by: ____________________.
I changed the output by: ____________________.
Move 3: Use the “Before you use AI” checklist (1 minute, every time)
Put this on a slide or post it in the library:
Before you use AI, ask:
What am I responsible for learning here?
Am I using AI to support thinking, or replace it?
What claims do I need to verify with real sources?
What personal information am I sharing?
A school-ready micro-policy you can reuse
If you need one paragraph that is clear and enforceable, use this:
Students may use generative AI for brainstorming, outlining, drafting support, revision suggestions, and study help when permitted. Students must disclose AI assistance and remain responsible for originality, accuracy, evidence, and citations.
If your school is still treating generative AI as either “cheating” or “the future,” you are going to miss what actually matters.
AI literacy is not a tool decision. It is a learning design decision.
And if we want students to build real competence, we have to stop relying on vague warnings and start giving them repeatable routines that:
protect learning
build ethical judgment
increase transparency
reduce academic integrity conflicts
support equity for multilingual writers and students with disabilities
This section gives you ready-to-use language and frameworks you can implement immediately.
Preview Before Paywall (Read this even if you skim)
If your school is still debating whether to “ban AI” or “embrace AI,” you are already behind the reality students are living in.
Students are using generative AI for homework, studying, writing, and research whether we teach it or not. That means the real risk is not that students have access to AI. The real risk is that they are using it without literacy skills and without adult guidance.
And here is the most important part: AI literacy is not a tech initiative. It is a literacy and equity issue.
When schools respond with surveillance, vague warnings, or inconsistent rules, the results are predictable:
students hide their process instead of learning from it
honest students get punished for unclear expectations
multilingual learners face higher risk of unfair suspicion
teachers burn out trying to enforce rules that do not match reality
librarians get pulled into “AI policing” instead of teaching research and evaluation
Everything above gives you the big picture and the “why.”
Below is the part that actually changes practice.
In the full paid section, you will get copy/paste resources you can implement tomorrow morning, including:
A Student AI Literacy Checklist written in student-friendly language
A ready-to-use AI Use Statement that reduces cheating arguments and builds transparency
A 4-level rubric that separates AI use that supports learning from AI use that replaces learning
Six assignment redesign “swaps” that stop AI misuse without banning AI
Three mini-lessons that fit into library instruction, ELA, and content area research projects
School-ready policy language written for students, teachers, and libraries
If your staff is stuck, overwhelmed, or divided, this is the section you will actually share.