Turning It Off Isn’t Enough
What Educators Need to Understand About Social Media Algorithms
If you have ever told a student to “just switch to chronological order,” this study deserves your attention.
A new randomized field experiment published in Nature examined what happens when users switch between X’s algorithmic feed and a chronological one.
The findings are clear, and more nuanced than most headlines suggest.
Turning the algorithm on shifted political attitudes in a measurable conservative direction.
Turning it off did not reverse those shifts.
That asymmetry is the real story.
What the Researchers Did
Nearly 5,000 active U.S. users were randomly assigned to one of two settings:
The algorithmic “For You” feed
The chronological “Following” feed
They stayed in that setting for seven weeks in 2023.
This was not a survey about opinions.
It was a field experiment with real users inside a live platform environment.
Researchers measured:
Political attitudes
Policy priorities
Engagement
Account-following behavior
The actual content appearing in feeds
The study was conducted independently of the platform.
What Changed
The researchers did not find major shifts in partisan identity.
They did not find significant changes in affective polarization.
Instead, they found measurable changes in:
Which policy issues people prioritized
How they interpreted political investigations
How they viewed the war in Ukraine
Identity remained relatively stable.
Issue interpretation moved.
That distinction matters.
Why Turning It Off Didn’t Reverse the Effect
When users were exposed to the algorithmic feed, they saw:
More conservative content
More posts from political activists
Fewer posts from traditional news outlets
Many began following activist accounts.
When the algorithm was later turned off, they continued following those accounts.
Their feeds still reflected those networks.
The ranking system changed.
The follow network did not.
That is why the shift persisted.
Turning the algorithm off does not necessarily turn the influence off.
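To make that mechanism concrete, here is a minimal toy simulation. It is my sketch, not the researchers’ model: the account types, the boost factor, and the follow rates are all illustrative assumptions. It only shows the structural point, that follows made under a ranking system keep shaping a feed after the ranking is removed.

```python
import random

random.seed(42)

def make_accounts(n=200):
    """A pool of accounts, half news outlets, half activists (assumed split)."""
    return [{"id": i, "type": "activist" if i % 2 else "news_outlet"}
            for i in range(n)]

def ranked_feed(posts, boost=3.0):
    """Algorithmic feed: activist posts get an assumed engagement boost."""
    def score(post):
        base = random.random()
        return base * boost if post["type"] == "activist" else base
    return sorted(posts, key=score, reverse=True)

def simulate(weeks_ranked=7, follows_per_week=2):
    accounts = make_accounts()
    # Start with 20 random follows (~50% activist).
    followed = set(random.sample(range(len(accounts)), 20))

    # Phase 1: ranking on. Each week the user follows a few accounts
    # from the top of the ranked feed, which skews activist.
    for _ in range(weeks_ranked):
        feed = ranked_feed([a for a in accounts if a["id"] not in followed])
        for post in feed[:follows_per_week]:
            followed.add(post["id"])

    # Phase 2: ranking off. The chronological feed draws only from the
    # follow network built in phase 1, so the skew persists.
    feed = [a for a in accounts if a["id"] in followed]
    share = sum(a["type"] == "activist" for a in feed) / len(feed)
    print(f"Activist share of chronological feed: {share:.0%}")

simulate()
```

Under these toy assumptions, the chronological feed ends up noticeably more activist-heavy than the roughly 50/50 starting network, even though no ranking is applied in phase 2. That is the persistence effect in miniature.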
Why This Matters in Schools
Students are not just consuming content.
They are building networks.
We often teach:
Curate your feed.
Follow reliable sources.
Switch to chronological if needed.
This study suggests exposure influences who people choose to follow in the first place.
Once those follow decisions are made, they shape what feels important long after the ranking system changes.
This moves the conversation beyond misinformation.
It moves it toward system design.
Not Just About Polarization
One of the most interesting findings is what did not change.
The algorithm did not significantly increase partisan hostility.
Instead, it shifted issue priorities.
That means influence may operate through salience rather than identity.
For civic education, that is subtle and important.
If students’ political identities remain stable but their sense of what matters most shifts, the civic conversation changes quietly.
The Bigger AI Question
Recommendation systems now shape:
Social media feeds
Search rankings
Streaming platforms
AI chat responses
Personalized learning systems
If exposure shapes who users follow on social media, what might exposure shape inside AI-powered knowledge systems?
That is the AI literacy connection.
This study focuses on X.
The structural lesson applies much more broadly.
Important Limits
This experiment focused on active adult users.
The effects were statistically significant but modest.
Algorithms evolve constantly.
Political contexts differ globally.
Still, this is one of the strongest causal studies we have on feed algorithms.
It adds weight to a conversation that is often driven by assumption rather than evidence.
What I’m Still Thinking About
Would adolescents show stronger effects?
Does this mechanism operate differently in multi-party democracies?
How should schools address system-level influence without overreacting?
Is chronological ordering a meaningful safeguard once networks are established?
These are open questions.
They are worth discussing.
Closing Thought
Algorithms may not need to change who we are to influence what feels important.
They may only need to shape the networks through which information flows.
For educators and librarians, understanding that mechanism is no longer optional.
It is part of AI literacy.
Poll for Educators
After reading this study, what concerns you most? (Answer in the Comments)
Algorithmic amplification of certain viewpoints
The persistence of follow networks
The fact that turning it off did not reverse the effect
I’m not concerned but want more research
I need to learn more before forming an opinion
I am genuinely interested in how you are thinking about this.