Last fall, we featured an extensive interview with Petter Törnberg of the University of Amsterdam, who studies the underlying mechanisms of social media that give rise to its worst aspects: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. He wasn’t optimistic about social media’s future.
Törnberg’s research showed that, while numerous platform-level intervention strategies have been proposed to combat these issues, none are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.
Törnberg has been very busy since then, producing two new papers and one new preprint building on this realization that social media is structured quite differently from the physical world, with unexpected downstream consequences. The first new paper, published in PLoS ONE, focused specifically on the echo chamber effect, using the same approach as before: combining standard agent-based modeling with large language models (LLMs)—essentially creating little AI personas to simulate online social media behavior.
Those simulated users were randomly assigned one of two opposing opinions and then interacted with randomly selected members of a simulated online community. If the proportion of community members who disagreed with a given simulated user exceeded a set threshold, that agent was programmed to leave and join a different online community.
Filter bubbles: Not a culprit, but a cure
Consistent with last year’s results, echo chambers emerge naturally from the basic architecture of social media platforms. “One surprising finding is the fact that we get echo chambers even without any filter bubbles, even if people really love being in diverse spaces,” said Törnberg. “You don’t need an algorithmic nudge. You can still get these highly segregated spaces. The other surprising finding is that filter bubbles, which have been blamed for homogeneity, can be a cure.”
It doesn’t take much to destabilize or stabilize the system, Törnberg found. Even if the threshold for disagreement was quite low, disagreements were amplified to the point that each random interaction was increasingly likely to exceed the threshold. More and more users were pushed to relocate until what was once a community with a solid diversity of opinion rapidly became polarized and/or overly homogenous.
Conversely, if just 10 percent of users in a given social media community largely agree with your stances, you will be more tolerant toward diverse opinions that contradict your own. “There’s a certain chance that some users will end up in communities where it’s very homogenous and 99 percent of users are disagreeing with them,” said Törnberg. “That will cause them to leave, and you get this feedback effect just because of the structure of interaction. But if you have a filter bubble effect, where everyone is shown 10 percent of their own type, that creates a possibility for you to find the people who you agree with within the community. And that stabilizes the entire dynamics so it doesn’t tip over to one side or the other and become extreme or overly homogenous.”
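The tipping dynamic Törnberg describes can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not the paper's actual model (which uses LLM personas rather than simple binary agents): agents hold one of two opinions, sample random members of their community, and relocate when disagreement exceeds a tolerance threshold. A `bubble` parameter guarantees each agent a fraction of like-minded voices in its sample, standing in for the stabilizing filter-bubble effect he describes.

```python
import random

def simulate(n_agents=500, n_communities=10, threshold=0.6,
             sample_size=10, bubble=0.0, steps=2000, seed=1):
    """Toy threshold model of community sorting (illustrative only).

    Each agent holds opinion +1 or -1 and belongs to one community.
    Each step, a random agent samples `sample_size` peers from its
    community; if the disagreeing share of the sample exceeds
    `threshold`, the agent moves to a random other community.
    `bubble` is the fraction of the sample guaranteed to agree with
    the agent (a crude stand-in for a filter bubble).
    Returns mean opinion imbalance across communities (0 = perfectly
    mixed, 1 = fully segregated).
    """
    rng = random.Random(seed)
    opinions = [rng.choice((-1, 1)) for _ in range(n_agents)]
    homes = [rng.randrange(n_communities) for _ in range(n_agents)]

    for _ in range(steps):
        i = rng.randrange(n_agents)
        peers = [j for j in range(n_agents)
                 if homes[j] == homes[i] and j != i]
        if not peers:
            continue
        sample = rng.choices(peers, k=sample_size)
        # The first `n_bubble` slots are treated as guaranteed
        # like-minded voices; only the rest can disagree.
        n_bubble = int(bubble * sample_size)
        disagree = sum(opinions[j] != opinions[i]
                       for j in sample[n_bubble:])
        if disagree / sample_size > threshold:
            homes[i] = rng.randrange(n_communities)

    imbalances = []
    for c in range(n_communities):
        members = [opinions[j] for j in range(n_agents) if homes[j] == c]
        if members:
            imbalances.append(abs(sum(members)) / len(members))
    return sum(imbalances) / len(imbalances)
```

Running the model with `bubble=0.0` versus `bubble=0.1` lets you compare how segregated the communities end up; the parameter values here are arbitrary choices for illustration, not those from the paper.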
Törnberg found some confirmation of those dynamics when he analyzed an actual online echo chamber: the subreddit r/MensRights. He found that members of the subreddit were more likely to leave if their posts diverged too far, linguistically, from the community’s center of gravity.
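One simple way to quantify that kind of linguistic divergence (the paper's actual method may well be more sophisticated) is to represent a user's posts and the community's pooled posts as word-count vectors and measure the cosine distance between the two:

```python
import math
from collections import Counter

def word_vector(text):
    """Bag-of-words vector: a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    """1 minus cosine similarity between two sparse word vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return 1.0 if norm == 0 else 1.0 - dot / norm

def divergence_from_community(user_posts, all_posts):
    """Distance between one user's language and the community's
    'center of gravity' (the pooled word counts of all posts)."""
    user_vec = word_vector(" ".join(user_posts))
    center_vec = word_vector(" ".join(all_posts))
    return cosine_distance(user_vec, center_vec)
```

Under this sketch, a user whose score is high writes in language far from the community's center, and per the study's finding, such users would be the ones most likely to leave.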
“Who are the users leaving the community?” said Törnberg. “The users that are more ideologically distant are more likely to leave. So it captures the same mechanism of feedback dynamics, where the community becomes more homogenous and more extreme because users leave—[and they leave] because they feel it’s becoming too homogenous and extreme. Eventually it tips over to one direction. And of course, as the community becomes more extreme, there’s this boiling the frog effect where the users who stay are influenced by the community and become more extreme.”
In principle, it could be possible to exploit these feedback effects to preserve viewpoint diversity—but there are caveats. “Ultimately, it’s about changing the fundamental rules of what people are seeing and being mindful of the feedback effects that always play out in any complex system,” said Törnberg. “That being said, do I want to tell [Mark] Zuckerberg to implement more filter bubbles on Facebook? I think I’d want a little bit more evidence before going that far. But it does highlight that we need to have a little more humility when it comes to our design of these systems and what the downstream consequences are. We tend to maybe think one step ahead, but miss the fact that these are highly complex systems, full of feedback effects that often do the exact opposite of what you intend.”
The “botification” of social media
For his second new paper, published in the Journal of Quantitative Description: Digital Media (JQD:DM), Törnberg relied on nationally representative data from the 2020 and 2024 American National Election Studies surveys, covering US citizens from all 50 states and Washington, DC. The objective was to learn more about shifting trends in how people were using (or not using) social media across all platforms, demographics, and political affiliations.
Törnberg found that visits and posting activity on Facebook, YouTube, and Twitter/X—what one might consider legacy social media platforms—showed marked declines. However, “My sense is that the number of posts on Twitter and Facebook has probably not really declined despite the fact that the number of people posting—humans who are alive and have a pulse—has dropped by 50 percent, because of the rise of AI and LLMs and the botification of those platforms,” said Törnberg.
Most social media platforms slightly shifted politically to the right, although they remained Democratic-leaning on balance—except for Twitter/X. In that case, “The engagement behavior was a 72 percentage point shift to the right, which is just insane,” said Törnberg. “It used to be that the more you posted on Twitter, there was a slight correlation with how much you liked the Democrats and how much you disliked Republicans—how effectively polarized you were to the left. Now it’s very strongly and very clearly correlated with hating Democrats and liking Republicans. So the graph appropriately becomes an X, which I guess is exactly what [Elon Musk] paid for.”
Meanwhile, on Facebook, posting activity correlates with strong partisanship on both sides of the divide: the most partisan users are the most active, prompting casual users to disengage so that those louder voices dominate, making the platform narrower and more ideologically extreme. “The more you’re effectively polarized, the more you post on Facebook,” said Törnberg. “That’s the social media prism or the fun house mirror of social media in action, because the most extreme voices are the voices that tend to post, and also they tend to become more visible because of the engagement algorithms.”
Reddit and TikTok were outliers, showing modest growth instead of decline. Törnberg thinks TikTok’s growth, in particular, indicates another interesting shift. “I think that there is a general transition from the text-based, interaction-based social media to this more fully algorithmic video, short video form,” he said. “So is it even a social media anymore? We tend to put TikTok and Instagram in the same basket as Twitter/X. I don’t think that really makes sense because we’re seeing a shift away from one form of social media to a new form of media platform that is fundamentally different.”
Is it even “social media” anymore?
That shift is the focus of a new preprint that Törnberg co-authored with University of Amsterdam colleague Richard Rogers. “When we talk about social media, there are certain assumptions about what it is,” said Törnberg. “It’s user-generated, and there’s a platform that organizes interaction, but the platform cannot produce content on its own. So instead the platform allows people to connect with each other, and it just provides infrastructure for that. The [terms] social network and social media [are] almost synonymous. Those describe pre-algorithm Twitter circa 2012 quite well.”
Now that more and more users are disengaging and often leaving those platforms entirely, the AI bots are moving in, often at the instigation of the social media platforms themselves. “We don’t need the users anymore,” said Törnberg of the reasoning behind such decisions. “We don’t need them to generate content. We can generate our own content and we can automate the users. So there’s a splintering of what used to be social media.”
Törnberg identified three new kinds of emerging online media platforms, starting with private or semi-private group chats like WhatsApp. “The social part has just moved into these private group chat features,” he said. Then there are other protected communities like Substack, often organized around a certain influential leader, “where there are more boundaries to joining in such a way that bots [don’t] make sense. The dynamic and logic of those places are very different from social media and much more driven by parasocial relationships.”
The second category is what Törnberg calls algorithmic broadcasting media, like TikTok, Instagram, and even Facebook, to a certain degree, thanks to the Reels aspect. The third is users interacting with AI chatbots. “If you look at the data, it seems like about twice as many people are talking to a chatbot versus posting on social media,” said Törnberg. “It’s coming to replace a little bit of that function of sociality that social media provided.”
While setting up smaller private spaces online might seem like a way to reproduce the local coffeehouse/public square dynamic that we all ideally wanted social media to be, Törnberg says it is not. “The local coffee shop model is geographically local,” he said. “It becomes diverse because it is constrained by geographical distance. It forces a coming together of diverse groups because there’s one coffeehouse. A WhatsApp group is a non-local space. It’s precisely the example of a system that can tip over one side or another to become an echo chamber. Just because Meta doesn’t have the platform control doesn’t mean it’s going to not turn horrible.”
“Abandoning or fleeing responsibilities is not going to be the solution to the fact that digital technology is reshaping our society,” Törnberg added. “It needs functional scaffolding and democratic systems for doing it responsibly and actually pursuing positive democratic prosocial values, which is not something that is seemingly on offer at the moment.”
Törnberg does think it’s possible to reorganize social media spaces in positive ways so that most users can find that 10 percent of other users who agree with them, thus making them more open to divergent views. And it helps that most users really do prefer more pleasant online communities, not platforms rife with toxic waste. “But then how do we shape the rules to produce those outcomes?” he said. “It’s a much harder question. How do we create spaces that are both engaging and fun to use, but that don’t go down to that dark place because of all of these feedback effects?”
Bluesky’s highly effective blocking tools, and even Twitter/X’s Community Notes feature, which often bridges cross-partisan divides, provide useful examples of possible solutions, if judiciously applied. “We can think of and construct similar systems,” said Törnberg. “We just need to find ways of pushing those effects to a more positive place by finding the pivot points. This is what I’m studying right now. I just don’t have an answer yet.”
PLoS ONE, 2026. DOI: 10.1371/journal.pone.0347207 (About DOIs).
JQD:DM, 2026. DOI: 10.51685/jqd.2026.005.