The only point I'd add is that it's not handling time evolution in wicked problems quite right. Agree that the noisy room is distorting the world in exactly the ways described. But what if we've been in there so long, and the world has become so distorted, that reality itself slides towards the once-extreme positions? Easiest to see this with the climate-change controversy, since that is the way that sort of thing happens, regardless of whether you think it's happened yet. Cascade, phase change, and collapse don't just call a truce.
So you have to anticipate that, acknowledging the pessimist is actually right, and that systems are a real bitch. Then you point out that if we're already doomed, we have nothing to lose by trying. Systems are complex after all; that's the whole problem. So if we miscalculated on the doom, then bothering to try actually saves us. Checkmate, pessimists.
And the money decides how to run the circus. Not for the benefit of all.
So it is a really hard problem.
It's an interesting initiative though. One that I also think could have unintended consequences that would additionally seed greater distrust in the media—which isn't necessarily a bad thing. But I imagine that the people who already sense this distrust and distaste toward the impression of polarization that the media gives are becoming less and less likely to subject themselves to the naked opinions of anonymous strangers online.
- how does this handle the fact that a lot of accounts on social media platforms are bots that may be controlled by a small number of people?
- how do we actually get this implemented?
This shows how the dynamics play out in the social media system.
These people are unwittingly working for the platforms to drive engagement, often to the exclusion of any goal they might've had before the addictive aspects of social media kicked in.
I think we get less of this kind of behavior here on HN because each username is not bedazzled with metrics. You can see upvote counts for your own comments, but you can only infer those counts for others. The scoreboard is hidden, so it isn't triggering as much bad behavior from people who can't handle such things.
I think we could get even better behavior out of people if we never showed them raw counts of updoots, but instead only showed them metrics relating to their explicitly stated social graph, plus maybe one hop out:
> Alice and two of her friends like this
> Charlie likes this
It gives a sort of directionality to the feedback. Instead of chasing a high score granted, likely, by a bot army, you learn something about Alice's corner of the social network. Maybe you should get to know Alice's friends better.
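A minimal sketch of what that graph-scoped feedback could look like. The data structures and names here are illustrative assumptions, not any platform's real API: the viewer's friend set plus a one-hop map of each friend's own friends.

```python
# Hypothetical sketch: render likes as social-graph context rather than raw
# counts. All names and the graph layout are made-up illustrative data.

def social_feedback(likers, friends, friends_of):
    """Summarize likes using the viewer's friends plus one hop out."""
    direct = [u for u in likers if u in friends]  # friends who liked the post
    summaries = []
    for friend in direct:
        # count that friend's own friends who also liked (the one-hop-out part)
        hop = sum(1 for u in likers
                  if u in friends_of.get(friend, set()) and u != friend)
        if hop:
            summaries.append(f"{friend} and {hop} of their friends like this")
        else:
            summaries.append(f"{friend} likes this")
    return summaries

likes = {"alice", "bob", "carol", "stranger1", "stranger2"}
my_friends = {"alice", "charlie"}
graphs = {"alice": {"bob", "carol"}, "charlie": set()}

print(social_feedback(likes, my_friends, graphs))
# ['alice and 2 of their friends like this']
```

The two anonymous strangers never appear in the summary at all, which is the point: a bot army outside your social graph contributes nothing to the feedback you see.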
Because that loud 3% being harnessed by the platforms to drive engagement via content we all hate... their primary sin is just that they fell for it. They're like alcoholics: if we want to help them into a mode where they're less problematic, we should hang out someplace besides the bar.
The tiny minority dominates the feeds because that's how the incentives for algorithmically driven social media are structured. Do we really expect Meta, X, or TikTok to do anything that could reduce engagement?
Good luck having any of the mainstream social media apps add the banner they propose.
>We Could Do This Now - Platforms already have a lot of these capabilities. They already survey users. They even know how to run sophisticated polls. There are a few technical details to work out (spec here), but this is not a hard problem to solve.
Why do you think something like this is not already implemented? Platforms literally profit from this division, so why would they be incentivised to do anything? What's needed is not a good gesture from the overly powerful platforms; it's fast, hard, and deep regulation.
I feel like the real problem is the people. Many of us just want to be told what to think to blend in with society, some of us demonstrate Dunning-Kruger publicly and a few of us really want to drive the polarization for clout and attention.
Every day I see people promote increasingly stupid ideas on both sides, further reinforcing my belief that the only solution is to severely limit what government can do, therefore making all this discussion pointless.
1. Cheating or being lazy with the sampling
2. Being a weasel with the phrasing to get the desired result
3. Being a push poll
Still, a "trusted" poll is slightly better than a freeform "community note", especially if it sticks solely to how prevalent an opinion is.
Slashdot used random sampling in moderation 30-ish years ago. It worked OK, except that scores were used for very little (crucially they didn't even sort by them), and they had a more gameable non-randomized system to moderate the random system. And of course it was probably vulnerable to Sybil attacks.
(By the way, I guessed 4% for the number of toxic users)
It's the most disturbing thing I have ever worked on; there is much more out there than most people realize, and a lot of it uses deceptive dark patterns.
If somebody is interested in talking more about this or is working on similar things, always welcome!
Both Democrats and Republicans estimated around 30%, but actually only about 10% of both sides supported political violence.
That number is crazy in so many ways, and the post is overly nonchalant about it. The "distortion" isn't what's worrying here.
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions. (They could also be in the minority, and this fear of speaking up would still be a bad thing.)
Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
The part that annoys me about the toxicity, or repetitive and annoying topics on reddit, HN, etc., is not that I am unaware that the content is produced by a small fraction. (I underestimated the count! I guessed 2%)
It's that people espouse it: They upvote and retweet it.
> Both sides develop wildly inaccurate beliefs about who the other side actually is.
That was a guess I had for a while. People have a strawman version of their out-groups in mind and quickly map people to that if an unknown person says something that indicates they might be part of the out-group.
> What percentage of the other side supports political violence?
It would be interesting to see the in-group statistic as well: "What percentage of your own side supports political violence?" In my experience people also justify very shitty behavior as long as it's from their in-group. (This plays heavily into the first point of espousing all kinds of shit)
---
It would be interesting to see if the community check actually changes anything. But the actual data seems to be available only for very generic topics - those we already have the data on. It would not be available for daily-fresh topics.
For my personal sanity I simply left reddit and stopped opening comments on certain HN posts - of course that does not help with the societal problems. Unfortunately.
Hackers might be interested to know that there's an "open questions" section at the end of TFA. Some of it probably wants simulation, some wants theorems.
Camel-ai pubs/frameworks might be related and useful, for example: https://github.com/camel-ai/agent-trust
Several model checkers also have primitives for working with common knowledge. TFA puts it like this:
> Learning a fact changes what you know. Seeing it displayed publicly — where everyone else can see it too — where you know others can also see it, changes what everyone knows, and subsequently how they act.
An important piece of technical vocabulary; it really seems we need it to talk about a lot of problems lately.
Even when people do have strong opinions on a topic (and a moderate opinion can also be strong), most people have better things to do with their lives than to go around blasting their opinions to the world as a hobby. And the few in this camp that do are not very likely to be amplified by the engagement algorithms.
It's obvious from the hyperbole around the discourse alone that this moral panic has reached levels of derangement that far outclass any rational basis for judgement.
Does social media have negative consequences? Sure. Are people assholes on the internet? Always have been. Is social media the greatest and most existentially perilous evil ever conceived by humankind? No.
I think in ten years people will look back at this (on whatever strictly censored and regulated internet replaces this one) with the same bemused confusion as we do the Satanic Panic. And honestly in forty years, if technological civilization still exists, we'll find out how much of that was stoked by the CIA or other interests.
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
> That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions.
I don't follow your argument (which is different to the one in the article):
There's a small noisy pro-side, a small noisy anti-side and a majority, but not necessarily a moderate majority!
The article doesn't say anything about the majority being moderates, does it?
> Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
Not necessarily true; there's a noisy pro- minority, a noisy anti- minority, and a silent majority. Who knows if they are pro or anti or equally split?
And even if they were actually moderate, they could see opinions like "everyone should have guns" and "no one should have guns", and keep their majority moderate opinion of "people should be allowed guns depending on whether they cross some objective line into dangerous or neglectful behaviour".
That's both a moderate and a majority position, and yet you won't see it expressed in a forum because all the noise is being made by the two extremes.
The argument you're making is that the silent majority must necessarily be moderates, but that's not a requirement.
Take immigration or refugees - the obvious thing is that you're either for or against it. But there's so many things in between, so much nuance, etc. And that takes reasonable adults to think and talk about.
I think something that is not calibrated in the post, and also missing in this reply, is that beliefs and actions do not need to be aligned.
Both groups say around 10% of members support political violence; however, no Democratic president is pardoning domestic terrorists wholesale. And the 90% of Republicans who condemn political violence are not repudiating, removing themselves from, or condemning the fact that far-right groups are the most dangerous demographic according to the FBI, or that most political violence occurs in Republican states, or the direct correlation between NRA infiltration of Republican campaigning and mass shootings...
Like if you say you dislike violence but defend the system that creates the violence and pardon the people who commit the violence and share the table and take the money from the violent people... your "beliefs" are not worth much.
The whole conversation about out-groups is less relevant when discussing left-wing policy, because it is not organized around in- and out-groups. Right-wing ideology is de facto an in-group political theory where some people must be excluded. When you add morality being justified by group membership, you end up with some very concerning politics, where actions are judged on belonging to the group and not on the morality of the action or its consequences.
See the blue-collar, protect-the-children, anti-abortion crew voting for a New York millionaire owner of a beauty pageant who was best friends with the world's best-known human child trafficker...
The belief system collapses the second you put the right tee shirt on, and that is what makes polling those people irrelevant. They simply will support whatever is in front of them as long as they belong to the in-group. War bad in Ukraine, war good in Iran. Taxes bad in 2018, tariff taxes good now. Silicon Valley tech people were all left-wing Indian soy boys in 2016; now they're all alpha podcast AI cool guys who fund our president.
nothing matters as long as you wear the tee shirt
What we are monitoring are deceptive patterns on a text or transcript level. Deceptive patterns can be things like information inconsistency in one post, context shifts in one post that are used to reframe something, or video patterns like fake statistics or fake headlines that are not consistent with the main content.
All of these patterns are actual science-backed psychological manipulation patterns, and they are consistently used in the most viral posts we detect. My perspective after one year working on this is that average media literacy is even lower than we think, and that we have built, together with the social media platforms, an evolutionary system optimized to increase the performance of digital manipulation actors.
I just had an issue with the way that number was completely overlooked
Is traveling to Tokyo just to sprint across the Shibuya Scramble for a slightly less-crowded Instagram selfie really a model of the good life? Should someone like Zuckerberg have this level of control over the activities and minds of the human race? Is Mr. Beast a role model for children by industrializing the exploitation of human virtue?
Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
1. Insanely low-effort to post.
2. Requires NO discernment, proof, credibility, or peer review to post.
3. 'Viral' in that opinions circulate because other people have interacted with them, not because they are right or meaningful. So bad news, good news, real news, and fake news all travel at the same speed, lowering discernment even further.
4. Echo chambers are baked into the form. People are more likely to interact with content they agree with vs. content that is true or impactful. This creates circles of people agreeing with each other on increasingly niched-down topics.
it is extremely different from newspapers and television.
I mean wtf. Is this your parody account?
(then they started having shorts, so I cancelled youtube premium)
> Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
To what I think is @krapp's point: these dynamics are not exclusive to social media. At their core they're driven by something far more primal; social media only exacerbates it. Governments are not as naive as the general public. Regulations effected in 2026 to "regulate social media" could have consequences for how information is spread among people in 2040.
You aren't listing problems intrinsic to social media per se, so much as how people choose to use it and how specific platforms choose to operate. The latter of which is a problem when Twitter, Facebook and the like optimize for engagement through controversy, but I think when we focus on social media as a whole we risk throwing the baby out with the bathwater in restricting human rights and the ability of people to network and communicate freely without interference by state interlocutors.
Every bit of hyperbole I mentioned is practically quoted verbatim from some thread or another here, it is what people believe, and you can't even bring yourself to approach me in good faith because I've committed wrongthink by defending the existence of social media even implicitly.
The CIA and other governments are running influence campaigns across social media. The links between the major social media platforms and intelligence agencies are well known and well documented. And civilization is threatened by numerous factors, such as our over-investment in AI and the mass deskilling and destabilization that will create, creeping fascism and increasing political violence in a multipolar world, climate change leading to mass famine, pandemics in a post-scientific age, etc.
But people want to destroy social media (and by extension, want to destroy the freedom of communication it allows) rather than bother to consider that the real problem is the same problem we've always had - government and corporate interests trying to control our lives and manufacture consent through fear and panic.
They ran the same playbook prior to social media but the process was so normalized because they controlled so much of the media and culture that no one really even noticed it. Now people notice but they can't distinguish between the symptom and the disease.
“The medium is the message”.
This stuff’s been around long enough we’ve got a pretty good idea of what its “message” is.
however i agree that the CIA and other governments are running influence campaigns on social media. i think that's been proven actually.
the answer, as always, isn't 'destroy decentralized communication' or public discourse online. it's to have tighter regulations on how algorithms are configured. what's pushed vs. what's suppressed because it's obviously intentionally inflammatory/trolling.
this is an issue requiring extreme nuance. but to say that being worried about how social media today affects society is like 'the satanic panic' is kind of absurd.
What's this mean?
Congratulations on the endorphin hit. You really zinged me. I need to find where the grownups hang out.
It began with a study. In December of 2025, Stanford researchers analyzed 2.2 billion social media posts looking for a pattern. They wanted to know what percentage of users posted severely toxic content. Not rudeness, not sarcasm, but speech that was so hateful that 90% of the world would flag it as being problematic.1
With this data in hand, they then asked thousands of people to answer a simple question:
Take a guess.
What percentage of social media users do you think post severely toxic content?
They were surprised by the results. They had discovered an enormous reservoir of misperception that had been hidden in plain view.
Here is the simplest version of the problem. Imagine walking into a bar with a hundred people inside.
Three of them are shouting — about politics, about each other, about whatever gets a reaction. The other ninety-seven are talking at a normal volume. But there's a bouncer at the door, and he gets paid by the minute you spend staring. So he's wired the three loudest people into the sound system and turned it all the way up.
You walk in, hear the roar, and conclude: this place is full of lunatics. Never hearing the 97 people having normal conversations a few feet away. You could leave, but all your friends are inside. You're stuck.
This is how social media deals with contentious topics. The bouncer is an algorithm. And whether you like it or not, you've been a bystander.
Pick a contentious topic. This is what your feed might look like.
Reading this feed, you might reasonably conclude that the country is split between unhinged extremes. It is not. And the gap between what Americans actually believe and what the feed suggests they believe may be the most consequential thing the platforms haven't shown you.
Let's visualize this as a single room with 100 people inside. This is what it looks like:
[Visualization: the room as 100 users: 97 regular users and 3 who have ever posted severely toxic content. On most platforms, ~3% of accounts produce about 1/3 of all content (3% → 33%), and engagement ranking amplifies high-reaction content from the prolific few into your feed.]
This pattern repeats across platforms. On Twitter/X, toxic tweets receive ~86% more retweets and ~27% more visibility than non-toxic ones, 0.3% of users shared 80% of all contested news,14 and just 6% of users produce roughly 73% of all political tweets.16 On TikTok, 25% of users produce 98% of all public videos.15 The specific numbers vary. The dynamic is the same: a small minority of highly active users overwhelms the majority.
After some time spent consuming content in this room, your brain performs a kind of ambient demography. The feed becomes a sort of census. You conclude — logically — that the behavior must be widespread. The room might just be full of extreme people! Maybe most people do believe these crazy things.
If this were just about the tone of our social posts, it wouldn't matter very much. But this distortion ends up causing some seriously bad patterns of behavior.
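The amplification described above can be reproduced with a toy simulation. The parameters here are illustrative assumptions loosely matching the article's figures (3 prolific toxic users out of 100, posting far more often and drawing outsized engagement), not measurements from any real platform:

```python
# Toy simulation of the "noisy room": a 3% minority posts ~16x as often,
# and engagement ranking gives their high-reaction posts an extra boost.
# All parameters are illustrative assumptions, for intuition only.
import random

random.seed(0)  # reproducible run
posts = []
for user in range(100):
    toxic = user < 3                       # 3 of 100 users
    n_posts = 16 if toxic else 1           # the prolific minority
    for _ in range(n_posts):
        # engagement score, with an "outrage boost" for toxic content
        engagement = random.random() * (3.0 if toxic else 1.0)
        posts.append((engagement, toxic))

feed = sorted(posts, reverse=True)[:20]    # engagement-ranked top of feed
toxic_share = sum(1 for _, t in feed if t) / len(feed)
print(f"toxic authors: 3% of users; toxic share of top-of-feed: {toxic_share:.0%}")
```

Under these assumptions, the top of the ranked feed is dominated by the 3% minority even though they are a sliver of the room, which is the census distortion the article describes.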
Pattern 1 The Majority Goes Silent
When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.3 The dynamic replicates on social media17 — fear of social isolation suppresses opinion expression on platforms where it's perceived to be unwelcome. They go quiet, or they leave a platform entirely. They cede the space to users with more extreme politics.
Pattern 2 The Loud Minority Thinks It's the Majority
The minority who aggressively post end up with their own distortion – believing they are part of the majority.5
A study of 17 extremist forums found the same pattern: the more someone posted, the more they believed the public agreed with them. More engaged participation bred false consensus.
Pattern 3 Everyone Gets Each Other Wrong
Both sides develop wildly inaccurate beliefs about who the other side actually is.6 See how some of your own beliefs line up:
What percentage of Democratic supporters do you think are LGBTQ?
What percentage of Republican supporters do you think earn over $250,000 a year?
The distortion extends to policy beliefs. Step through to see the perception gap on the issue of immigration.
Source: More in Common (2019) & Moore-Berg et al., PNAS 2020. Illustrative.
Pattern 4 Politicians Follow the Perceived Room, Not the Real One
Elected officials are very good at sensing political sentiment. It's literally their job. (They are not elected to correct people's beliefs.)
Politicians who can build a coalition around a perceived belief are more likely to win. They position themselves against an opponent that doesn't exist, but that their supporters think exists.
And remember: most of our politics now happens on social media. Candidates often read the same distorted feed. They are unlikely to change their minds.
The window of discourse shifts. Not because opinions changed, but because perceptions of opinions did.
Pattern 5 Misperception Turns into Hostility
When you believe the other side is extreme, you become more willing to treat them as a threat.7
Both Democrats and Republicans vastly overestimate how many on the other side support political violence. The result is a populace primed to assume the other side is ready to do horrible things.
"What percentage of the other side supports political violence?"
Democrats estimate that 35.5% of Republicans support political violence (3.4× the actual figure).
Republicans estimate that 37.1% of Democrats support political violence (4.0× the actual figure).
Both sides were wrong by 3 to 4 times. When researchers corrected these beliefs, partisan hostility dropped.
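The "3 to 4 times" claim can be checked with back-of-envelope arithmetic. The estimates and overestimation factors come from the text; the implied "actual" support rates below are derived from them, not separately sourced:

```python
# Derive the actual support rates implied by the estimates and the stated
# overestimation factors (values from the text above).
dem_estimate, dem_factor = 35.5, 3.4   # Democrats' estimate of Republicans
rep_estimate, rep_factor = 37.1, 4.0   # Republicans' estimate of Democrats

implied_actual_rep = dem_estimate / dem_factor  # actual Republican support
implied_actual_dem = rep_estimate / rep_factor  # actual Democratic support
print(f"implied actual rates: {implied_actual_rep:.1f}% and {implied_actual_dem:.1f}%")
# implied actual rates: 10.4% and 9.3%
```

Both implied rates land near 10%, consistent with the roughly tenth-of-the-estimate actual figures the research reports.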
Each step feeds the next. The distortion is self-reinforcing.
Okay. So now you know that a small minority dominates the feed.
You know that Republicans and Democrats actually have a far more nuanced set of opinions about contested issues.
Does that fix it? Not really. You also know that everyone else doesn't know it. And if the world continues operating as if the distortion is real, you should probably act the same — even though you know it's wrong. The room hasn't changed, even if you know people inside it are confused.
This is called a common knowledge problem.
You've read the stat. But you have no idea who else has. The feed still looks the same. You still assume you're outnumbered. You stay quiet.
Steven Pinker lays this out cleanly in his excellent recent book When Everyone Knows That Everyone Knows.8 Learning a fact changes what you know. Seeing it displayed publicly — where everyone else can see it too — where you know others can also see it, changes what everyone knows, and subsequently how they act.
Social media has no public square. It has 300 million private windows, each showing a different distortion of the same room. Illuminating the common thoughts between us has the potential to radically change it.
So what can we do about this?
Fortunately, there's some good evidence showing how it can be fixed. Multiple studies show that when misperceptions are corrected in a public way, hostility drops. Mernyk et al. found that a single correction reduced partisan hostility for a full month.7 Lee et al. found that correcting overestimates of toxic users improved how people felt about their country and each other.1
We can do this today.
Imagine every post on a contested topic had a quiet link beneath it. Not a fact check, a label, or a warning. Instead — what if it had a Community Check?
A Community Check is an open-source design layer that could be deployed across social media, beneath contentious posts, to help users understand how other people on the platform (or the nation) actually feel about an issue.
It is a way of quickly adding context to the most hot-button viral issues, giving people more visibility into the opinions of the public.
Let's explore this intervention with a topic that cuts across political identity:
On the surface, this seems contentious. But it's actually a supermajority issue: 81% are concerned about the influence of money on elections, including 78% of Republicans and 90% of Democrats. 75% say unlimited spending weakens democracy. Only 15% believe unlimited political spending is protected free speech.
And yet, very little changes, largely because everyone assumes the other side is fine with it. The feed is full of people defending their team's donors and attacking the other team's. It might look like a 50/50 partisan battle, but it's not. It's a majority consensus that cannot see itself.
What if you could see this consensus?
@real_talk_politics · 2h
Everyone complains about money in politics but the second their candidate gets a massive donation they shut up real fast. You don't hate money in politics. You hate when the OTHER side has more of it.
♡ 11,847💬 6,203↻ 2,891
Community Check draws from a random sample of platform users + robust national polls, surveyed independently of the content. The sample is statistically representative. The results update continuously. And critically: everyone sees the same numbers.
Traditional fact-checking is a top-down approach that often feels like it's dictating from above. This is hard for people to stomach. Content moderation for many years now has been perceived as removing speech. This simply adds context, much like the crowdsourced feature Community Notes (an inspiration for this project).
Nor is this just a user poll under a post. Instead, it draws from all platform users, coupled with statistically representative national surveys. It's an actual window into the views of the majority, not just the views of those looking at the post.
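One plausible way to combine a random platform sample with a national poll is inverse-variance (precision) weighting of the two proportion estimates. This is a hedged sketch only: the actual Community Check spec may pool sources differently, and the sample sizes and percentages below are made up for illustration:

```python
# Hedged sketch: pool a platform sample and a national poll by
# inverse-variance weighting. Numbers are illustrative, not from the spec.
import math

def pooled_proportion(p1, n1, p2, n2):
    """Precision-weighted combination of two independent proportion estimates."""
    v1 = p1 * (1 - p1) / n1            # sampling variance of each estimate
    v2 = p2 * (1 - p2) / n2
    w1, w2 = 1 / v1, 1 / v2            # weights = inverse variances
    p = (w1 * p1 + w2 * p2) / (w1 + w2)
    se = math.sqrt(1 / (w1 + w2))      # standard error of the pooled estimate
    return p, 1.96 * se                # estimate and ~95% margin of error

# e.g., platform sample: 78% of 2,000 users; national poll: 81% of 1,200
p, moe = pooled_proportion(0.78, 2000, 0.81, 1200)
print(f"pooled: {p:.1%} ± {moe:.1%}")
```

The pooled margin of error is tighter than either source alone, which matters if "everyone sees the same numbers" and those numbers update continuously.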
Short-form video is the fastest-growing vector for political distortion. The same dynamic applies — a small minority of creators produce the vast majority of political content — but video bypasses the pause that text gives you. Community Check can adapt. Tap through to see how.
Money IS free speech.
Deal with it. 🇺🇸
Citizens United was CORRECT
@liberty_caucus_tv Follow
#FreeSpeech #CitizensUnited 🔥
♡5,247
💬612
↗1,742
A political video crosses the engagement threshold. 51K views, 612 comments (1.2%), 1.7K shares (3.4%). The feed shows outrage. But what do people actually think?
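A trigger like the "engagement threshold" above could be a simple rate check. The specific cutoffs below are illustrative assumptions chosen so this example video qualifies; the real spec's thresholds are not given in this section:

```python
# Sketch of an engagement-threshold trigger for attaching a Community Check.
# Threshold values are assumed for illustration, not taken from the spec.
def crosses_threshold(views, comments, shares,
                      min_views=50_000,
                      min_comment_rate=0.01,
                      min_share_rate=0.03):
    comment_rate = comments / views
    share_rate = shares / views
    return views >= min_views and (comment_rate >= min_comment_rate
                                   or share_rate >= min_share_rate)

# the video above: 51K views, 612 comments (~1.2%), 1.7K shares (~3.4%)
print(crosses_threshold(51_000, 612, 1_742))  # True under these assumed cutoffs
```

Gating on rates rather than raw counts keeps the check from firing only on already-huge accounts: a smaller video with unusually high comment or share rates would also qualify once it clears the view floor.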
See technical specs for how it works below ↓
Platforms already have a lot of these capabilities. They already survey users. They even know how to run sophisticated polls. There are a few technical details to work out (spec here), but this is not a hard problem to solve.
The unseen majority is the public. And the public deserves to know itself.
A tiny minority, dominating the feed. That's all it ever was. The rest of us were here the whole time, quiet and decent and waiting to be seen.
Community Check is a free and open specification.
The complete technical spec, research base, and open questions are published for researchers, engineers, and platform designers to stress-test and build on. Please steal it with attribution.
References
1 Lee, Neumann, Zaki & Hancock, "Americans overestimate how many social media users post harmful content," PNAS Nexus, 4(12), 2025. n=1,090. Benchmark: Kumar et al., "Understanding the Behaviors of Toxic Accounts on Reddit," WWW '23, 2023. 3.1% of accounts produced 33.3% of all comments.
12 Yudkin, Hawkins & Dixon, "The Perception Gap," More in Common, 2019. n=2,100 via YouGov. Average overestimation of opposing party's extreme views: ~55% estimated vs ~30% actual.
13 More in Common, "Americans' Environmental Blind Spot," 2022. 73% of Republicans support U.S. clean energy leadership; Republicans estimate only 33% of their own party agrees.
14 Baribi-Bartov, Munger & Pan, "Supersharers of fake news on Twitter," Science, 384(6700), 2024. 0.3% of users shared 80% of contested news during the 2020 U.S. election.
16 Bail, Breaking the Social Media Prism, Princeton University Press, 2021; Pew Research Center, 2021. 6% of U.S. Twitter users produce ~73% of all political tweets.
How Community Check would work in practice, from data sources to platform integration.