What the OP is talking about is bots that participate in public discourse. That's the actual problem.
I think it can be handled to a degree though. Private communities, private Internet on top of existing Internet, and social media platforms without public APIs and with strict, enforceable ToS would all help.
It's a shame though that this is gonna kill so many sites and projects. Sure, we have ChatGPT, but with things like Google's AI summaries getting so much better, traffic to sites is going to plummet. Without people visiting, I think the incentive, heck even the motivation, for a ton of sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything.
Things are definitely going to change in significant ways. The internet of the past is definitely dead, it just doesn't know it yet.
Anyone can still run a blog/website, and/or their own discourse server. There's no need to mourn for these centralized systems that largely existed only to exploit us in some way. Let's celebrate "small internet theory", an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots. That sounds awesome to me personally, but I was also up late last night watching clips of Conan O'Brien from 1999 and the nostalgia for that era / what the internet was like back then hit me so hard it was almost painful.
But now convincing fake video generation is easily accessible, so one more holdout stands to fall.
It does seem like some kind of ID system is going to be the only way. Sucky but inevitable.
I often have the following thought: technological advancement, for all its boons, inevitably leads down destructive roads in the long run. Sooner or later we open a Pandora's box.
And I'm looking forward to none of them.
Obviously that burns down the human Internet, but it’s also a business that will have a short lifespan and bring about its own demise.
I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
The elephant in the room is that a lot of social media companies have a conflict of interest. They can juice their user metrics by not moderating bots as well as they could be.
Also, I forgot to mention: Google's AI Overview included the AI garbage page as its answer.
It's dead Jim.
- banner blindness to blue check accounts (instantly scroll past, the blue check is extremely prominent visually)
- a very long uBlock Origin text-filter regex for emojis (the green check mark in particular) and $currentHotTopic keywords where the signal-to-noise ratio is close to 0.
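The filter idea above can be sketched outside uBlock syntax. This is a toy Python version, assuming the goal is simply "hide any post containing a green check mark or common emoji-block characters"; the ranges and the `is_noise` helper are illustrative, not taken from any real filter list.

```python
import re

# Match posts containing a green check mark (U+2705), heavy check mark
# (U+2714), or characters from common emoji blocks.
EMOJI_FILTER = re.compile(
    "[\u2705\u2714"           # check marks
    "\U0001F300-\U0001F5FF"   # misc symbols and pictographs
    "\U0001F600-\U0001F64F"   # emoticons
    "\U0001F680-\U0001F6FF]"  # transport and map symbols
)

def is_noise(post: str) -> bool:
    """Treat any post containing a matched emoji as noise and hide it."""
    return bool(EMOJI_FILTER.search(post))

posts = [
    "\u2705 10 AI tools you MUST use \U0001F680",
    "Plain text reply about the thread",
]
visible = [p for p in posts if not is_noise(p)]
```

A real uBlock filter would do this cosmetically per-site; the logic is the same, just expressed as a regex over post text.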
The grand bargain of the web is gone and it ain’t coming back.
https://github.com/tanrax/org-social
:-)
>Can we go back to an internet like this? I guess we can’t.
Gary Brolsma is still at it with Numa Numa (2023) https://youtu.be/ZBKm1MBsTbk. There's just a bunch of other stuff out there too.
I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
What's even funnier is this is literally how "agent teams" (the latest hotness) work. They just do it all on your laptop rather than spamming GitHub.
I think I'll just take up blacksmithing.
bring back the old internet
I think the age of algorithmic curation is dead - but it may, through a "Renaissance", bring back true human connection.
2) Reddit... doesn't have much of an incentive to fix the astroturf issue. The site "organically" censors a lot.
As I see it, this is just an extra step in a long series of tools to just serve information more quickly. Search snippets for search results have always (?) been displayed for each link/page returned. If the information you were looking for was included in those snippets, then you wouldn't need to visit the actual site.
Then at some point there were knowledge cards/panels. Again, if the information you were looking for was in those cards/panels, then you didn't need to click on the links.
Now with LLMs/Gemini, the information is sometimes summarized at the top of the page. You have even less need to visit the search results.
Google has always been a kind of cache for the Internet. It's just way more efficient at extracting and displaying information from that cache now.
So, yes, traffic keeps going down. But new knowledge will still need to be produced, right?
It’s not even like commercial astroturfing, it’s just karma farming and public sentiment manipulation.
The fundamental issue is that a plurality of humans prefer the direction things have gone and are moving in. Is it a good direction? By this crowd's standards, no.
To be clear, I don't like either, but when I watch the speed with which kids swap between 5 Insta accounts and 3 Reddit accounts, it seems the majority are happy with it.
The good news is that the community internet - for the community, by the community - is just starting.
What is a community internet? The internet is layered protocols. UDP, ICMP, TCP, HTTP, HTTPS etc. The community internet is just a new layer of protocols. Coming soon.
I imagine it will be way closer to Ghost in the Shell/Cyberpunk in the end than we realize.
https://en.wikipedia.org/wiki/Dead_Internet_theory
Wasn’t WP supposed to be impartial and avoid passing judgement?
Mastodon wasn't really it and neither was Substack, although maybe it got slightly closer. TikTok and Telegram, maybe, for different reasons, but they'll face the same destiny.
I'd suppose the much despised "mainstream media" might be a winner here eventually. But beyond that, I am thinking about something like the following:
https://www.theguardian.com/technology/2026/mar/10/uk-societ...
I get nostalgia for the 90s/00s, but that time was never coming back anyways.
The best we can hope for now is for people to be less online. And if it comes from people drowning in AI crap, I think it's kind of funny.
A while back I was toying with the idea of building out a new web on a new protocol (not HTTP based), so no existing browser would understand it. Deliberately obscure, to force a "reset" button of sorts.
Though it would be short-lived; over time we've learned to ruin stuff faster and faster. I'm not sure there's any network so alien that it could hold on to that golden era of innocence from the past; it would be found, then expediently and expertly exploited.
And the vast majority will just be driven to more AI-mediated interactions.
(anecdotally, my mother loves AI generated videos, perhaps it's just novelty at the moment and it will wear off)
“A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user. The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased” [1].
(More seriously: https://en.wikipedia.org/wiki/Dead_Internet_theory)
And those will also get choked with fake bot "members" and bot comments.
Plus, if "anyone can still run a blog/website", this includes bots. AI created and operated blogs/websites, luring in people who think they're reading actual human posts.
[no, not that gemini]
Occasionally, someone mentions RSS as a solution. That's only a small component of the solution.
Is it though? I have absolutely no doubt we'll get there, but I haven't seen any evidence of this in the wild. My YouTube feed is becoming overrun with content with clearly generated scripts and often generated narration, but I haven't seen a single instance (that I'm aware of) of generated video being passed off as real.
Yes I have seen hundreds of tweets and reddit posts showcasing game-changing video technologies like AI face replacement and yes they look incredible in the 45 second demo reels, but every instance I have seen of real-world usage was comically bad.
The issue, as I understand it, is literally a new Eternal November, just that instead of “noobs” there are “clankers” this time.
Personally, I don’t give a flying fuck about things like gender, organs (like skin or genitalia) or absence thereof, or anything alike when someone posts something online, unless the posted content is strongly related to one of those topics. Ideas matter no matter who or what produces them. Species fit into the same aspects-I-don’t-care-about list just fine - on the Internet nobody knows^W cares you’re a dog. Or a bunch of matrices in a trench coat. As long as you behave in a socially appropriate way.
The problem with bots is that they’re not just noobs - unlike us meatbags, they don’t just do wrong and stupid things but can’t possibly learn to stop (because models are static). Solving that, I think, is the true solution, bringing the Internet back to life. Anything else seems to be just treating the symptoms.
(Yea, I’m leaning towards technooptimist and transhumanist views - I was raised in culture that had a lot of those, and was sold a dream of a progress that transcends worlds, and haven’t found a reason to denounce that. Your mileage may vary.)
Google People[1]?
But it's not about the current generation of addicts. It's a play to capture the next generation.
It remains to be seen whether they'll get caught or not but it's important to remember that even if all of us mature humans find this new AI social media weird and gross, children don't have our preconceptions.
Meta is going to do everything in their power to train the next generation of young, immature brains into finding AI social media normal and addictive.
They (along with TikTok) already managed to do that to the last two generations so they have a scary track record here.
I'm thinking there might have been a deeper message than the moment of ridiculousness.
Now, there are tools to achieve that kind of moderation automagically, and even better, consistently. This is an opportunity to build out a community that is useful for everyone. The first platform that guarantees anonymity supported by human-independent moderation will likely attract significant and persistent user support.
There is still the issue of cost - how does the community pay for such a platform? Perhaps like the Google of yore - very limited ads? Avoiding enshittification can be done through the Wikipedia model - non-profit to manage the whole thing?
But isn't it even harder for small forums to resist the robot onslaught without the trillion dollar valuations to fund it?
Although, part of the reason Facebook/Linkedin/Twitch/etc have bots is because those companies secretly want them, in order to inflate their usage numbers.
Aggressive moderation? Disable UGC?
If you make the price high enough sure, but I'm unsure you can find the right price to simultaneously 1) deter bot traffic and 2) be appealing to actual users.
Or if I want, I can verify that I'm myself, and eschew anonymity, and certain platforms should only accept contributions from people who don't hide their identity.
Everyone knows who you are in the town square.
The corporate internet was never good to begin with, it was just forced on the masses.
Actually, now that I think about it: social media platforms already started this with the paid blue badge for verification, and it's a monthly subscription too. But it's for their respective platforms only, not universal.
As far as I can tell, that is basically all AI-related businesses. Including those non-AI ones jumping on the bandwagon to throw all their employees in the bin and expect 10x productivity somehow: if they are right and these tools do become that good, well the economy as we know it is over as white collar knowledge work disappears.
But hey, we made money in those few years right!
Wild-ass business idea: what if Yahoo 2026 recreated Yahoo 1996 and also any of the video sites it bought up back in the day get relaunched as deshittified ad-selling mechanisms to fund the whole thing… there’s gotta be Yahoo 1996 money in whatever scraps YouTube is missing.
It used to be faster and easier to follow actual content.
LLM’s for all their faults are well-trained to produce what we want.
Or maybe we have finally accepted that our entire economy is the naked emperor.
A. People want to connect with other people, not talk to computers, and
B. AI slop peddlers know this and have an incentive to lie about their content.
If GenAI content was always reliably declared and people's choices were respected, we wouldn't have a problem.
It's like saying, what does it matter if the news article was fake, as long as you enjoyed reading it? It matters because when I read the news, I want to read about things that actually happened, not stories that manage to fool me into believing they're true.
https://en.wikipedia.org/wiki/Talk:Dead_Internet_theory#c-Bo...
Internet promised ability to connect with anyone anywhere around the world. It felt limitless and infinite.
Turns out in an infinite world, the loudest voices are the ragebaits, the algorithmically-amplified, or the outright scammers.
The human social brain doesn't work in an infinite world; it works for a Dunbar's-number world. And we all like our pseudo-anonymous soapboxes (I'm standing on one right now), but the trick will be to realize that the glitter of infinite quantity isn't the same as small-scale connection.
But I wonder if there's a size of conversation after which people will still choose AI-assisted summaries. Discord had (has?) a feature that used LLMs to summarize a discussion and then notify you about it.
I actually think it’s more about getting people off browsers and other tracking software.
50% of US teenagers describe themselves as terminally online.
Go any place where people work and have time to goof, and you'll see them online.
Go to a bar/club, you see people with a phone in front of their face.
The idea there is an online and offline is crumbling further every day. Cameras are small, bandwidth is high in relation to our compression algorithms. Anything happening in the world can be broadcast live. More and more types of machines are coming online that accept digital instructions that make things happen in real life.
Furthermore it's an odd rejection of the printing press on your part. That methods of information exchange affect the real world around them. If the book brought about the industrial revolution, what does an always available global communications network bring?
At least based on your writing here on HN it seems like you're probably an introvert, or at least a person that likes quiet pondering and reflections. Reading a book would be far more interesting than most online activities, right? If I'm right and that is the case, then you may be missing just how many people are horrifically addicted to being on social media all the time.
A year or so ago I took an Uber and was mesmerized by the driver. He had his phone mounted up on the left and was pretty constantly interacting with it. Checking for new rides, watching a video, checking Facebook. It was quite impressive how much content he consumed while at a red light, and how dexterously he navigated to and through something like 10 different apps.
I very much got the feeling that this was a person who was terminally online, and I suspected that he's not alone. A bit alienating, really: living in the same country, speaking the same language, but realizing there's this huge cultural/behavioral divide between us.
> buying things was hard
This one is not a problem anymore.
There comes some hypothetical point where technology has advanced so much that anyone has the power to destroy the world.
That's simple to do in phpBB. Using ranks, one can let new accounts post while hiding their messages from everyone until a moderator verifies the account. For small groups and semi-private (invite-only) forums this is fairly easy to manage. Spammers and grifters influence nobody; only cranky old bastards like me see the message. There are other ways to keep bots off a tiny site, but that's a longer topic. Even better, one can send a header to redirect those using the Tor Browser to the Tor link, and when states come along and demand some third-party process, one simply disables the clear-web access. More friction, less data leakage, and no corporate capture. This also filters out the people who can't handle an extra step to access the site, and lazy governments that need money trails.
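The hold-until-verified mechanic described here is simple enough to sketch. This is not actual phpBB code, just a minimal Python model of the idea: posts from unverified accounts go into a queue visible only to moderators, and get released when a moderator approves the account.

```python
class Forum:
    def __init__(self):
        self.verified = set()   # accounts a moderator has approved
        self.queue = []         # held posts, visible only to moderators
        self.board = []         # posts everyone can see

    def post(self, user, text):
        if user in self.verified:
            self.board.append((user, text))
        else:
            # new/unverified accounts: spammers influence nobody
            self.queue.append((user, text))

    def approve(self, user):
        """Moderator verifies an account and releases its held posts."""
        self.verified.add(user)
        released = [(u, t) for (u, t) in self.queue if u == user]
        self.queue = [(u, t) for (u, t) in self.queue if u != user]
        self.board.extend(released)

f = Forum()
f.post("newbie", "hello")       # held, not public yet
f.post("spammer", "buy pills")  # held, never gets approved
f.approve("newbie")             # moderator vouches for the account
```

The nice property is that the cost falls entirely on the moderator's side, and only once per account rather than once per post.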
the individual user is now priced out and cannot speak candidly and anonymously, while large, wealthy orgs simply price that into their market-capture and consensus-building techniques
Dead internet is the prequel to dead world, let's seize the opportunity to learn how to coexist with synthetics and develop the code that will make life with a higher intelligence species possible on Earth. And remember, we humans vary widely, and just like there are people happy to share LinkedIn slop today, there will be humans gladly living surrounded exclusively by overpowering synthetics. So lower your expectations for universal solutions and focus on niche.
Many years ago I left a small town and moved to a big city for this exact reason.
There, sadly, needs to be some gatekeeping and then it can work.
For example, I've been a member of a petrolhead forum for years where it works like this: it's for a fancy car brand with lots of "tifosi" (and you don't necessarily want all those would-be owners on the forum). To be part of the forum you must be introduced by other members who have met you in real life and who confirm that you showed up with a car of that brand.
If you're not a "confirmed owner", you can only access the forum in read-only mode.
It's not 100% foolproof but it does greatly raise the bar.
It's international too: people do travel, and they organize meetups / see each other at cars and coffee, etc.
Or take a real extreme, maybe the most expensive social network: the Bloomberg terminal. People/companies paying $30K or so per seat per year probably aren't going to let employees hook an LLM up to chat for them and risk screwing their reputation. Although, I take it, you never know.
It is the way it is but gatekeeping does exist and it does work.
The bots exist for a reason, usually to covertly advertise a product, and by themselves already cost money to run. Someone looking to astroturf their AI B2B SaaS would probably be more willing to pay $10 to post than a random user from a less wealthy country who just wants to leave a comment on an interesting discussion.
centralized and decentralized would include almost any service. your comment is so vague and ambiguous as to be meaningless. (that's a hallmark of LLM output. are you a bot?)
it was easier to find authoritative answers 20-30 years ago. google and, before that, altavista and yahoo, were quite good at directing queries to things like university-run information sites or legitimate, curated commercial sites. for the last decade the first google page has been crammed with useless SEO optimized fluff.
as for shopping, that was the first dotcom boom. what really took it mainstream was covid. not centralized or decentralized collaborative nonsense.
Yes, they are disincentivized to get rid of bots.
I understand that some may feel we are losing something, by not being able to go onto a website and anonymously talk to 1000s of other anonymous people we do not know, but I do not think that has actually been a net positive and this bot issue demonstrates the issue quite well: if you do not know who you are talking to, you do not know if they are telling the truth, or if they are someone you should even listen to at all, and now they might not even be human. So why do it? I would rather talk to my friends, people I've met in meatspace or over voice chat in a game, people who I can vouch for and that I know I can respect and trust.
Let's build small communities of real friends who recognize each other and spend time with them on the internet, in that way the internet will never die.
What we're missing is a way to have cryptographically secure pseudonymity: you log in to a website, you don't give any information whatsoever, but you cannot make two different accounts.
Applied ZKPs are being actively worked on in the blockchain sphere.
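The "one account per person, zero information revealed" property usually rests on what applied-ZKP schemes call a nullifier. Below is a toy Python illustration of just the nullifier bookkeeping - it is NOT a zero-knowledge proof (a real system would attach a ZK proof that the secret belongs to a registered identity set); the names and hashing scheme are mine for illustration.

```python
import hashlib

def nullifier(secret: str, site: str) -> str:
    """Derive a per-site tag from a private credential.

    The same secret always yields the same tag on one site (so a second
    account is detectable), but tags across sites are unlinkable.
    """
    return hashlib.sha256(f"{secret}|{site}".encode()).hexdigest()

class Site:
    def __init__(self, name):
        self.name = name
        self.seen = set()  # nullifiers already used for registration

    def register(self, tag: str) -> bool:
        if tag in self.seen:
            return False   # same person trying a second account
        self.seen.add(tag)
        return True

hn = Site("news.example")
alice_tag = nullifier("alice-secret", hn.name)
first = hn.register(alice_tag)    # accepted
second = hn.register(alice_tag)   # duplicate, rejected
# The same secret on a different site produces an unrelated tag:
other_tag = nullifier("alice-secret", "forum.example")
```

The hard part, of course, is the piece this sketch hand-waves: proving the secret corresponds to a real, unique human without revealing which one.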
Bittorrent trackers, as absolutely retarded as they are, have performed this experiment for us, and the lesson we're supposed to learn is that it does not work. Someone, somewhere, eventually has an incentive to invite the wrong sort, which, because of the social-network graph math, eventually means "soon". Once that happens, that bot will invite 10 trillion other bots.
All it takes is one invited user to open the door to bots.
Could we just add complex and varied captcha to the comment & posting forms?
History also shows you can take a $10 fee and maintain quality on SomethingAwful for quite some time.
A good example is this: car companies don't make cars for the most part, they make loans. Financial companies first, car companies second.
Consolidation, collusion, and rent-seeking behaviors by companies are going out of control too. The fact AI companies can do what they are doing has much to do with the previous brick and mortar businesses weakening any business regulations down to nothing.
So, most posts on social media aren't real.
Most user posts on non-social media are spam/not real.
Most websites in searches are copies/ad spam.
So yea, dead internet reality.
Hence you'll end up with defectors getting paid to siphon off all the conversations to some ad companies that will work on tying them with real world identities and then serving them more detailed ads in the places they cannot avoid interfacing with the open internet.
Invite only, very exclusionary. Private club with public posting? Worst of both worlds.
And how do you create this without it being overrun by bots, spam, and people posting gargantuan amounts of porn?
>allow people to control their own databases
There are two types of people that want to control databases. 1: The freedom seeking type who want information sovereignty. 2: The type of people that want to hoover up as much data as possible for money and power.
Guess who has more ability to control the world out of those two.
Lastly, most people want to use curated websites free of spam and content they don't want. Almost nobody wants to do that curation themselves. Hence curated platforms will attract the most people via network effects.
ah shoot, that wasn't lastly...
> getting people off browsers
and putting them on what exactly? Phone apps? That's not better at all. Multimedia attracts people like flies to poop. It's seemingly a natural human response to move to an application that is more visually interesting, regardless of its security or safety.
But plenty has worked on a smaller scale. Raph Levien's Advogato worked fine.
There's also a reason most new social networks start up as invite only - it works great for cutting down on spam accounts. But once they pivot to prioritizing growth at all costs, it goes out the window.
There's just the pesky problem of incentives on the other side of the coin - who gets the money? The spammee? But there would be enshittification issues like:
1. Those who are incentivized to take as big a cut as possible.
2. Those who would put it in their EULA that you must accept their spam and not chargeback or else you lose access to something you value like their services (EULA Ransom... not much different to today "accept our EULA or lose access to what you've already paid for!")
I'm sure there are many other perverse incentives that would creep in.
Invite only. You get a number of invites per year etc. And once a year an open door or so
"my2cents"
$0.02 to post or send a message
I get that this is true from a certain point of view. But car companies clearly compete in a very healthy way on features and quality.
In fact, cars are a great example of a market where the companies clearly care about making the product, and the competition between them has driven those products to incredible heights. Cars these days are vastly better than they were in the past.
I recently invited a job applicant to a first-round interview. Their CV looked promising and my AI slop detection didn’t go off. But then I got this reply:

This made me realize that the dead Internet arrived faster than expected. A few other purely qualitative examples confirmed the feeling.
HN now restricts Show HN posts from new accounts after an influx of vibe-coded, low-quality Show HN submissions.

Coincidentally as I’m writing this, HN also just updated their guidelines with the following rule:
Don’t post generated comments or AI-edited comments. HN is for conversation between humans.
When I revisited an old Reddit post about a side project of mine, I found bots clearly astroturfing a SaaS product in the comments. These profiles hide their comments on their account pages, but it's easy to find hundreds of similar comments.

On the rare occasion I open LinkedIn, my timeline is mostly AI-generated slop among very few actually interesting professional updates.

And of course let’s not forget AI spamming OSS repos with nonsensical PRs. What’s even funnier is when the reviewer turns out to be AI too.

Can we go back to an internet like this? I guess we can’t.
And 10 minutes later, Texas demands you verify the age of every user because someone posted a porn image somewhere. Facebook will gleefully laugh all the way to the court, saying we need such an internet ID, using it to entrench themselves.
>, in that way the internet will never die.
You mean in the exact way the internet used to be... then died?
I'm guessing you're Gen X or a Xennial; it's how we think. Relationships and friendships are hard things to acquire and keep, and you have to work at it, otherwise friends disappear. The thing is, the younger generations mostly don't think that way. They have mostly always lived in a world where connections are cheap and easy to maintain. Attempting to move to a system that is more difficult will be very hard for them.
Even if it's some kind of government-issued key, governments can't be trusted not to create imaginary people and hand them out to companies like Palantir for large-scale population manipulation.
Unlike most public trackers which are either dead or on a life-support, member-only and invite-only sites are still kicking.
And you are personally responsible for your invitee
When AI can post a million times a day the internet is FUBAR.
By not creating public networks out of it.
The only database people want to control is their personal information and who they communicate with and when. So we should enable low barrier to entry to communicate.
I cannot solve for the social media side of it, but we can enable people to at least have low friction when getting online. Somewhere to turn to that is not a data harvesting service.
The displacement effort has to come from those that believe in those freedoms. It’s not easy, maybe impossible in some circumstances but this status quo right now cannot be it, in my opinion.
In any case, it is still better than the status quo where even foreign authoritarian states can do that in countries where the local government wouldn't.
That doesn't make it wrong, it just might make the last 20 years a mistake.
We're scratching our heads wondering why there's no forward motion when it's simply that no one is pushing it.
And check out these books: "Superbloom: How Technologies of Connection Tear Us Apart" and "No Sense of Place". Maybe they'll help you see the overall effects of the internet (and other communication mediums) and drop the simplistic view that a lot of programmers have. The nature of the communication medium doesn't just affect the message; it shapes everything in society. Ignoring that because you had a good experience here and there won't change anything.
I am also part of some very niche communities on the internet, and although they are small they are certainly thriving.
Assuming the money isn't wasted and is actually used to fund moderation, $10 is probably comfortably above the cost to detect and ban most malicious users.
Also, if the bots are smart, they'll add real people too and take them down with them.
TLDR: Mail storage is the sender's responsibility. The message isn't copied to the receiver. All the receiver needs is a brief notification that a message is available.
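The sender-stores model in that TLDR can be sketched in a few lines. This is a toy Python illustration under the stated assumptions (sender keeps the body, receiver only gets a small notification and pulls on demand); the class and method names are hypothetical, not from any real protocol.

```python
class Sender:
    def __init__(self):
        self.outbox = {}   # message_id -> body, stored on the SENDER's side
        self.next_id = 0

    def send(self, receiver, body):
        mid = self.next_id
        self.next_id += 1
        self.outbox[mid] = body
        receiver.notify(self, mid)   # tiny notification, not the body

    def fetch(self, mid):
        return self.outbox[mid]      # receiver pulls the body on demand

class Receiver:
    def __init__(self):
        self.notifications = []      # (sender, message_id) pairs only

    def notify(self, sender, mid):
        self.notifications.append((sender, mid))

    def read_all(self):
        # only now is the body transferred, at the receiver's request
        return [s.fetch(mid) for (s, mid) in self.notifications]

a, b = Sender(), Receiver()
a.send(b, "hello")
```

One appealing consequence: a spammer blasting a million messages pays the storage bill for all of them, while each victim holds only a million tiny notifications it can drop unread.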
The few I'm part of all have a real community (like in the net of old), civil conversation, and verified, quality materials being shared. Almost everybody behaves and doesn't abuse the invite system, because nobody wants to lose their access to such a wonderful oasis among the slop web. It's a great motivator to stay decent and follow the rules. When things go bad, it's usually not because of malice, but because someone got their account stolen. Prune the invitee tree and things are mostly under control again.
At the end of the day there is no real penalty for being a bad actor on the internet. They get unlimited retries on spamming and otherwise causing problems. In many ways this helps Google entrench itself as the search/ad company. No one else has the money or compute resources to continuously update the internet. Furthermore they have told us it's their job to shove unskippable ads in our faces. They'll gladly let the public internet die in the future if they can push out their own version of "SafeInternet by Google/now with more ads!".
That's some of the boldest optimism I think I've seen in a while. Maybe your blog is more popular than I assume, but still.
They haven't added or really changed anything since the acquisition AFAICT, it's just trucking along exactly as it was the day Zoom bought them out. Twitter account proofs were broken by the API changes years ago and nobody is at the wheel to fix or even just deprecate them.
Did you miss “Zoom”?
Entire trees of invitees, going back months and years, are pruned. Mercilessly, indiscriminately, and self-servingly for the few people privileged enough that they are above suspicion. And if you're unlucky to be on the wrong side of it, there's nothing like an appeals process.
>and doesn't abuse the invite system
That's wild.
>When things go bad, it's usually not because of malice,
I never said it was malice. It's because the system itself is pathologically flawed and there's no way to make it work.
Your tree could for instance be pruned - you can still invite people, but the people you invited can no longer invite people.
There are not a lot of sites which have tried this and failed. Those which have tried to be even a little bit clever about it, have succeeded pretty well (Advogato was a really early example).
What there have been, are sites which rejected such restrictions after a while, because they would rather have a big number to show to investors than real people. Many have even run the fake accounts themselves (e.g. Reddit).
There are large swaths of spammers that indeed would not pay it. On the other hand, there are plenty of NGOs that would pay it without a second thought to promote specific topics and dogpile on others. Those are the movements I would expect AI to take over, if it hasn't already. AI does not sleep; humans do. AI won't miss the comments that groups believe need to be amplified or squelched.
And indeed, it is to be expected that some countries be banned from most of the internet, or at least get a read-only version of it, because their digital credentials aren't deemed trustworthy enough. Not unlike how the travel visa system works nowadays.
Really?
https://www.cnbc.com/2026/03/08/social-media-child-safety-in...
https://en.wikipedia.org/wiki/Social_media_age_verification_...
Please wake up, won't be long before someone fires off a lawsuit at HN and we'll have to give identification here.
You've (unironically?) restated the crux of the Dead Internet Theory.
https://en.wikipedia.org/wiki/Dead_Internet_theory
Authentic human activity has been completely overwhelmed by bots and slop. Discerning signal from noise becomes too burdensome to bother with.
Of course the physical medium continues to exist.
Of course there are still humans, such as yourself, producing free content, to be harvested and regurgitated by parasites.
But authentic human activity is increasingly going out of band, no longer discoverable. Whatsapp, discord, private groups. Exactly as the theory predicted.
How far up the tree do you kick? Going too far up means malicious people can "sabotage": botting to get a huge swath of legitimate users banned.
Going too shallow means I just need to create N+1 degrees of separation between myself and my bot accounts.
I do not know what "move the needle" means or why you think I am trying to do that. Your excessive negativity and pessimism is unwarranted and I dislike it. Honestly between you and that other guy replying to my comments with seemingly thinly veiled vitriol for my perspective, it's just further proof of my point that being able to communicate with large groups of anonymous people is typically a net negative. Most anonymous people seem to be quite nasty. I'd rather write on my blog where no one like you will see it, and if you do see it, you likely won't go out of your way to send me an email with your negative comments because it's likely you do this for public attention.
Tbh, for niche hobbies even one new visitor a month is a win, if they actually read the article and don't just skim it. An eager, enthusiastic reader is a prize not easily won on the internet. Having even one per month would mean you personally taught something to a classroom of peers in a meager 2 years. Blog posts can easily live ten times that long.
For people who spend most of their time on the small internet, sites like that are essential, because they work on another level. You know you're engaging with someone who has a passion for the same things you do and took the time to polish their words. You know you can reach out for help and be kindly greeted.
These are the parts of the internet that are so boring to anyone else that they are totally safe from spam and ads. That doesn't scale, and can never scale; if anything like that became popular, a massive slopfest would follow and the slop would be sold instead of the original.
And yet those boring places – boring for everyone not interested enough – are there, and people have a way to reach each other and talk about shared interests. The internet isn't dead for nerds.