I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slowly. The incentives are all backward.
Moderation was really hard. We didn't have AI posters, but there were persistent posters who were extremely annoying (mostly in their post volume and long-windedness) while still following the rules. I was really trying a hands-off approach with moderation, and it seemed to be working for the most part. It's all moot now though.
> We're not giving up. Digg isn't going away.
Post title is misleading.
This, 1000x.
I don’t understand what kind of shenanigans transpired. But it seems there’s more to it than “bots”
If it truly is bots, maybe a private invite only social network is the way to go.
Thanks for the fun this past year Digg.
i really enjoyed the new digg
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
There was a lot in the new Digg that I was concerned about, or at least not optimistic about, but come on - are we even going to try anymore?
Digg.com Is Back - https://news.ycombinator.com/item?id=46671181 - Jan 2026 (10 comments)
Digg.com relaunch public beta is live - https://news.ycombinator.com/item?id=46623390 - Jan 2026 (18 comments)
Digg.com (Relaunch) - https://news.ycombinator.com/item?id=46524806 - Jan 2026 (3 comments)
Digg.com is back - https://news.ycombinator.com/item?id=44963430 - Aug 2025 (204 comments)
Digg is trying to come back from the dead with a reboot - https://news.ycombinator.com/item?id=43812384 - April 2025 (0 comments)
Now it's gone, again. Without a heads-up or a way to get a backup out of it, it seems like. Can't say I am a fan of that.
The original Digg excepted, Kevin Rose's attention span is extremely limited. He will give something ~3-4 months of attention before (apparently) getting bored and wanting to move on to something else.
Up until that point, he will be an unrelenting hype man of whatever his attention is lasered on at that moment.
Then the hype posts start to drift. They show up once every few days, then once a week, then stop entirely. Any criticism or skepticism is considered a buzz kill in the cloud of good vibes only.
A few months later, a dramatic explainer post arrives (underestimating the cold start problem? Really??), outlining why the idea didn't work and why the next one will be better, for sure, for real.
This (AI generated) note from the current CEO paints an optimistic picture, but the most likely outcome will be that Digg simply doesn't launch. It's sustained on the nostalgic vapors of the old guard, not renewed by a replenished sense of purpose or connection.
I'd say I'd love to be proven wrong, but I personally question the utility of a Web 2.0 social network phoenixing itself. We have endured a decade+ of originality being buffed out of web products, most now resembling variations of Bootstrap and shadcn in service of dev convenience and getting rich quicker.
Surely in the age of vibe coding, we can afford to take creative risks again, and think of something new.
Am I completely off base or did they use AI to write the post complaining about AI?
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
We really need some way to "verify as human" in the coming years.
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
Ironic, they use AI in their shutdown post that blames AI.
I think the HN title needs adjusting
The only website which became totally useless for me after the general availability of LLMs is OkCupid. It's indeed dead. The rest are fine.
What am I doing differently compared to everyone else?
I'm regularly using: telegram, whatsapp, wechat, hackernews, lobsters, reddit, opennet.ru, vk.com, pornhub, youtube, odysee, libera.chat, arxiv, gmail, github, gitlab, sourcehut, codeberg, thepiratebay, rutracker, Anna's archive, xda-developers.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
Digg isn't just here again. It's gone again.
The LLM style is like nails down a blackboard, are people blind to it or do they just not even read the stuff they're posting?
It was fine, people talked about work, personal stuff, travel, until one person posted about their disappointment that their state was limiting various services or rights to gay people. For them this meant their rights were in question and they were understandably upset.
Immediately some folks cried politics and that they shouldn’t post about that sort of thing.
To the user posting it it was about their life…
I don’t think “no politics” rules really make much sense. For some people it’s more than politics, and IMO the fact that a topic is touched by politicians or government shouldn’t make it disallowed.
Basically incentivizing those who feel strongly about things to just pay up to talk about them in an exclusive area, which also keeps the site ad-free. Been apparently working for 25 years.
You thinking that astroturfing only happens for US politics is dangerously naive.
2/ Spammer can hire real people to farm accounts
I think this idea might work if we
- create reputation graph, where valuable contributors vote for others and spread reputation
- users can fine-tune their reputation graph, so instead of "one for all", user can have his personal customized graph (pick 30 authorities and we will rebuild graph from there)
https://news.ycombinator.com/item?id=39046023
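The "pick 30 authorities and rebuild the graph from there" idea resembles personalized PageRank. A minimal sketch, assuming a simple endorsement graph (all names and numbers are illustrative): reputation starts at your hand-picked seeds and flows along votes, so accounts unreachable from your authorities score zero.

```python
def reputation(votes, seeds, damping=0.85, iters=50):
    """Personalized-PageRank-style reputation: `votes` maps each user to
    the users they endorse; mass originates at `seeds` and spreads along
    endorsement edges."""
    users = set(votes) | {v for vs in votes.values() for v in vs}
    rep = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in users}
    for _ in range(iters):
        # Teleportation mass goes back to the seed set, not to everyone.
        nxt = {u: ((1 - damping) / len(seeds) if u in seeds else 0.0)
               for u in users}
        for voter, endorsed in votes.items():
            mass = damping * rep[voter]
            if endorsed:
                for e in endorsed:
                    nxt[e] += mass / len(endorsed)
            else:
                # Dangling voter: return their mass to the seeds.
                for s in seeds:
                    nxt[s] += mass / len(seeds)
        rep = nxt
    return rep

# A spam ring endorsing itself gets nothing, because no mass
# reaches it from the chosen authority ("alice").
votes = {"alice": ["bob"], "bob": ["carol"], "carol": [],
         "spammer": ["spammer2"], "spammer2": ["spammer"]}
rep = reputation(votes, seeds=["alice"])
```

The key property for the "personal customized graph" part is that each user can run this with their own seed set, getting their own view of who is trustworthy.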
Apparently the reason why their articles were interesting was because... they copied all of their content from DamnInteresting. Once they were called out they stopped, and the quality went downhill.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum that is focused on aquariums owners in the midwest.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
YouTube comment sections are botted.
IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with some neutral default policy, and users could then propose changes to the policy and democratically vote on those changes.
Are you sure? My understanding is that accounts were only allowed to create two communities.
This happens now on Onlyfans too. Content creators hire agencies which in the best case outsource chatting to "customers" to armies of cheap labour in Asia, and the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
Which, in fact, would open up the same rat race with determining which accounts are real and so forth.
Not disagreeing with you, just circling around this same problem. Feels like the world still isn't ready yet.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
That which would make a vote valid can (and will) be gamed.
That limit wouldn't stop you creating more communities with more accounts anyway.
Creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillion tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel or extended cognition wheel of planetary size. LLMs can reflect and detect which of their responses compound better in downstream activities and derive RLHF-RLVR signalling from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement speciality forum.
Moonbirds
Digg
Too comfortable with money in the bank to give full attention to a new venture.
I'm done falling for the Kevin Rose hype train. Long time fan but this is just pathetic.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
The vast majority of people do not want to get on a forum to escape their life only to see ever more (or worse) content about their daily lives.
You're right, there needs to be some outlet, but when people propose this it's because they are sick and tired of politics, and the injection of them into everything is not helping those politics, it just makes it worse.
Tons of people aren't political creatures and want nothing to do with politicians. This notion that more politics will fix things isn't borne out by Reddit, X, the US Congress, Brexit, etc. It's too easy to divide and manipulate people.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
https://www.businessinsider.com/reddit-ceo-platform-most-hum...
I'm on plenty of niche interest boards built on PHPbb, Xenforo and Discourse. Chronologically ordered discussions, RSS support, no algorithmic "For You" bullshit.
Build it and they will come.
> We know how frustrating this is, and we hope you'll give us another look once we have something to show, we’ll save your usernames!
I think it's partly human. But ex:
> Network effects aren't just a moat, they're a wall.
isn't a natural sentence.
No you can't visit.
Also, honestly, with AI/LLMs now, do we even need human moderators anywhere anymore?
It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.
In this setup having users elect the moderator leads to cases where small groups create their special interest group and then some trolls challenge the moderator.
There may be some oversight on the large sub forums, but not all.
Perhaps not the worst thing in the world?
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
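A toy sketch of how such persistent pseudonymous identifiers could be derived (the provider key and derivation scheme are my assumptions, not a real credential standard): an identity provider keys a hash on the verified identity plus the service name, so the same person always maps to the same pseudonym on a given service (bans stick), while pseudonyms can't be linked across services.

```python
import hashlib
import hmac

# Hypothetical identity provider secret; in a real scheme this would
# live in an HSM, and a proper system would use blind signatures or
# zero-knowledge proofs rather than a trusted keyed hash.
PROVIDER_KEY = b"provider-secret"

def pseudonym(verified_user_id: str, service_id: str) -> str:
    """Stable per-service pseudonym for a verified real-world identity."""
    msg = f"{verified_user_id}|{service_id}".encode()
    return hmac.new(PROVIDER_KEY, msg, hashlib.sha256).hexdigest()[:16]

digg_id = pseudonym("passport-123", "digg")
```

The service only ever sees `digg_id`, never the underlying identity, yet a ban on `digg_id` is a ban on the person.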
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple comments, etc... But if you want to build a troll factory, sure... Show us the cash?
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
Two months, according to The Verge.
https://www.theverge.com/tech/894803/digg-beta-shutdown-layo...
This is particularly embarrassing since from what I recall they were all in on AI with the new website, so to shut it down so fast because of it…
(context so people don't have to click links)
Damn, that didn't take long at all...
They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".
This smacks of flailing leadership and zero respect for their target user demographic.
Next time try doing it in a way that you control.
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After lock time, users who posted the link get "paid" a % of the $ collected from comments/upvotes. Comments that are upvoted also earn $ proportionally to the upvotes.
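A rough sketch of how the payout at lock time could be computed (the upvote price and the poster's cut are illustrative assumptions; the scheme above doesn't specify them):

```python
def settle(comment_fees, upvotes, upvote_price=0.10, poster_cut=0.3):
    """Split the pot collected on a locked link between the link's
    poster and its commenters, proportionally to upvotes received.

    comment_fees: {commenter: fee paid to comment}
    upvotes:      {commenter: number of upvotes their comments got}
    """
    pot = sum(comment_fees.values()) + sum(upvotes.values()) * upvote_price
    poster_payout = pot * poster_cut
    remainder = pot - poster_payout
    total_votes = sum(upvotes.values()) or 1  # avoid division by zero
    commenter_payouts = {c: remainder * n / total_votes
                         for c, n in upvotes.items()}
    return poster_payout, commenter_payouts

poster_payout, commenter_payouts = settle(
    {"alice": 1.0, "bob": 1.0},   # each paid $1 to comment
    {"alice": 3, "bob": 1})       # upvotes at $0.10 each
```

Note that downvotes don't exist here, matching the scheme: money only ever flows toward content, never against it.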
Hashcash was conceived to solve automated spam/email. Participating in a discussion must cost something, that's the only way bots and spam will get partially stopped. Or, if they start to optimize to get "the most votes", then so be it, their content will increase in quality.
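The hashcash idea is simple enough to sketch (the difficulty parameter below is illustrative): the sender burns CPU finding a nonce whose hash meets a target, while the receiver verifies the stamp with a single hash.

```python
import hashlib
from itertools import count

def mint(message: str, digits: int = 4) -> int:
    """Find a nonce so SHA-256(message:nonce) starts with `digits`
    zero hex characters -- costly for the sender."""
    target = "0" * digits
    for nonce in count():
        h = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if h.startswith(target):
            return nonce

def verify(message: str, nonce: int, digits: int = 4) -> bool:
    """Check the stamp with a single hash -- cheap for the receiver."""
    h = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return h.startswith("0" * digits)

stamp = mint("first post!", digits=3)
```

Each extra hex digit of difficulty multiplies the sender's expected work by 16, which is exactly the asymmetry that makes bulk spam uneconomical while leaving a single honest post cheap.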
We're not giving up. Digg isn't going away.
A small but determined team is stepping up to rebuild with a completely reimagined angle of attack. Positioning Digg as simply an alternative to incumbents wasn't imaginative enough. That's a race we were never going to win. What comes next needs to be genuinely different.
We're also announcing something we're excited about: Kevin Rose, Digg's founder who started the company back in 2004, is returning to join the team full-time. Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago. He'll continue as an advisor to True Ventures, but Digg will be his primary focus. We couldn't think of a better person to help figure out what Digg needs to become.
Lastly, Diggnation, our official Digg podcast, will continue recording monthly while we work on the re-reboot.
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, hacking of PII are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
Twitter is full of blue checks that are just bots and automated reply guys.
I'm now treating all these bots as a stressor on our defense systems, and we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good Zero Knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.
There are myriad reasons why humans are practically poorer for these tools.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
(Where do you think AI picked up its writing habits from?)
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
Their plan is to make the internet what it was 22 years ago.
Example: https://0x0.st/8RmU.png
In the same way people want to be fit.
There are 3 horsemen of Internet forums; one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate as a truly informed take, and often less reading time.
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
If this were to exist today, I know I would be incredibly critical of it.
Could it be generated? Sure. But there aren't the obvious tells you act like there are.
Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.
Edit: to be clear, I'm more concerned about how Russia was basically banned from the site, but worldnews itself seems like the primary fountain of Western astroturfing on the internet. No matter your opinion of Putin, that is extremely unhealthy for productive discourse. I don't care about American domestic politics.
For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
You'd have to weight votes by some kind of participation metric to solve the problem of very little authentication of the voters.
Bots get so good that they become indistinguishable from humans. If that’s true then it doesn’t actually matter if your community is all bots. But it does matter because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in-person.
Human simulacrums will one day cause a repeat of this issue. Then we’ll have a whole Blade Runner 2049 issue about what exactly is authenticity?
Definitely not. “Terminally online” is as deleterious as it sounds.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
Also, for me the problem is not the anonymity itself, but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
I'm sure it's impossible, but what if it's not?
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
People will prefer the bots that give them head pats and tell them they're so smart and that they love them
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from credit scores over 650, they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
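A toy sketch of that destructive-check property (the token format and the booth API are my assumptions): the issuer remembers unspent tokens, and any validity check is itself a redemption, so a token cannot be probed without being burned.

```python
import secrets

class TokenBooth:
    """Hypothetical token issuer: hands out unlinkable one-time tokens
    and consumes them on first use."""

    def __init__(self):
        self._unspent = set()

    def issue(self) -> str:
        # An unguessable random token; nothing about it identifies
        # the buyer, only that someone paid at the machine.
        token = secrets.token_hex(16)
        self._unspent.add(token)
        return token

    def redeem(self, token: str) -> bool:
        # Destructive check: a valid token is spent by the act of
        # checking it, so there is no non-destructive status query.
        if token in self._unspent:
            self._unspent.remove(token)
            return True
        return False

booth = TokenBooth()
ticket = booth.issue()
```

A real deployment would want blind signatures so even the issuer can't link a redeemed token back to a purchase, but the spend-once semantics are the part that curbs misuse.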
You can barely comment before you are rate limited.
You can’t upvote until you’ve been around a pretty long time.
New accounts are given a green badge of dishonor that makes users scrutinize their comments more.
I’m not saying these are bad things but they’re probably too restrictive for a social media network that’s just meant to be a good fun time.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and people still debate with outdated arguments in them. People come in and fight other demons, make straw-man arguments and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking and "haha not perfect means doa" type of messages. That makes me want to participate less...
The only sustained business I'm aware of is Hodinkee.
I use mander.xyz, it's science focused, but they also have a policy of only de-federating instances that host CSAM.
If you're telling me it's _worse_ than reddit in this regard, I can only imagine how terrible it is.
Especially considering that the bigger stop-gap seems to be what we already have:
In Asia (especially Japan) it's host(ess) clubs.
Globally for friends it's influencers exploiting loneliness.
Those are things I think have to go for people to embrace offline socialization or using their online time better.
How do you figure? If these bots are driven by commercial interests that seems an unlikely outcome.
From what I can tell Watchville was abandoned a few years ago.
https://aaltodoc.aalto.fi/server/api/core/bitstreams/4176474...
Every election I see internet-connected gym machines have the leaderboards spammed with right wing messages because some people don’t have to work and just spin all day.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful of never using any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
The vast majority of sub forums however are more targeted and smaller to begin with.
If it were YouTube, "YouTuber" is a start, but you could also be a "YouTube science communicator" or something
Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).
Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.
You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.
This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
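The scheme above could be sketched like this (the class names and the receive/revoke API are my assumptions; a real version would sit in the mail server's policy layer):

```python
TOLL_PENCE = 50  # one-time fee for first unsolicited contact

class Inbox:
    """Toy model of toll-gated email: first contact costs a toll,
    subsequent mail is free, and the recipient can revoke a sender,
    forcing a fresh toll payment."""

    def __init__(self):
        self.authorized = set()

    def receive(self, sender: str, paid_pence: int = 0) -> bool:
        if sender in self.authorized:
            return True            # previously tolled or pre-authorised
        if paid_pence >= TOLL_PENCE:
            self.authorized.add(sender)
            return True
        return False               # unsolicited and unpaid: rejected

    def revoke(self, sender: str) -> None:
        # Kicks the sender back to paying the toll for their next mail.
        self.authorized.discard(sender)

inbox = Inbox()
```

Pre-authorising friends, family, and transactional senders is just seeding `authorized` up front; the economics only bite senders the recipient never asked to hear from.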
I got encouraged by another HN poster a few days ago, let me know if you have any suggestions.
I’m always open to criticism.
I think communities like Reddit and Digg grow to a certain point and don’t grow anymore because everybody else absolutely hates what those communities have become. See the fight years ago where Digg thought it had to outgrow MrBabyMan. Problem is platforms don’t usually win those fights.
Sure, today’s redditors love sharing stupid image memes. For each of them there are 20 people who wouldn’t touch Reddit with a 10-foot pole.
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.
So let's break it down - "underestimated the gravitational pull" - ok, this is nice, I like where it's going, talking about these big competitors sucking in users, but then the metaphor gets extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really not anything like a moat). So which of these 3 things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moat) and walls (walled gardens).
It's just all a bit nonsensical - the kind of fuzzy prose LLMs excel at, superficially impressive without actually saying anything meaningful. Go try generating an article from just the headings in this article and see how similarly it reads.
You've identified a problem that unrelated systems also have. Like banks and identity theft. This solution isn't responsible for causing that problem.
"How will the AI be detected? By another AI?"
However a platform likes to. Let the best platform win.
> Wikipedia of all places already employed IP address matching to link sockpuppet accounts
That’s… well, that’s just not how TCP/IP works. Your phone number has nothing to do with your device's IP…
But what do you call their output?
What do you call an illustrator's output? A photographer? What about when all of that shows up on your feed collectively?
Content is a gross word.
These are one of the main culprits of unwanted emails... and a toll system would make them all the more valuable for the even worse actors to take advantage of.
You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy. When in truth it's just been karma farming.
HN does limit some of it, but it's not a panacea.
The point being made is that communities maintain high signal-to-noise ratios by adding effort filters.
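"Effort filter" here just means cheap checks that cost a real user almost nothing but add up for bots and karma farmers. A hypothetical sketch (the thresholds and rules are invented for illustration, not any real platform's policy):

```python
def effort_filter(post, account_age_days, last_post_ts, now_ts,
                  min_chars=200, min_age_days=7, cooldown_s=600):
    """Hypothetical effort filter: minimum length, minimum account age,
    and a per-account cooldown between posts."""
    if len(post) < min_chars:
        return "rejected: too short"
    if account_age_days < min_age_days:
        return "rejected: account too new"
    if now_ts - last_post_ts < cooldown_s:
        return "rejected: posting too fast"
    return "accepted"

print(effort_filter("x" * 300, account_age_days=30,
                    last_post_ts=0, now_ts=10_000))  # accepted
```

None of these checks is hard to beat individually; the point is that stacking them raises the marginal cost of low-effort volume posting while barely touching someone writing one thoughtful comment.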
Compare to the canonical example from Cyrano de Bergerac: ''Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsula!'
Also, weren't all "moats" commonly paired with a wall in real life? As in a moat around a castle wall?
Even HN is only quasi-free speech, there are rules that will get one censored.
If you love freedom, there are mailing lists and other platforms, but they aren't as high on dopamine and the audience gets a little more sketchy.
> Failed sending verification e-mail to XXX@XXmail.XXX, please contact administrator on stonky@stonkys.com
Their /instances page also only shows a single blocked instance, whereas something like programming.dev shows lots of questionable instances blocked.
Somehow we just gave business owners more freedoms than we gave everyone else....
"Am I making a post which is either funny, informative, or interesting on any level?"
I hate how Reddit mods ban any post they don't like as 'low effort / shit / spam' when the criteria are completely vague.
I don't care so much about Digg, but the endless "haha, I caught you!" comments annoy me more than the rare actual AI-written content they label.
In business metaphors, no, they are used for different things. Also, when you create a metaphor you should stick with it; that's what makes this jarring and weird.
If they wanted to keep it to a single sentence, they could have used a word like "rather" as a separator between moat and wall.
I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.
In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.
I, for one, won't be a part of that, and I hope you won't, either.