Give people the ability to submit a “Show HN” one year in advance. Specifically, the user specifies the title and a short summary, then has to wait at least a year until they can write the remaining description and submit the post. The user can wait more than a year or not submit at all; the delay (and specifying the title/summary beforehand) ensures that only projects that have been worked on for over a year are submittable.
Alternatively, this can be a special category of “Show HN” instead of replacing the main thing.
Show HN: My Project - A description for my vibe coded project [3 weeks]
A lot of the good stuff I see on Show HN is projects that have been worked on for a long time. While I understand that vibe coding is a newer trend, I also know that vibe coded projects are less likely to stand the test of time. With this, we don't have to worry about whether a project is AI assisted or not, nor do we ban it; we just incentivize longer-term projects. If the developer lies about how long they worked on the project, they will get reported and downvoted into oblivion.
Show HN: Clawntown – An Evolving Crustacean Island - https://news.ycombinator.com/item?id=47023255
Where the vibe coders with their slop cannons aren't present, though, is in things that require hard-won domain knowledge, i.e. stuff that requires you to actually create a new idea based on an understanding of actual areas of need.
And that kind of thing probably isn't going to do well on Show HN, because your audience probably isn't on HN.
I was a skeptic last year, and now... not so much. I am having Claude build me a distributed system from scratch. I designed it last week as I was admitting to myself the huge failure of my big "I love to code" project that I failed to get traction on.
It took me a week to even give the design to Claude because I was afraid of what it meant. I started it last night, and my jaw dropped. There is a new skill being grown right now, and it... is something.
It certainly isn't nothing, and I for one am curious to simply see what people are making with vibes alone. It's fascinating... and horrifying.
But, I have learned to silence that part of me that is horrified since the world never cared for what I find beautiful (i.e. terrible languages like JavaScript)...
I've launched multiple side projects through Show HN over the years. The ones that got traction weren't better products. They hit the front page during a slow news hour and got enough early upvotes to survive the ranking curve. The ones that flopped were arguably more interesting but landed during a busy cycle. That's not a Show HN problem, that's a single-ranking-pool problem.
What would actually help is a separate ranking pool for Show HN with slower time decay. Let projects sit visible for longer so the community can actually try them before they drop off. pg's original vision was about making things people want. Hard to evaluate that in a 90-minute window.
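For a rough sense of what a slower-decay pool could mean, here is a toy Python sketch using the commonly cited approximation of HN's ranking formula; the gravity values and numbers are purely illustrative, not HN's actual parameters.

    def rank_score(points: int, age_hours: float, gravity: float) -> float:
        # Commonly cited approximation of HN ranking: (points - 1) / (age + 2)^gravity
        return (points - 1) / (age_hours + 2) ** gravity

    FRONT_PAGE_GRAVITY = 1.8  # value usually quoted for the main ranking (illustrative)
    SHOW_HN_GRAVITY = 1.2     # hypothetical slower decay for a separate Show HN pool

    for age in (1, 6, 24, 72):
        main = rank_score(points=50, age_hours=age, gravity=FRONT_PAGE_GRAVITY)
        show = rank_score(points=50, age_hours=age, gravity=SHOW_HN_GRAVITY)
        print(f"{age:>3}h: main pool {main:.2f}, slow-decay Show HN pool {show:.2f}")

With the lower exponent, a 50-point post that is three days old still carries a meaningful score instead of being effectively buried.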
I took a look at the project and it was a 100k+ LoC vibe-coded repository. The project itself looked good, but it seemed quite excessive in terms of what it was solving. It made me think, I wonder if this exists because it is explicitly needed, or simply because it is so easy for it to exist?
Maybe if people did Show HN for projects that are useful for something? Or at least fun?
There's a disease on HN related to the latest fad:
- (now) "AI" projects
- (now) X but done with "AI"
- (now) X but vibecoded
- (less now, a lot more in the recent past) X but done in Rust
- (none now, quite a few in a more distant past) X but done with blockchain
If the main quality of the project is one of the above, why would it attract interest?
The thing in show HN has to do something to raise interest. If not even the author/marketer thinks it does something, why would anyone look at it?
C'est la vie and que sera. I'm sure the artistic industry is feeling the same. Self expression is the computation of input stimuli, emotional or technical, and transforming that into some output. If an infallible AI can replace all human action, would we still theoretically exist if we're no longer observing our own unique universes?
Trane (good post): https://news.ycombinator.com/item?id=31980069
Pictures Are For Babies (lame post): https://news.ycombinator.com/item?id=45290805
https://news.ycombinator.com/item?id=47026263
I attribute it mostly to my own inability to pitch something that is aimed at many audiences at once and needs more UX polishing, and maybe a bit to timing.
It's tough when you're not looking to sell a product but rather to engage with a community without going the Twitter/Bluesky route (which I may begrudgingly start using).
Maybe evals is a problem that people don't have yet because they can just build their custom thing, or maybe it needs a "hey, you're building agent skills, here's the mental model" (e.g. https://alexhans.github.io/posts/series/evals/building-agent... ), and once they get to the evals part, we start to interact.
In any case, I still find quite a lot of cool things in SHOW HN but the volume will definitely be a challenge going forward.
It was not just a product launch for me. I was, sort-of in a crisis. I had just turned 40 and had dark thoughts about not being young, creative and energetic anymore. The outlook of competing with 20 year old sloptimists in the job market made me really anxious.
Upon seeing people enjoying my little game, even if it's just a few HNers, I found an "I still got it" feeling that pushed me to release on Steam, to good reviews.
It was never about the money, it was about recovering my self-confidence. Thank you HN, I will return the favour and be the guy checking the new products you launch. If Show HN is drowning, I will drown with it.
Raising the quality bar would likely cut down on quantity as a side effect, and that would be a nice solution. One idea that a user proposed is a review queue where experienced HN users would help new Show HN submitters craft their posts to be more interesting and fit HN's conventions more.
One of those comments was genuinely useful feedback from Argentina about localization. That alone made it worth posting. But the post was gone from page 1 in what felt like minutes.
What's interesting is this isn't a weekend vibe-coded project - it involves actual physical production, printing, and shipping. But from the outside it probably looks like "another AI wrapper," which I think is the core problem: the flood of low-effort AI projects has made people reflexively skeptical of anything that mentions generation, even when there's real infrastructure behind it.
It's fair to give the audience a choice to learn about an AI-created product or not.
I did 3 ShowHN in 2024 (outside of the scope of this analysis), one with 306 points, another with 126 points and the third with... 2. There's always been some kind of unpredictability in ShowHN.
But I think the number one criterion for visibility is intelligibility: the project has to be easy to understand immediately and, if possible, easy to install/verify. IMHO, none of the three projects the author complains didn't get through the noise qualifies on this criterion. #2 and #3 are super elaborate (and overly specific); #1 is the easiest to understand (Neohabit), but the home page is heavy with examples that go in all directions, and the GitHub has a million graphics that seem quite complex.
Simplify and thou shalt be heard.
Some of it is "I wish things I think are cool got more upvotes". Fair enough, I've seen plenty of things I've found cool not get much attention. That's just the nature of the internet.
The other point is Show HN and other share-your-work stories growing in volume, which makes sense since it's now considerably easier to build things. I don't think that's a bad thing really, although it makes curation more difficult. Now that pure agentic coding has finally arrived IMO, creativity and what to build are significantly more important. They always were, but technical ability was often rewarded much more heavily. I guess that sucks for technical people.
The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don't have anything interesting to say about programming.
Before, projects were more often carefully human crafted.
But nowadays we expect such projects to be "vibe coded" in a day. And so, we don't have the motivation to invest mental energy in something that we expect to be crap underneath and probably a nice show off without future.
Even if the result is not the best in the world, I think that what interests us is seeing the effort.
It's only that you can't claim any of the top-shelf prizes by vibe coding.
The legend says SHNs are getting worse, but surely if the % of SHN posts with 1 point is going DOWN (as per the graph), then it's getting better? Either I am dense or the legends are the wrong way round, no?
And the comments should start with the day/month the project was first launched.
You could argue it's dead in the sense of "dead internet theory". Yes, more projects than ever are being submitted, but they were not created by humans. Maybe they are being submitted by humans, for now.
Thank you for making it, and don't give up. Passion and vision > vibe coding sloptimists.
I'm sure a happy medium is shutting off links to vibe-coded source code and only allowing vibe-coded hosted applications or websites. For those of us who want to read code, source code that means nothing to anyone is pretty disappointing for a Show HN.
Once some users have extra power to push content to the front-page, it will be abused. There will be attempts to gain that privilege in order to monetize, profit from or abuse it in some other way.
The only option along this path would probably be to keep the list of such users very tightly controlled and each vouched for individually.
---
Another approach might be to ask random users (above a certain karma threshold) to rank new submissions. Once in a while, stick a Show HN post into their front page with up and down arrows, and mark it as a community service. Given HN's volume, it should be easy to get an average opinion in a matter of minutes. Something rapid-fire, fun, maybe categorized. Just a showcase to show off what you've done.
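A minimal Python sketch of that idea, with made-up names and thresholds, just to show the shape of it:

    import random

    KARMA_THRESHOLD = 500   # hypothetical eligibility cutoff
    SAMPLE_SIZE = 30        # hypothetical number of reviewers per post

    def pick_reviewers(users: dict[str, int]) -> list[str]:
        # users maps username -> karma; sample a random panel above the threshold
        eligible = [u for u, karma in users.items() if karma >= KARMA_THRESHOLD]
        return random.sample(eligible, min(SAMPLE_SIZE, len(eligible)))

    def community_score(votes: list[int]) -> float:
        # votes are +1 (up) or -1 (down); the mean is the quick community opinion
        return sum(votes) / len(votes) if votes else 0.0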
Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.
I think same thing happened with product hunt.
I think that Show HN should be used sparingly. It feels like collective community abuse of it will lead to people filtering them out mentally, if not deliberately. They're very low signal these days.
It's like books. Old but still relevant books are the best books to read.
This tech industry is changing so fast though. Maybe a year is too much?
> sloptimists
That's a good one! Did you just come up with it? I've never seen it before.

Also, requiring disclosure of the use of AI in repos and especially (or perhaps specifically discouraging its use) when responding with comments to HN feedback.
I'll take this opportunity to strongly encourage sharing prompts (the newest tier of software source code) as the logical progression of OSS adding additional value to Show HN.
For example, in one project, PRs have to be submitted to the "next" branch and not the default branch. This is written in the CONTRIBUTING.md file, which is linked in the PR template, with the mention that PRs that don't respect that will be closed. Most if not all submitters of low-quality PRs don't do anything once their initial PR is closed.
Pretty bummed about that as I just submitted a Show HN I'm pretty happy about (it solves an annoying problem I had for years, which I know many people have) and I was looking forward to talking about it (https://news.ycombinator.com/item?id=47050872)
So while I understand that new features on HN are few and far between, a quick validation of "Show HN" posts that says, "I see you are trying to post a Show HN..." with some concise explanation of the guidelines might help. I want to believe that most new users mean well, they just need better explanations.
Show HN [NOAI]:
Since it's too controversial to ban LLM posts, and it would be too easy for submitters to omit an [LLM] label... having an opt-in [NOAI] label allows people to highlight their posts, and LLM posts would be easy to flag, to disincentivise polluting the label. This wouldn't necessarily need to be a technical change, just an intuitive agreement that posts containing LLM or vibe-coded content are not allowed to lie by using the tag, or will be flagged... Then again, it could also be used to elevate their rank above other Show HN content to give us humanoids some edge if deemed necessary, or a segregated [NOAI] page.
[edit]
The label might need more thought, although "NOAI" is short and intelligible, it might be seen as a bit ironic to have to add a tag containing "AI" into your title. [HUMAN]?
Feb 17, 2026 · Arthur Cnops
A few days ago I posted to Show HN. I had good fun building that useless little internet experience. The post quickly disappeared from Show HN's first page, amongst the rest of the vibecoded pulp. And to be clear, I'm fine with that.
The behavior on Show HN was interesting to see though. So I pulled the data.
Show HN of course isn't dead. You could even say it's more alive than ever. What has changed is the volume of posts and engagement per post. It's only natural when more projects are being built in a single weekend. There's less "Proof of Work".
From the business side of this, Johan Halse recently called this the Sideprocalypse: the end of the small indie developer's dream. Every idea has been built, marketed better, and SEO'd into oblivion by someone with more money.
Some cool projects aren't getting through this noise, which is a pity. Here are a few I thought were interesting:
I just upvoted them!
Now, let's look at some data.
[Chart: Show HN Volume, Feb 2023 – Jan 2026 — monthly Show HN submissions; highlighted value 4.8k]
[Chart: Show HN Share — Show HN as a % of all HN stories; highlighted value 15.2%]
[Chart: 1-Point Posts — % of posts stuck at exactly 1 point, Show HN vs. other; highlighted values 37.2% and 26.2%]
Show HN started out better than regular submissions. Now it's significantly worse.
How long does a Show HN post stay on page 1 before being pushed off? During peak hours (US daytime):
[Chart: Est. Time on Page 1 — estimated hours before a Show HN post is pushed off page 1 during peak; highlighted value 2.9h]
[Chart: Comments per Post — average comments on Show HN posts; highlighted value 3.1]
No. There's just more noise, and less opportunity to get attention and have a discussion with other folks on HN about your project. Some gems go completely unnoticed. Maybe something for HN to think about: how do these subjective "gems" get more spotlight? How does HN remain the coolest place to talk about the coolest tech?
* some people want to show off a fun project/toy/product that they built because it's a business they're trying to start and they want to get marketing
* some people want to show off a fun project/toy/product that they built because it involves some cool tech under the hood and they want to talk shop
* some people want to show off a fun project/toy/product that they built because it's a fun thing and they just want some people to have fun
Not everything has to revolve around HN.
Even before AI got so strong, some of the translations were fairly abnormal in their own way.
>The post body is supposed to be part of the human connection element!
I really think this is the best too :)
Maybe for the non-English speakers, or anyone really, if a project means a lot, have a number of people who are smart in different ways look over the text a number of times and help you edit beforehand.
To make sure it's what you the human want to really say at the time.
That would be the pg way.
I am wary of blogs by celebrity software managers such as DHH, Jeff Atwood, Joel Spolsky, and Paul Graham because they talk as if there were something special about their experience in software development and marketing, except... there isn't.
The same is true for the slop posts about "How I vibe coded X", "How I deal with my anxiety about Y", and "Should I develop my own agentic workflow to do Z?" These aren't really interesting because there isn't anything I can take away from them -- doomscrolling X you might do better, because little aphorisms like "Once your agent starts going in circles and you find yourself arguing with it, you should start a new conversation" are much more valuable than "evaluations" of agents where you didn't run enough prompts to keep statistics, or a log of a very path-dependent experience you had. At least those celebrity managers developed a product that worked and managed to sell it; the average vibe coder thinks it is sufficient that it almost worked.
And yes, disclosing the use of AI should be par for the course.
Most people did not read the post, which was immediately evident from how they posted their application by copy-pasting and editing an application posted by someone else before them.
Few things in life are as reliable and trustworthy as the laziness of others.
From their perspective, HN is just another place to post and get views on their project, part of a checklist for their "launch" or whatever; not everything comes from within the ecosystem.
Some post their projects and then never reply to any of the comments, while for me (and many others, I bet) half the reason for posting a Show HN is that I'm looking to participate in discussions about my thing and understand different perspectives on it.
> I want to believe that most new users mean well, they just need better explanations.
Yeah, so far the only thing I know of is the "Please read the Show HN rules and tips before posting" blurb on the /show list, and the separate pages. Maybe some interstitial or similar, shown if the title prefix-matches "Show HN", could display the rules, guidelines, and "netiquette" more prominently and get more people to be aware of them.
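As a sketch of that interstitial, assuming a hypothetical submission handler (only the guidelines URL is real):

    SHOW_HN_RULES_URL = "https://news.ycombinator.com/showhn.html"

    def needs_show_hn_interstitial(title: str, acknowledged: bool) -> bool:
        # Prefix-match the title, as suggested above
        return title.strip().lower().startswith("show hn") and not acknowledged

    def submit(title: str, acknowledged: bool = False) -> str:
        if needs_show_hn_interstitial(title, acknowledged):
            return ("I see you are trying to post a Show HN. "
                    f"Please read {SHOW_HN_RULES_URL} before submitting.")
        return "submission accepted"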
Feels like effort needs to be the barrier (which unfortunately requires human review), not "AI or not". In lieu of that, a 100-karma or minimum-account-age requirement to post something as Show HN might be a dumb way to do it (to give you enough time to have read other people's posts so you understand the vibe).
e.g. [20h/2d/$10] could indicate "I spent 20 human-hours over 2 days and burned $10 worth of tokens" (it's hard to put a single-dimensional number on LLM usage and not everyone keeps track, but dollars seem like a reasonable approximation)
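For what it's worth, such a tag would be trivial to parse; a quick sketch (the tag format itself is the hypothetical part):

    import re

    # Matches an effort tag like "[20h/2d/$10]": human-hours / elapsed days / token spend
    TAG_RE = re.compile(r"\[(\d+(?:\.\d+)?)h/(\d+(?:\.\d+)?)d/\$(\d+(?:\.\d+)?)\]")

    def parse_effort_tag(title: str):
        m = TAG_RE.search(title)
        if m is None:
            return None
        hours, days, dollars = (float(g) for g in m.groups())
        return hours, days, dollars

    print(parse_effort_tag("Show HN: My Project [20h/2d/$10]"))  # -> (20.0, 2.0, 10.0)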
I'm wondering how much of it is portfolio building to keep or find a new job in a post-AI coding world.
This really bothers me, coming here asking for human feedback (basically: strangers spending time on their behalf) then dumping it into the slop generator pretending it is even slightly appreciated. It wouldn't even be that much more work to prompt the LLM to hide its tone (https://news.ycombinator.com/item?id=46393992#46396486) but even that is too much.
> author (pilot?) hasn't generally thought too much about the problem space
I’ve stopped saying that “AI is just a tool” to justify/defend its use precisely because of this loss of thought you highlight. I now believe the appropriate analogy is “AI is delegation”.
So talking to a vibe coder who's used AI is like talking to a high-level manager, rather than to the engineer as you would for human-written code.
It's taken me about a month; currently at ~500 commits. I've been obsessed with this problem for ~6 weeks and have made an enormous amount of progress, but admittedly I'm not an expert in the domain.
Being intentionally vague, because I don't want to tip my hand until it's ready. The problem is related to an existing open source tool in a particular scientific niche which flatly does not work on an important modern platform. My project, an open source repo, brings this important legacy tool to this modern platform and also offers a highly engaging visual demo that is of general interest, even to a layperson not interested in programming or this particular scientific niche.
I genuinely believe I have something valuable to offer to this niche scientific community, but also as a general interest and curiosity to HN for the programming aspects (I put a lot of thought into the architecture) as well as the visual aspects (I put a lot of thought into the design and aesthetics).
Do you have any advice on how to present this work in a compelling way to people who understandably feel as burned out on AI slop as you do?
> The post quickly disappeared from Show HN's first page, amongst the rest of the vibecoded pulp.
The linked article[0] also talks at length about the impact of AI and vibe-coding on indie craftsmanship's longevity.
[0] - https://johan.hal.se/wrote/2026/02/03/the-sideprocalypse/
The difference now is that there is even less correlation between "good readme" and "thoughtful project".
I think that if your goal is to signal credentials/effort in 2026 (which is not everyone's goal), a better approach is to write about your motivations and process rather than the artefact itself - tell a story.
Meaning you would have to demonstrate that you had or were willing to contribute to the HN community before just promoting your own stuff.
- Children's books, at least the well-reviewed ones, are pretty good
- This is AI generated, so I expect the quality to be significantly lower than a children's book. Flipping through the examples, I am not convinced that this will be higher quality than a children's book.
- At 20 euros for a paperback, this is also more expensive than most children's books
- Your value prop, as I take it, is that your product is better because it is a book generated for just one child, but I am not convinced that's a solid value prop. I mean, it is kind of an interesting gimmick, but the book being fully AI generated is a large negative, and the book being uniquely created for my kid is a relatively smaller positive.
Those are definitely the highest-order bits you need to prove to me in order to get traction. A couple of smaller things you should fix as well:
- As an English speaker, I see that almost all the examples are not in English. You should take a reasonable guess at my language and then show me examples in my language
- It's difficult to get started: "Create your own book" leads to a signup page and I don't want to go through that friction when I am already skeptical
If I used LLMs to generate a few functions would I be eligible for it? What constitutes "built this with no/ minimal AI"?
Maybe we should have a separate section for 80%+ vibe coded / agent developed.
So in future everything’s gonna be “agentic”, (un)fortunately.
Every time I write about it, I feel like a doomsayer.
Anthropic admits that LLM use makes the brain lazy.
So just as we forgot how to remember phone numbers once Google and mobile phones arrived, the same will probably happen with coding/programming.
Also, it's not uncommon for weekend projects to be done in a short span with just a "first commit" message dump, even pre-AI.
HN has a very different personality at weekends versus weekdays. I tend to find most of the stuff I think is cool or interesting gets attention at the weekends, and you'll see slightly more off the wall content and ideas being discussed, whereas the weekdays are notably more "serious business" in tone. Both, I think, have value.
So I wonder if there's maybe a strong element of picking your moment with Show HN posts in order to gain better visibility through the masses of other submissions.
Or maybe - but I think this goes against the culture a bit - Show HN could be its own category at the top. Or we could have particular days of the week/month where, perhaps by convention rather than enforcement, Show HN posts get more attention.
I'm not sure how workable these thoughts are but it's perhaps worth considering ways that Show HN could get a bit more of the spotlight without turning it into something that's endlessly gamed by purveyors of AI slop and other bottom-feeding content.
One of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.
It used to be that ShowHN was a filter: in order to show stuff, you had to have done work. And if you did the work, you probably thought about the problem, at the very least the problem was real enough to make solving it worthwhile.
Now there's no such filter function, so projects are built whether or not they're good ideas, by people who don't know very much.
> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

— Tom Cargill, Bell Labs
Some day I’m going to get a crystal ball for statistics. Getting bored with a project was always a thing— after the first push, I don’t encounter like 80% of my coding side projects until I’m cleaning— but I’ll bet the abandonment rate for side projects has skyrocketed. I think a lot of what we’re seeing are projects that were easy enough to reach MVP before encountering the final 90% of coding time, which AI is a lot less useful for.
How many non-native English speakers are on HN? If it's more than 30%, why should they have to use a whole new language if they can just let an LLM do it in a natural sounding way.
As dang posted above, I think it's better to frame the problem as "influx of low quality posts" rather than framing policies having to do explicitly with AI. I'm not sure I even know what "AI" is anymore.
As per the old efficient market jokes: https://news.ycombinator.com/item?id=28029044
So, in the past, I've created throwaway HN accounts for sharing things that connect to my real ID.
You're right that children's books can be excellent, and for generic topics a well-reviewed book from a skilled author and illustrator will beat what we generate. No argument there.
Where we see real value is in the gaps the publishing industry doesn't serve. Bilingual families who can't find books in Maltese/English or Estonian/German. A child with an insulin pump who wants to see a superhero like them. A kid processing their parents' divorce. A child with two dads, or being adopted, or starting at a new school in a country where they don't speak the language yet. No publisher will print a run of one for these families - but these are exactly the stories that matter most to them.
On the UX points - you're right on both. We should localize the showcase to your language, and the signup wall before trying is too much friction. Working on both.
One is where the human has a complete mental map of the product, and even if they use some code generating tools, they fully take responsibility for the related matters.
And there is another, emerging category, where developers don't have a full mental map because the code was created by an LLM, and no one actually understands how it works and what doesn't.
I believe these are two categories that are currently merged in one Show HN, and if in the first category I can be curious about the decisions people made and the solutions they chose, I don't give a flying fork about what an LLM generated.
If you have a 'fog of war' in your codebase, well, you don't own your software, and there's no need to show it as yours. Same way, if you had used autocomplete, or a typewriter in the time of handwriting, and the thinking is yours, an LLM shouldn't be a problem.
Case in point: aside from Tabbing furiously, I use the Ask feature to ask vague questions that would take my coworkers time they don't have.
Interestingly at least in Cursor, Intellisense seems to be dumbed down in favour of AI, so when I look at a commit, it typically has double digit percentage of "AI co-authorship", even though most of the time it's the result of using Tab and Intellisense would have given the same suggestion anyway.
These days I guess we don't want a library? I can create an MIT-licensed repo with some charts you can point your AI agent to, if it helps?
The font is Gaegu.
Post both versions
I don't think we need to wait a generation either. This probably was part of their personality already, but a group of developers at my job seems to have just given up on thinking hard / thinking through difficult problems; it's insane to witness.
Thing is, I worked manually on both of these a lot before I even touched Claude on them, so I was basically able to hit wishlist items that I don't have time to deal with these days but had already figured out the logic for.
I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without AI. I'm never going to say "I did this with AI". For instance there is this HR monitor demo
https://gen5.info/demo/biofeedback/
which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV) but, most of all, being able to run with pre-recorded data so that people who don't have a BTLE HR monitor can see how cool it is.
Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as-a-foxographer
https://mastodon.social/@UP8/116086491667959840
I am used to marketing funnels having 5% effectiveness and it blows my mind that at least 75% of the tokens I give out get scanned and that is with the old conventional cards that have the same back side. The number + suit tokens are particularly good as a "self-working demo" because it is easy to talk about them, when somebody flags me down because they noticed my hood I can show them a few cards that are all different and let them choose one or say "Look, you got the 9 of Bees!"
Side note: I’d think installing Anubis over your work would go a long way to signaling that but ymmv.
I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.
As for knowing what I'm talking about: many of my blog posts are about stuff that I just learned, so I have many disclaimers that the reader should take everything with a grain of salt. :-) That said, I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.
These days I do see a lot of people choosing software for the money. Notably, many of them are bootcamp graduates and arguably made a pivot later in life, as opposed to other careers (such as medicine) which get chosen early. Nothing wrong with that (for many it has a good ROI), but I don’t think this changed anything about people with technical hobbies.
When you’re young, you tend not to choose the path the rest of your life will take based on income. What your parents want for you is a different matter…
I see no reason to disrespect your work from what you say, but I also see no reason that AI would be much help to you after you had been learning for a year. If you are in the loop, shouldn't this be just about the moment when your growing abilities start to easily outpace the model's fixed abilities?
If you do, then it's not vibe coded.
For me, I have different levels of vibes:
Some testing/prototyping bash scripts are 100% vibe coded. I have never actually read the code.
Sometimes, in early iterations, I am familiar with the general architecture but do not know the exact file contents.
Sometimes I have gone through and practically rewritten a component from scratch, either because it was too convoluted or because it did not have the perfect abstraction I wanted, etc.
For me the third category is not vibe coded. The first 2 are tech debt in the making.
Good question, and by the same "token" when does it start?
Maybe if there's no possible way the creator could have written it by hand, perhaps because they can barely read or write code in any language, or something like that, it would be a reference point for "pure vibe". If the project is impressive, that's still nothing to be ashamed of. Especially if people can see the source code.
The creative people I see are mostly no dummies, and it might be better than nothing for them to honestly rate their own submissions somewhere on the scale from pure vibe to pure manual?
With no stigma regardless, and let the upvotes or downvotes from there give an indication of how accurate the self-assessments are. Voting directly to Show HN could even have a different "currency" [0] to help regulate the fall of Show submissions, where a single upvote could mean something like infinitely more than zero.
I'm not disappointed by a project purely vibed by somebody like a visual artist, storyteller, or business enthusiast who has never written a line of code, as long as it is astoundingly impressive, in the league of the better projects, those I would like to take a look at.
I also see real accomplished coders guide their agents to arrive at things that wouldn't be as nice if they didn't have years of advanced manual ability beforehand.
Plus I think I'm in the vast majority and have no interest in "slop", in a way that aligns with so many kinds of people who are also turned off.
But so far, the best definition we have for slop is "we know it when we see it".
Oh, well that's all I've got, so far :)
[0] slop vs non-slop which is like pass/fail, or even a numerical rating could be on the "ballot".
So either we completely avoid automation and create a community council to decide what deserves to be shown to the rest of the community, or we just let the best AI models decide if a project is worth showing on the front page?
Or we can do all of the above :)
Chasing clout through these forums is ill-advised. I think people should post, sure. But don't read into the response too much. People don't really care. From my experience, even if you get an insanely good response, it's short-lived; people think it's cool. For me it never resulted in any conversions or continued use. It's cheap to upvote. I found the only way to build interest in your product is organic, 1-on-1 communication, real engagement in user forums, etc.
Let's see, how to say this in a less inflammatory way...
(just did this) Sitting here in a hotel, I wondered if I could do some fancy video processing on the video feed from my laptop to turn it into a wildlife cam and capture the birds that keep flying by.
I ask Codex to whip something up. I iterate a few times; I ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.
In a short period of time, I have an app that is processing video, doing all of the detection, applying the correct models, and works.
It's impressive _to me_ but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.
I get to slap this all together.
In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.
In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"
Wait, what? That's a great benefit?
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So when execution is becoming easier, it’s the ideas that matter more…
There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.
It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.
My experience is the opposite. It’s so much easier to have an LLM grind the last mile annoyances (e.g. installing and debugging compilation bullshit on a specific raspberry pi + unmaintained 3p library versions.)
I can focus on the parts I love, including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.
I've seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven't seen an answer from leading figures in the AI coding space. What's the general answer or point of view on this?
Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
Join us. The war has begun.
It seems silly, but I know I'm more likely to review an implementation if can learn more about the author's state of mind by their style.
Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.
I work with a large number of programmers who don't use AI and don't have an accurate mental map for the codebases they work in...
I don't think AI will make these folks more destructive. If anything, it will improve their contributions because AI will be better at understanding the codebase than them.
Good programmers will use AI like a tool. Bad programmers will use AI in lieu of understanding what's going on. It's a win in both cases.
Are the tokens to write out design documentation and lots of comments too expensive or something? I’m trying to figure out how an LLM will even understand what they wrote when they come back to it, let alone a human.
You have to reify mental maps if you have LLM do significant amounts of coding, there really isn’t any other option here.
I suspect automating a "codebase over time" metric is tricky. Not everyone will be using git or a VCS, and some things don't need a codebase to be shared.
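For the repos that do use git, one crude way to get a "codebase over time" signal is the span between the first and latest commit; a sketch (assuming git is on PATH), with the obvious caveat that it proves nothing for squashed or freshly re-initialized histories:

    import subprocess

    def commit_span_days(repo_path: str) -> float:
        # Committer timestamps, oldest first
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--reverse", "--format=%ct"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        if not out:
            return 0.0
        return (int(out[-1]) - int(out[0])) / 86400  # seconds per day

    print(f"{commit_span_days('.'):.1f} days between first and latest commit")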
And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.
I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]
I knew I could do better, so I made a version that is about 15 kB and solves a fundamental issue with WebGL context limits while being significantly faster.
AI helped write a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still are not more common (the special bundler settings for workers break often).
Btw, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.
"Oh, this library just released a new major version? What a pity, I used to know v n deeply, but v n+1 has this nifty feature that I like"
It happened all the time even as a solo dev. In teams, it's the rule, not the exception.
Vibing is just a different obfuscation here.
It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.
In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.
But vibe coders never do this. They basically unilaterally just take existing mods' source code, feed this into their LLM of choice and generate a derivative work. They don't contribute back anything, because they don't even try to understand what they are doing.
Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
As I may have noted before, humans are the problem.
Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.
I was so shocked and disappointed that I had paid good money for some AI slop that I've stopped following the author entirely. It was a real eye opener for me. I used to enjoy just taking a chance on a new book because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.
/* printn: recursively print the number n in base b */
printn(n,b) {
extrn putchar;
auto a;
if(a=n/b) /* assignment, not test for equality */
printn(a, b); /* recursive */
putchar(n%b + '0');
}
You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.

Nice. I hope you are generating realistic commits and they truly cannot distinguish poison from food.
When you upgrade a library, you made that decision — you know why, you know what it does for you, and you can evaluate the trade-offs before proceeding (unless you're a react developer).
That's not a fog of war, that's delegation.
When an LLM generates your core logic and you can't explain why it works, that's a fundamentally different situation. You're not delegating — you're outsourcing the understanding, and that makes the result not yours.
The benefit of libraries is it's an abstraction and compartmentalization layer. You don't have to use REST calls to talk to AWS, you can use boto and move s3 files around in your code without cluttering it up.
Yeah, sometimes the abstraction breaks or fails, but generally that's rare unless the library really sucks, or you get a leftpad situation.
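To the library point, a couple of boto3 calls replace a pile of hand-rolled REST/signing code (the bucket and key names here are made up, and credentials are assumed to be configured):

    import boto3

    s3 = boto3.client("s3")
    # Upload a local file, then copy it to an archive bucket -- no raw REST calls
    s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")
    s3.copy_object(
        Bucket="my-archive-bucket",
        Key="reports/report.csv",
        CopySource={"Bucket": "my-example-bucket", "Key": "reports/report.csv"},
    )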
Having a mental map of your code doesn't mean you know everything, it means you understand how your code works, what it is responsible for, and how it interacts with or delegates to other things.
Part of being a good software engineer is managing complexity like that.
In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.
Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.
AI agent coding has introduced to writing software the same sort of dynamic that brands brought to social media.
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibecoded, and 1 of which wasn't. The one which wasn't might actually recover your backups when it fails. The other 9, whoops, never tested that codepath. Customers won't know until the edge cases happen.
It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.
The cost of detecting/filtering the poison is many orders of magnitude higher than the cost of generating it.
Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.