An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a Cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
These days, it's basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring, it’s that they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but those are just rarer as a share of the overall output.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short-lived. Not sure if I truly buy that, but if anything vibe-coded becomes throwaway, I wouldn't be surprised.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shake out the mediocre or bad ideas, and either refine things or make you toss the idea.
In an industry that does not crave bells and whistles, having the ability to refactor, or bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
They’re solving small problems or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
This echoes the comments here about enjoying not writing boilerplate. The trouble is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time to going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and incentives of service providers increase this.
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
At least this CEO gets it. Hopefully more will start to follow.
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
Take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
Honestly, I agree, but the rash of "check out my vibe-coded solution for perceived $problem I have no expertise in whatsoever and built in an afternoon" and the flurry of domain experts responding with "wtf, no one needs this" is kind of schadenfreude, but I feel a little guilty for enjoying it.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
That is for sure the word of the year, true or not. I agree with it, I think
Never use AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful, will likely ShowHN it once polished.
it's at 10 now. note: the article does not say "taste" once
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
It was especially unfortunate because to do its thing, the code required a third party's own personal user credentials including MFA, which is a complete non-starter in server-side code, but apparently the manager's LLM wasn't aware enough to know that.
>AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
Pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourself, and also have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-coded newbie entry, no matter how long you worked on the "important stuff".
Why? Because the barrier to entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you no longer can guarantee, or have any way of telling quickly, that the poster spent some time on the problem is a great one.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with code of conduct, and so on). This is useful and is a necessary shortcut, as if we had to assess everyone and everything from first principles every time things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except if briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at the other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are writing at this point.
DJing is an interesting example. Compared with, say, composition, beatmatching is "relatively" easy to learn, and it was solved anyway by CD turntables that can beatmatch themselves, yet it has nothing to do with the taste you have to develop to be a good DJ.
I feel like I can breeze past the easy, time-consuming infrastructure phase of projects, and spend MUCH more time getting to high-level interesting problems?
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
There are plenty of times in which people will prefer the technically inferior or less aesthetically pleasing output because of the story accompanying it. Different people select different intentions to value: some select for the intention to create an accurate depiction of a beautiful landscape, some for the intention to create a blurry smudge of one.
I can appreciate an art piece made by someone who only has access to a pencil and their imagination more than one made by someone who has access to Adobe CC and the internet, because it's not about the output to me, it's about the intention and the story.
Saying "I made this drawing" implies that you at least sat down and had the intention to draw the thing. Then revealing that you actually used AI to generate it changes the baseline assumption and forces people to re-evaluate it. So it's not "finding a creative result that they value, but retroactively devaluing it if it's not created by a process that they consider artistic."
Sometimes you do, which is why there’s not only a single type of brush in a studio. You want something very controllable if you’re doing lineart with ink.
Even with digital painting, there’s a lot of fussing with the brush engine. There’s even a market for selling presets.
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing rather than deciding "does this communicate my ideas efficiently"?
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
No one in their right mind would use one.
Using the wrong tool for the job results in disaster.
It's like watching a guy bang rocks together to "vibe build" a house. Good luck.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
LLMs and agents work the same way. They’re power tools. Skill and judgment determine whether you build more, or lose fingers faster.
No one finds AI-assisted prose/code/ideas boring, per se. They find bad prose/code/ideas boring. "AI makes you boring" is this generation's version of complaining about typing or cellular phones. AI is just a tool; it's up to humans how to use it.
If they don't care enough to improve themselves at the task in the first place then why would they improve at all? Osmosis?
If this worked, then letting a world-renowned author write all my letters for me would make me a better writer. Right?
Who cares if you're a "good writer"? Being "easy to understand" is the real achievement.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice
For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.
Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.
Rest of the world: "No, we're gatekeeping because we think the result isn't good."
If someone can cajole their LLM to emit something worthwhile, e.g. Terence Tao's LLM generated proofs, people will be happy to acknowledge it. Most people are incapable of that and no number of protestations of gatekeeping can cover up the unoriginality and poor quality of their LLM results.
AI is almost the exact opposite. It's verbose fluff that's only superficially structured well. It's worse than average
(waiting for someone to reply that I can tell the AI to be concise and meaningful)
This is a point that often results in bad faith arguments from both AI enthusiasts and AI skeptics. Enthusiasts will say "everything is a remix and the most creative works are built on previous works" while skeptics will say "LLMs are stochastic parrots and cannot create anything new by technical definition".
The truth is somewhere in the middle, which unfortunately invokes the Golden Mean Fallacy that makes no one happy.
This behavior has already been happening with Pangram Labs which supposedly does have good AI detection.
I wonder if this is a major differentiator between AI fans and detractors. I dislike and actively avoid anything closed source. I fully agree with the premise of the submission as well.
"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.
The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).
For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
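To give a sense of how little machinery that kind of tool needs, here is a rough sketch in Python of a fill-in-the-blanks generator. The field names and template text are made up for illustration (not from my actual site), and the real thing is just an HTML form posting the same fields.

    from string import Template

    # A toy "fill in the blanks" letter generator (illustrative only;
    # the field names and wording are invented, not my actual tool).
    LETTER = Template(
        "Dear $landlord,\n\n"
        "I am writing about the deposit for $address. Under our agreement "
        "dated $date, please return the deposit within 14 days.\n\n"
        "Sincerely,\n$tenant"
    )

    def generate_letter(fields):
        # substitute() raises KeyError if a blank was left unfilled,
        # which is about the only validation this kind of tool needs.
        return LETTER.substitute(fields)

    print(generate_letter({
        "landlord": "Ms. Smith",
        "address": "12 Example Street",
        "date": "1 March 2024",
        "tenant": "A. Tenant",
    }))

The calculation-style tools are the same shape: a form, a well-tested function in the middle, and some output formatting. That middle function is the part I'd still want verified; the AI can build the rest.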
But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.
In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.
AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.
The definition of slop is poor taste. By that definition a lot of human work is also slop.
But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."
"Gatekeeping" is NOT when you require someone to be willing learn a skill in order to join a community of people with that skill.
And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".
If it's not worth reading something where the writer didn't take the time to write it, by extension that means nobody read the code.
Which means nobody understands it, beyond the external behaviour they've tested.
I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.
But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.
I gotta disagree with you there! Code that isn't read doesn't do anything. Code must be read to be compiled, it must be read to be interpreted, etc.
I think this points to a difference in our understanding of what "read" means, perhaps? To expand my pithy "not gonna read if you didn't write" bit: the idea that code stands on its own is a lie. The world changes around code and code must be changed to keep up with the world. Every "program" (is the git I run the same as the git you run?) is a living document that people maintain as need be. So when we extend the "not read / didn't write" idea, it's not about using the program (which I guess is like taking the lessons from a book), it's about maintaining the program.
So I think it's possible that I could derive benefit from someone else reading an llm's text output (they get an idea) - but what we are trying to talk about is the work of maintaining a text.
It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).
Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.
And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think.
The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing.
I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven.
The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.
I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.
I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.
It's a fantastic editor!
Creativity often requires reasoning in unusual ways, and evaluating those ideas requires learning. The first part we can probably get LLMs to do; the latter part we can't (RL is a separate process and not really scalable).
Even without any of that, you can prompt your way into new things. I'm building a camper out of wood, and I've gotten older LLM models to make novel camper designs just by asking it questions and choosing things. You can make other AI models make novel music by prompting it to combine different aspects of music into a new song. Human creativity works that way too. Think of all the failed attempts at new things that humans come up with before a good one actually sticks.
AI is a bicycle, not a motorcycle.
"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.
The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.
The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."
But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee jerk rejection of unimportant aspects of human taste and preference.
The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.
I do appreciate the feedback though and will take it into consideration.
What is that? Do you think PhDs have some special way of talking about things?
As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.
I would argue good ideas are not so easy to find. It is harder than it seems to fit the market, and that is why most apps fail. At the end of the day, everyone is blinded by hubris and ignorance... I do include myself in that.
This is something I like about the LLM future. I get to spend my time with users, thinking about their needs and how the product itself could be improved. The AI can write all the CSS and SQL queries or whatever to actually implement those features.
If the interesting thing about software is the code itself - like the concepts and so on, then yeah do that yourself. I like working with CRDTs because they’re a fun little puzzle. But most code isn’t like that. Most code just needs to move some text from over here to over there. For code like that, it’s the user experience that’s interesting. I’m happy to offload the grunt work to Claude.
I got Claude to make a test suite the other day for a couple RFCs so I could check for spec compliance. It made a test runner and about 300 tests. And an html frontend to view the test results in a big table. Claude and I wrote 8500 lines of code in a day.
I don’t care how the test runner works, so long as it works. I really just care about the test results. Is it finding real bugs? Well, we went through the 60 or so failing tests. We changed 3 tests, because Claude had misunderstood the RFC. The rest were real bugs.
I’m sure the test runner would be more beautiful if I wrote it by hand. But I don’t care. I’ve written test runners before. They’re not interesting. I’m all for beautiful, artisanal code. I love programming. But sometimes I just want to get a job done. Sometimes the code isn’t for reading. It’s for running.
This is also the case for AI-generated projects btw; the backend projects that I’ve been looking at often contain reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches when they should have just used a library.
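To make the LRU point concrete, here is roughly the contrast, sketched in Python purely as an illustration of "just use the library". The hand-rolled class is a stand-in for the kind of thing these generated projects tend to ship, not code from any specific project.

    import functools

    # The hand-rolled version: a bespoke dict-plus-ordering cache,
    # with its own eviction logic to get wrong and then maintain.
    class HandRolledLRU:
        def __init__(self, maxsize=128):
            self.maxsize = maxsize
            self._data = {}
            self._order = []

        def get(self, key):
            if key not in self._data:
                return None
            self._order.remove(key)       # mark as most recently used
            self._order.append(key)
            return self._data[key]

        def put(self, key, value):
            if key not in self._data and len(self._data) >= self.maxsize:
                oldest = self._order.pop(0)   # evict least recently used
                del self._data[oldest]
            self._data[key] = value
            if key in self._order:
                self._order.remove(key)
            self._order.append(key)

    # The library version: one decorator from the standard library.
    @functools.lru_cache(maxsize=128)
    def expensive_lookup(key):
        return key.upper()  # stand-in for the real expensive work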
But AI does the code. Well... usually.
People call my project creative. Some are actually using it.
I feel many technical things aren't really technical things; they are simply problems where "have a web app" is part of the solution, but the real part of the solution is in the content and the interaction design, not in how you solved the challenge technically.
Every single time I have vibe coded a project I cared about, letting the AI rip with mild code review and rigorous testing has bit me in the ass, without fail. It doesn't extend it in the taste that I want, things are clearly spiraling out of control, etc. Just satisfying some specs at the time of creation isn't enough. These things evolve, they're a living being.
"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."
We tell stories of Therac 25 but 90% of software out there doesn’t kill people. Annoys people and wastes time yes, but reliability doesn’t matter as much.
E-mail, internet and networking, operations on floating point numbers are only kind of somewhat reliable. No one is saying they will not use email because it might not be delivered.
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.
> came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.
You might be on to something. Maybe it's self-selection (as in, people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).
It's about being oblivious, I suppose. Not too different to claiming there will be no need to write new fiction when an LLM will write the work you want to read by request.
Boring is supposed to be boring for the sake of learning. If you're bored then you're not learning. Take a look back at your code in a week's time and see if you still understand what's going on. The top level, maybe, but the deep-down cogs of the engine of the application? I doubt it. Not to preach, but that's what I've discovered.
Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
How then is it different from the Wikipedia page you linked?
This post is an elaboration on a comment I made on Hacker News recently, on a blog post that showed an increase in volume and decline in quality among the “Show HN” submissions.
I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring. They generally don't have a lot of work put into them, and as a result, the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had.
The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don’t have anything interesting to say about programming.
This isn’t something that is limited to Show HN or even Hacker News, it’s something you see everywhere.
While part of this phenomenon is likely just an upswing of people who don’t usually do programming that get swept up in the fun of building a product, I want to build an argument that it’s much worse than that.
AI makes people boring.
AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.
Some will argue that this is why you need a human in the loop to steer the work and do the high level thinking. That premise is fundamentally flawed. Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates.
Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
You don’t build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think.
And to answer you more directly, generally, in my professional world, I don't use closed source software often for security reasons, and when I do, it's from major players with oodles of more resources and capital expenditure than "some guy with a credit card paid for a gemini subscription."
Believe me I've had to adjust my writing a lot to avoid these tells, even academics I know are second guessing everything they've ever been taught. It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.
As we give more and more autonomy to agents, that % may change. Just yesterday I was looking at hexapods, and the first thing it tells you (with a disclaimer that it's for competitions only) is that it has a lot of space for a weapon install. I had to briefly look at the website to make sure I did not accidentally click on some satirical link.
There might be other professions where people get more hung up on formalities but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has learned that short emails are more likely to be read and acted on as well.
I was dabbling in consulting infrastructure for a bit, often prospects would come to me with stuff like this "well I'll just have AI do it" and my response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).
I made it because at that point in my career I simply didn't know that ansible existed, or cloud solutions that were very cheap to do the same thing. I spent a crazy amount of effort doing something that ansible probably could have done for me in an afternoon. That's what sometimes these projects feel like to me. It's kind of like a solution looking for a problem a lot of the time.
I just scanned through the front page of the show HN page and quickly eyeballed several of these type of things.
I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.
Agreed.
> I made it because at that point in my career I simply didn't know that ansible existed
Channels Mark Twain: "Sorry for such a long letter, I didn't have the time to make it shorter."
I write software when the scripts are no longer suitable.
What is your hoped for outcome here man? To come off like enough of a jerk or obtuse enough that people just abandon the thread and you can declare victory?