The other issue of HN being inundated with AI bots is related, but a kind of different problem.
When the surface dwellers have become crazed by disease and war, and their lands contaminated with the detritus of broken promises of innovation and heavy metals, we must build a new Eden.
As much as I adore Gemini as a concept, I yearn to express myself in the visual medium. Dillo might honestly be enough to render something beautiful within its constraints. With Wireguard meshes as the transport, and invitations offered and withdrawn by personal trust, perhaps we can have a place where our ideas could once again flourish without being amplified and distilled into mediocrity by the great monoliths looming like thunderous currents on the horizon.
https://news.ycombinator.com/showlim (<-- this is what many accounts without much HN history now see, and it's responsible for the downtick to the right on OP's chart)
Ask HN: Please restrict new accounts from posting - https://news.ycombinator.com/item?id=47300329 - March 2026 (515 comments)
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (425 comments)
I use LLMs in my side projects the way this guy uses them. So many times I've spent days or weeks on a side project making sure it was perfect, only to have zero interest from anyone else after sharing.
Models have their own archetypes. Since early this year almost every vibecoded website is Opus, which has its own style. It has different characteristics from a website by GPT. Yet again different from one by Gemini. Each one has its own set of traits. Opus 4.5/4.6 traits are markedly different from earlier versions. Mixing them all into one and then using it to "identify AI coded websites" doesn't work.
so, n=1 plus Baader-Meinhof? (https://en.wikipedia.org/wiki/Frequency_illusion)
But the good thing is, it will now include those accessibility items, too. Personally I have misokinesia and migraines, so I get it.
Here's what it found if you want to see: https://www.perplexity.ai/search/given-these-how-can-we-crea...
At least in the field I work in (ecommerce/retail), design is often what separates one brand from another when presenting their products. Maybe it won't happen on the web as much in the future, but I suspect it will still be important when it comes to visually communicating to consumers
The UI of Electric Minds Reborn (Amsterdam Web Communities System) was not AI-generated. At most, it was AI translated, as I used Claude to help turn old clunky 2006-era HTML into modern styling with Tailwind CSS. See also https://erbosoft.com/blog/2026/04/07/to-ai-or-not-to-ai/.
This has been killing me recently. Apparently I need slightly higher contrast than some people, and these vibe coded UIs are basically unreadable to my eyes
It’s entirely possible a Show HN I posted is included and I’d love to know how it scored.
Why? Let me guess: because these patterns were frequently seen in human-made sites too, but that won't fit the narrative.
Remember, several AI detectors claimed the Declaration of Independence was AI-generated[0]. Keep this in mind when someone (like the author of this article) proudly shows you their home-made AI detector.
[0]: https://dallasexpress.com/state/zerogpt-flags-1836-texas-dec...
I'm much more critical of closed-source, subscription-based wrappers over open-source software or simple prompts.
- all designs are going to be AI generated and look the same
- well unless you ask your agent to make it look different
Are we going to call 'AI slop' everything that doesn't reinvent design from zero for a marketing page?
In a sense it shows that the creator didn’t care enough to make their UI/presentation unique which causes some like me to question exactly how much effort they bothered to put in at all.
As part of our code security review we have a “sloppification” score. Higher numbers have been reliably usable by people like me as indicators of what to focus my pentesting efforts on.
Before the usual suspects get snarky: Does that mean AI only generates slop? No. But it is an indicator of effort and oversights.
There is a long-term phenomenon where quite a lot of pages presented here no longer exist after 12 months or so... This was already the case before the AI slop flooded in... But since then the rate has grown massively.
It's particularly annoying when there's an actually useful service or app, you sign up, and after a couple of months it's all gone...
They're also the ideal place to try out new AI tools that your professional work might not let you experiment with.
(The headline of this piece doesn't really do it justice - it misuses "vibe coded" and fails to communicate that the substance of the post is about visual design traits common with AI-generated frontends, which is a much more interesting conversation to be having. UPDATE: the headline changed, it's now much better - "Show HN submissions tripled and now mostly have the same vibe-coded look" - it was previously "Show HN submissions tripled and are now mostly vibe-coded")
(maybe what this post calls "Icon-topped feature card grid." ...that might be the official design pattern term)
In 2016, if I saw 10,000 lines of code, that carried a certain proof-of-work with it. They probably couldn't help but give the code some testing as they were working up to that point. We know there has to have been a certain amount of thought in it. They've been living with it for some months, guaranteed.
In 2026, 10,000 lines of code means they spent a minimum amount of money on tokens. 10,000 lines can be generated pretty quickly in a single task, if it's something like "turn this big OpenAPI spec into an API in my language". It's entirely possible 90%+ of the project hasn't actually been tested, except by the unit tests the AI wrote itself, which is a great start, but not more than that for code that hasn't ever actually run in any real scenario from the real world.
Nothing about any of that is intrinsically wrong. But the standards have to shift. While the bar for a "Show HN" should perhaps not be high, it should probably be higher than "I typed a few things into a text box". And that's not because doing so is necessarily "bad" either, but because of the mismatch between valuable human attention and the cheapness of making a draw on it.
It's kind of a bummer in some sense... but then again, honestly, the space of things that can be built with an idea and a few prompts to an AI was frankly fairly well covered even before AI coding tools. Already I had a list of "projects we've already seen a lot of so don't expect the community to shower you with adulation" for any language community I've spent any significant time in. AI has grown the list of "projects I've seen too many times" a bit, but a lot of what I've seen is that we're getting an even larger torrent of the same projects we already had too many of before.
We can hope the LLMs hallucinate slightly different CSS once in a while now...
http://www.catb.org/jargon/html/S/September-that-never-ended... https://en.wikipedia.org/wiki/Eternal_September
The advantage of having so many ideas being tried and published is we are exploring the space of possibility faster, and so there's more to learn from. The disadvantage is that signal to noise is way down. Also, because the system is self-reflective and dynamic, there's a natural downward spiral as the common spaces get overrun and we cannot coordinate signal. The Tragedy of the Commons.
I guess I spent 10 years worrying about this in my MeatballWiki era in my 20s, and now I'm in my midlife crisis era and prefer to just have fun with the world that I have.
at my workplace the phrase in status/report-out meetings "I built" now means "I asked claude to build"
All of a sudden managers, architects (who haven't written code in a decade), and directors are all building tools
so now we're debugging the tools "they built" and why our product isn't working with them.
Nooo please don't ruin great fonts by associating them with low effort vibecoding
They may be somewhat overused but they are popular for a reason
Heavy slop (5+ patterns) · 105 sites · 21%
Mild (2–4) · 230 sites · 46%
Clean (0–1) · 165 sites · 33%
Can we have a list of the "clean" ones please? Actually, if you give me a list of the IDs for all 3 categories, I'll make URLs for each that people can browse. If the community feels that the division is useful, then we can maybe take you up on your offer to open-source the project, and perhaps find a way to use it on HN itself.
I wouldn't use it because one of the reasons that I do side projects is to enjoy myself and learn new things, and these tools tend to do much of the stuff that I enjoy and learn from.
I'm primarily a backend developer. Most of my work has been in serving json or occasionally xml. Spring Shell in Java is something that I'm closer to working with than a GUI. When I've done web work, the most complimentary thing that was said about my design is "spartan".
So, if I was to have a web facing personal project... would black text on a white background with the default font and clunky <form> elements be ok? I know we are ok with it on the HN Settings page. They work... but they don't meet what I perceive other people have as minimum standards for web facing interfaces today.
And so... if I was to have some web facing project that I wanted to show to others, I'd probably work with some AI tooling to help create a gui, and it would very likely have the visual design traits that other AI generated front ends have.
I find that I just don't learn anything new from Show HN vibe-coded side projects, and I can often replicate them for a couple hundred dollars, so why bother looking at them? Also, why bother sharing one in the first place, since it doesn't really show any personal prowess and doesn't bring value to the community, being so easy to replicate?
There's always a trend in software and everyone follows it. Now it's AI... let's not pretend cutting corners is anything new in our industry.
I guess you can always gloat about your artisan code, but people who use software for business never cared about that to begin with.
Plus, wasn't the entire philosophy of CS that "everyone can code"? Opposing licensing requirements, etc.? Well... there you have it: code is a commodity now and the barrier to entry is next to none.
This is a manipulative combination of condescension, gaslighting and emotionalization.
"It's fun to complain" trivializes and dismisses a valid observation about the content being submitted as self-indulgent whining.
"I'd rather face the world" implies that people who want to see carefully constructed projects and human-written articles about them are refusing to face the world, i.e. delusional.
"Find the joy in it" reduces the whole discussion to the question of self-imposed mindset, as if there is no possible rational reason to be unhappy about what's going on.
(Still plenty of scary stuff, but I should feel like you at least some of the time, healthy balance.)
It seems many have not updated their understanding to match today’s capabilities.
I am vibe coding.
That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.
Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.
While last year chances were slim for the agent to debug tricky issues, I feel that now it can figure out a lot once you have it instrument the app and provide logs.
It sometimes feels like some commenters stick with last year’s mindset and feel entitled to yell about ‘AI slop’ at the first sign of an issue in a product and denigrate the author’s competence.
maybe i'm an LLM too
Shadcn works for Vercel, but is actually a human being (I think?).
The UI framework is called shadcn/ui.
In a climate where it seems like VCs are woefully bereft of the same skills, there's an impetus to just slop garbage up for any vague idea, without taking the care or time to polish it into something that has that intangibly human sense of greatness and clarity.
I see, you've done something -- but why? If you continue to ask this question, you will arrive at good science ... but many submissions are not aimed at that level of communication or stop far ahead of the point at which the question becomes interesting.
There's that phrase, "better to remain silent and be thought a fool than to speak and remove all doubt", which strikes me as poignant, except it seems like the audience today are also fools... the inmates are running the asylum.
Likewise, the issue is often that many of these projects show no evidence of long term maintenance. That might be the new signal we watch for?
There also used to be a sense in the tech community of "if you build it they will come" and that has been basically completely lost at this point. Between the discussion earlier this week of people's fraudulent GH stars, and this topic, and the wave of submissions I see on e.g. r/rust, it's just hard to imagine how -- as a pure "tech nerd" -- to get eyes or assistance on projects these days.
I have projects I've held off on "Show HN" for years because I felt I wasn't ready for the flood of users or questions and criticisms. Maybe the joke's on me. (Of course, like everyone else these days, I've used AI to work on them, but much of the work predates agentic tools.)
It depends what your goals are. All of my side projects were started because I wanted to learn something. Using a "skip to the end" button wouldn't really make sense for me.
The number of dark‑mode sites I’ve seen where the text (and subtext) are various shades of dark brown or beige is just awful. For reference, you want a contrast ratio between the text and background of at least 4.5:1, the WCAG AA minimum for normal-size text.
This isn't even that hard to fix - hell you can add the Web Content Accessibility Guidelines to a skill.
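For anyone wanting to check their own palette, the WCAG contrast math is short enough to sketch directly (this follows the WCAG 2.x relative-luminance formula; the beige/brown example colors below are my own illustration, not taken from any specific site):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, ranging from 1:1 up to 21:1."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# A beige-on-dark-brown combo like this lands around 2.9:1 and
# fails the 4.5:1 AA threshold for body text.
print(contrast_ratio((120, 100, 80), (40, 30, 25)) >= 4.5)  # False
```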
Besides, the idea of paying $200/month for the privilege of using AI in my side projects… it just seems stupid to me.
AI might (might not, but often does!) also save you from doing original thinking in the domain, which in a show my side project is what people are interested in
What is the urgency in completing side projects? Commercial projects are usually the ones where you have some urgency.
It is also very fun to tackle hard engineering problems.
I enjoy both, and tend to oscillate between wanting to do a lot of one, or a lot of the other. I do recognize that I've been coding for so long that it's much more exciting to be solving "product problems" rather than "engineering problems", I suspect mostly because it's the area I've explored the least (of the two).
And there is a LOT to learn about a domain while you're working on the problem, even without looking at the code.
I was surprised to realize that some of my friends don't share this sentiment. They take very little pleasure from being product developers, and instead really just enjoy being engineers who work on the code and the architecture. There's absolutely nothing wrong with that, I just found it very surprising. To be honest, I guess perhaps what I found the most surprising is that I am not one of those people?
And when you get your product into the hands of users, finally have that direct feedback line to/from them, and can start working on the problems they find and thinking of product (not necessarily engineering) solutions for them? Man, that's so satisfying. It's like falling in love with coding all over again.
If it helps to compare: you might genuinely want to manage a tricky server and all its various parts, and it'd remove the fun to just put a site on GitHub Pages rather than hosting it on a PDP-11. But if you want to show off your demoscene work, you wouldn't feel you'd missed out on the fun by putting it up on a regular site.
I anticipate that people with a builder spirit and a strong technical background are going to be able to build awesome things in the future. What will the Fabrice Bellard or John Carmack of today be able to build?
(I've not landed on a good solution yet, ollama+opencode kinda works but there are often problems with parsing output and abrupt terminations - I'm sure some of it is the models, some the config, some my pitiful rtx 5090 16gb, and some are just bugs...)
I have a long list of projects that I have thought about but never implemented because of lack of time and energy. LLMs have made that happen.
I like designing programming languages and developing parsers/compilers and virtual machines. But the steps beyond type-checking are so incredibly boring (and I don't like using C or LLVM as targets) that I have done the front end 15-20 times over the last couple of decades and the back end only 3-4 times.
This time, I spent two weeks developing a spec for the VM, including concurrency, exception handling and GC. And I led the AI through each subsystem till I was satisfied with the result. I now have a VM that is within 8x of C in tight loops. Without JIT. It is incredible to be able to allocate arrays of 4B elements and touch each element at random, something that would make python cry.
Working on the compiler now.
I don't think this is overwhelmingly the reason though - I think many are just all AI, but if the project is technically interesting it might be sufficient to get me to grimace through it.
That's basically the entire AI landscape atm.
I keep seeing people do things like spend a weekend building a product then charging ridiculous prices for it with the justification that it's what those products would've cost a few years ago.
For some reason, it doesn't click for them that those prices reflected the effort it took to get to that point, and that the situation has changed.
There's a lot of ways things can be of interest. The problem being solved, how it's being solved, the UI, UX, etc.
THAT it is vibe coded may or may not be interesting to some, but finding it un-interesting because it's vibe coded is no better than finding that it is.
I'd challenge the lack of personal prowess argument. Piecing together technology in novel ways to solve highly targeted problems is a skill, even if you're not hand-crafting CSS and SQL.
I liken it to those who tune cars, who buy cars made in a factory, install parts made by someone else, using tools that are all standardized. In the middle somewhere is the human making decisions to create a final result, which is where the talent exists.
No amount of denial will roll back the technology that millions can use now, that makes it realistic to produce in a day software that would take at least months five years ago.
I've noticed a crazy amount of clearly AI coded projects that do a small subset of an already existing and very trusted open source project. Comments usually point this out, and the OP never responds. I'm not sure what the end goal is, but the whole thing feels like a waste of time for everybody involved.
Each clause you've highlighted has a nugget of truth, but that nugget is not inherently negative, it's just a different perspective which you aren't picking up on.
Indicative in my dictionary doesn't mean definitive. It just makes it much more likely. You can make quality products while LLMs write >99% of the code. This has been possible for more than a year, so it's not a lack of updating of beliefs that is the issue. I've done so myself. Rather, 90% of above products are low quality, at a much higher rate than say, 2022, pre-GPT. As such, it's an indicator. That 10% exists, just like pearls can hide in a pile of shit.
As others have said, the reason is time investment. You can take 2 months to build something where the LLM codes 99%. Or you can take 2 hours. HN, and everywhere else, is flooded by the latter. That's why it's mostly crap. I did the former, and luckily it led to a good result. Not a coincidence.
This applies far beyond coding. It applies to _everything_ done with LLMs. You can use them to write a book in 2 hours. You can use them to write a book in 2 years.
My idea was that, if I see that your project is growing 10k LOC per week and you're the only developer working on it, it's most likely vibe-coded.
I analyzed some open-source projects, but unfortunately it turns out not to be so clear cut. It's relatively easy to estimate the growth rate of a project, but figuring out how much time developers worked on it is very error prone, which results in both false positives and false negatives.
I wrote a post about it (https://pscanf.com/s/352/) if you're interested in the details.
And then look through the commits -- were they only adding new features, or did the author(s) put effort into improvements on engineering fundamentals (benchmarking, testing, documentation, etc)?
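The LOC-growth heuristic described above can be sketched in a few lines. This is my own rough illustration, not the code behind the linked post: the sample `git log --numstat` output is inlined so it runs without a repository, and the 10k-lines-per-week threshold is just the figure the commenter mentioned, not a validated cutoff.

```python
import re
from collections import defaultdict
from datetime import datetime

# Sample output of `git log --numstat --pretty=format:%ad --date=short`,
# inlined for the sketch. Each date line is followed by added/deleted/path rows.
SAMPLE_LOG = """\
2026-04-01
1200\t40\tsrc/app.ts
800\t10\tsrc/ui.tsx
2026-04-03
3500\t100\tsrc/api.ts
2026-04-08
2800\t60\tsrc/agent.ts
"""

def loc_added_per_week(log_text):
    """Sum lines added per ISO (year, week) from numstat-style output."""
    weekly = defaultdict(int)
    week = None
    for line in log_text.splitlines():
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", line):
            d = datetime.strptime(line, "%Y-%m-%d")
            week = d.isocalendar()[:2]  # (ISO year, ISO week number)
        elif line and week is not None:
            added = line.split("\t")[0]
            if added.isdigit():  # binary files report "-" instead of a count
                weekly[week] += int(added)
    return dict(weekly)

weekly = loc_added_per_week(SAMPLE_LOG)
# Illustrative threshold only: ~10k added LOC/week from a single developer
# is a hint (not proof) that most of the code was generated.
print(any(v >= 10_000 for v in weekly.values()))  # False for this sample
```

As the commenter notes, mapping commit timestamps to actual hours worked is the error-prone part; this only measures growth rate.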
Accessibility that can be had on client side should not be a concern on server side.
Before, it was like:
"Oh, X idea is really cool, let me try it!" ... (loses interest before idea validated)
Now: "Oh, X idea is really cool, let me try it!" ... with AI, I get to actually validate that it works (ideally), or reformulate the idea if it doesn't.
The trick is to deliberately use it in a way that helps you learn.
I don't think it's just the base rate of rounded corners, though. These posts feel like the AI tends to spit out a bullet-point list of features, like you'd see in an AI README where each feature has a tangential emoji, and then for a website puts them in a grid of rounded rects.
Most of my time has been spent fitting abstractions together, trying to find meaningful relationships in a field that is still somewhat ill-defined. I suppose I could have thrown lots of cash at it and had it 'done' in a weekend, but I hate that idea.
As it stands, I know what works and what doesn't (to the degree I can, I'm still learning, and I'll acknowledge I'm not super knowledgeable in most things) but I'm trying to apply what I know to a domain I don't readily understand well.
There could be many reasons to not use ai in a case like this, eg: retaining more control, breaking some new ground, because it’s fun, because it’s personal, etc.
In other words, I've found people like the above tend to think of LLMs as fairly static, as if we couldn't change their behavior with a simple sentence instead of complaining about it. It's strange, to me at least.
A hundred self-taught devs not implementing accessibility standards is a different problem from a school teaching 100 students with a curriculum that lacks these standards.
Plus given time constraints, they generally wouldn't try to cram huge amounts of tiny text into every visible inch of the page without some intentional reason to do so (using that somewhat hard to read console-ish font Claude seems to love as a default).
Maybe the dark mode/terminal font/high text density look presents as "cool looking" at first glance for one-shotting evals so they've all converged on it. But to OP's point, this seems like a solvable (or at least mitigable) issue if models or harnesses were concerned about it.
“Don’t have bad vision if you didn’t want to be technical!”
(came across that way)
Those of us who care that technology be accessible to as many people as possible, such as low vision users, find it relevant. You can ignore it if you wish.
See Rawls 'Original Position' on why you should care: https://en.wikipedia.org/wiki/Original_position
This.
Coding assistants handle a great deal of the drudge work involved in refactoring. I find myself doing far more deep refactoring work as quick proofs of concept than before. It's also quite convenient to have coding assistants handle troubleshooting steps for you.
I do think though if I were to delegate the API itself to AI and say something like the code doesn't matter, the high level thinking would suffer from lack of pain working through implementation details.
> Piecing together technology in novel ways to solve highly targeted problems is a skill
The LLM outputs this out of the box? Where's the skill?
I don't believe the comparison to car tuners benefits your thesis here. The spectrum of people I know who tune their cars varies from utter idiots to professional engineers. You cannot state as a fact that anyone who does it has insight or even natural talent. The bar is so low that anyone who has enough money can do it (just like coding with LLMs). In fact one can say that most people are incompetent, and by tuning their cars to varying degrees they endanger themselves and others, enlarge their running/maintenance costs, lower their car's resale value, and harm the environment.
FWIW, there’s also an official frontend-design skill for CC [1]. A while back I incorporated some of the more relevant guidance from WCAG into it [2].
[1] - https://github.com/anthropics/claude-code/blob/main/plugins/...
It also doesn't solve the issue if somebody is browsing your site on a mobile phone where the extension might not even work properly.
WCAG is not difficult - but it does require some modicum of effort.
Stop shoving your wants on others when you can fix it yourself.
Just get some concrete and some lumber, and build that wheelchair ramp.
You can even hire a contractor to follow you around town all day building them as needed.
I hope you remember that well into your adult life.
Your hearing may be lost. Even if you could still read, the website doesn't offer an accurate transcription. You have to rely on someone else (or some other tech) to transcribe. You have to hope their hearing and language skills are good enough for an accurate transcription.
Your vision may be lost. Even if you could still hear, the website doesn't offer an accurate transcription. You have to rely on someone else (or some other tech) to transcribe. You have to hope their reading comprehension and language skills are good enough for an accurate transcription.
Your limbs may be lost. Some apps let you tab around. Some apps make it impossible to find a button until you hover your mouse. Some apps simply don't load unless you press some magic keystrokes. Good luck.
You brought this problem upon yourself, 30 years ago. You brought this problem upon others. People won't care about your problems. Why should they, when you didn't care about theirs?
> I find it bewildering that every thread sharing some project has a comment like this.
Accessibility is legally required and not difficult to add.
Would you deny service to black people? Islamic people? Gay people? Refusing to provide accessibility in your service is no different. You are actively discriminating in a way which could be illegal and certainly is unethical and amoral.
- Exploration: I am "vibe coding" to explore a domain, add many features, refactor the app over and over, as a real time exploration of the domain to see what works and what doesn't
- Specific Execution: I have a full design, a full idea, I've thought about architecture, we're making a plan and we're executing this extremely coherent vision
I've enjoyed using AI for both cases.
An attempt to detect AI design patterns in Show HN pages
Apr 20, 2026
When browsing Hacker News, I noticed that many Show HN projects now have a generic sterile feeling that tells me they are purely AI-generated. Initially I couldn’t tell what it was exactly, so I wondered if we could automatically quantify this subjective feeling by scoring 500 Show HN pages for AI design patterns.
Claude Code has led to a large increase in Show HN projects. So much so that the moderators of HN had to restrict Show HN submissions for new accounts.
Here is how the Show HN submissions increased over the last few years:
Update: dang pointed out that the March 2026 dip correlates with the rollout of /showlim, the view newer accounts now see.
That should give us plenty of pages to score for AI design patterns.
A designer recently told me that “colored left borders are almost as reliable a sign of AI-generated design as em-dashes for text”, so I started to notice them on many pages.
Then I asked some more designer friends what they think are common AI patterns. The answers can be roughly grouped into fonts, colors, layout quirks, and CSS patterns.
Fonts
Colors
Layout quirks
CSS patterns
A few examples from the Show HN submissions:
Badge above the Inter hero.
Same, different page.
Colored border on top.
Dead internet? An AI-generated outreach about my blog that includes a perfect example of an AI design pattern (colored left border).
Icon-topped feature card.
Gradient background + glassmorphism cards.
Now we can try to systematically score for these patterns by going through 500 of the latest Show HN submissions and scoring their landing pages against the list above.
Here is the scoring method:
This ultimately also leads to false positives, but my manual QA run verified it’s maybe 5-10%. If there is any interest in open sourcing the scoring code to replicate (and improve) the run or score your own site, let me know.
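Since the scoring code isn't published, here is a minimal hypothetical sketch of how such pattern scoring could work. The pattern names and regexes below are my own illustration of a few of the traits the post describes (Inter font, colored left borders, glassmorphism, gradients, hero badges), not the post's actual 15 checks:

```python
import re

# Hypothetical pattern checks: naive regexes over a page's raw HTML+CSS.
# A real scorer would need to fetch linked stylesheets and computed styles.
PATTERNS = {
    "inter_font": re.compile(r"font-family:[^;]*\bInter\b", re.I),
    "colored_left_border": re.compile(r"border-left:\s*\d+px\s+solid\s+#?\w+", re.I),
    "glassmorphism": re.compile(r"backdrop-filter:\s*blur", re.I),
    "gradient_background": re.compile(r"background:[^;]*linear-gradient", re.I),
    "hero_badge": re.compile(r'class="[^"]*\bbadge\b', re.I),
}

def score_page(html):
    """Return (number of patterns triggered, which ones)."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(html)]
    return len(hits), hits

sample = """
<style>
body { font-family: Inter, sans-serif; background: linear-gradient(#fff,#eee); }
.card { backdrop-filter: blur(12px); border-left: 4px solid #6366f1; }
</style>
<span class="pill badge">New</span>
"""
print(score_page(sample))  # triggers all 5 illustrative patterns
```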
A single pattern doesn’t necessarily make a site AI-generated, so I grouped them into three tiers based on how many of the 15 patterns they trigger:
Heavy slop (5+ patterns) · 105 sites · 21%
Mild (2–4) · 230 sites · 46%
Clean (0–1) · 165 sites · 33%
Is this bad? Not really, just uninspired. After all, validating a business idea was never about fancy design, and before the AI era, everything looked like Bootstrap.
There is a difference between trying to craft your own design and just shipping with whatever defaults the LLMs output. And the same has been the case pre-LLM when using CSS/HTML templates.
I guess people will get back to crafting beautiful designs to stand out from the slop. On the other hand, I’m not sure how much design will still matter once AI agents are the primary users of the web.
This post is human-written, the scoring and analysis were AI-assisted.
the wheelchair is not built into the site, and only requires a few hooks or the odd helping hand to work.
mapping back to software, and especially websites, your user agent is your user agent. it should render websites in the way you want to see them, regardless of what colours the designer chose.
an AI accessibility browser is more like a wheel chair than a ramp
Bad analogy, as none of those traits requires any accommodation in a website or app.
Not that I disagree with the premise. Almost everyone will eventually have trouble reading small, low contrast text or details on their phone or screen, if nothing else.
Now if only there were an ADA for website performance...
git worktrees as an example.
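For anyone unfamiliar, a minimal worktree sketch (the temp-dir paths and branch names are just examples): each branch gets its own checkout of the same repository, so separate sessions can work side by side without stashing.

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
# Create a second working directory checked out on a new branch.
git -C "$repo" worktree add -q -b feature "$repo-feature"
git -C "$repo" worktree list   # shows both checkouts and their branches
```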
It's not even about age.
You can twist an ankle playing basketball and need accessibility features like ramps and grab bars.
You can get hit in the eye by a bit of debris when your toy drone crashes, and need help reading a screen while it heals.
People who don't think they need accessibility only have to wait. Everyone gets their turn.
More thinking isn’t a simple good thing. Given a limit to how much thought I can give any specific task, adding extra work may mean less where it’s most useful.
…right now, today. But they might consider “build a world for ‘old’ you”
https://www.w3.org/WAI/test-evaluate/preliminary
Another possibility (although I’ve never actually tried this myself) is an MCP server that someone built specifically to connect to Lighthouse, which includes accessibility testing as part of its benchmarks.