> [...]
> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.
> [...]
> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.
> [...]
> No pop-ups. No blinking corners. Just content, clear and immediate.
It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.
And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.
In this particular case the linked article is definitely AI generated.
Back in the day, websites could just put up an animated "under construction" gif.
FedEx now has a voice bot when you call, and it is kind of good and fast. I mean, faster than navigating their website. It picks up directly after some boilerplate. It can understand me.
With website chatbots we could have similar leaps if they are done well and have access to CRM/ERP etc. to actually help you.
Obviously it's just a script embedded in the page, so it has no actual place in the design. The effect, especially on mobile, is this dance of starting to read a page, having it obscured by annoying popups, and trying (and failing) to close the popup with the hidden 12x12-pixel x button.
Just like the entire ads market, it’s all forgery to drive up clicks so owners can say to the clients that there is interaction.
Don’t get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles; closing it is buried behind a menu that requires a brain surgeon to interact with, instead of clicking the ad itself. I currently have 15 tabs in Safari from ads I inadvertently clicked.
This sums up everything driving the tech sector right now. From execs at big tech to nobodies on X.
EDIT: if I think about the nature of it, the visibility fight is about decreasing attention amid increasing channels and noise, so visibility tactics go to the extreme. And the fear of looking behind comes from previous tech cycles and the thought: what if you had missed those? Maybe those with the most fear are the ones that did.
I was skeptical but it gets a 68 NPS from users, even if we do get the occasional "why are you investing in AI I hate it" coming through the feedback channel.
As ever, the issue is "what problem are you solving". If it's that you want more people to put their hand up and talk to you/order something, chatbots seem like a bad solution. If it's that you have a ton of complex docs that people have to read in order to implement and use your product, it's not the solution but it's probably part of a solution.
Your clients seem to have got what they wanted, or at least someone who has learned to write like one.
The consultants apparently had the bot load and fed it an immediate prompt which greeted the user. This was happening on every page load. Bad consultants, bad bot.
The op is a blog post. You’re talking about blog post writing. Maybe you just don’t like their style?
It’s also true that LLM second drafts are a thing.
And it’s true both can ‘record scratch’ you right out of attention.
As is the now-present trend for readers to be impatient and quickly bored.
And this criticism of writing style (for my part, this article is perfectly readable): what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.
> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine
Which of course doesn't connect to the rest of the article contents, because the AI doesn't have any intention in its writing.
They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, are an LLM.
Humans deviate a lot more than this; they use run-on sentences or lose the thread in their writing.
This blog however reads like every-other post on LinkedIn. Semi-professional tone, with a strong "You, Me" hook to most posts.
I encourage everyone to make an LLM-generated blog, don't post the articles anywhere, but generate one, to get a feeling for how these things write.
Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.
Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.
https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...
The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.
It's always been like this. I used to build websites in the 90s and it was exactly like that. It was also horrible. People who had no tech background whatsoever making decisions on which tech to use (PHP vs ASP vs ColdFusion, remember those?); overpaying agencies to make HTML "templates" that had to have round corners everywhere. Etc.
Not everything's great today, but it's a little less bad I think.
The whole corpus is in there, but the standard style is what it's tuned for.
And the people I read were better at not putting in unnecessary, completely made-up facts or illogical implications.
As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.
For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)
As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.
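The "grep every linking word" trick above can be mechanized. Here is a minimal Python sketch of the idea; the word list, function name, and repetition threshold are my own illustrative choices, not anything the comment specifies:

```python
import re
from collections import Counter

# Illustrative set of linking words to watch for; extend to taste.
LINKING_WORDS = {"however", "thus", "moreover", "furthermore",
                 "therefore", "indeed", "consequently"}

def overused_connectives(text: str, threshold: int = 3) -> dict:
    """Count linking words in `text` and return any that appear
    at least `threshold` times."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in LINKING_WORDS)
    return {w: n for w, n in counts.items() if n >= threshold}

sample = ("Thus we begin. Thus we continue. However, plans change. "
          "Thus we end.")
print(overused_connectives(sample))  # {'thus': 3}
```

A real pipeline might also check for the reused-phrasing problem the comment describes (the same sentence opener several times in a row), but even this crude frequency count catches the "same connective five times in a row" case.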
Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.
(Not that I claim to be a particularly good writer.)
There've been stylistic fads before LLMs were a thing, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.
Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.
It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible"
Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.
Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.
I'm not witch-hunting, there are just a lot of witches.
> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content
We might disagree here, but if we're strict they did not say "either/or", especially not explicitly. They raised two possibilities, but didn't exclude others.
> there's no reason to believe the style came from LLMs
They say "might" and "plausibly". I think there's no belief there until you assume it.
And even if: it's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.
I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally cause a pause and a think, before pressing submit.
Tbh the whole smolweb concept by this person seemed kinda weird right when I discovered it was a thing. It seems to not really be a thing, but the person is really trying to convince you that it is.
2026-03-14 12:55
It always starts the same way. The client pulls out their phone mid-meeting, navigates to a competitor's website, and holds the screen up like evidence. "You see? They have one of those." A little bubble. Bottom right corner. Blinking...
For years, that gesture was about carousels. Every homepage had to have one, big, slow, full of stock photos that nobody asked for. I built dozens of them. They spun. They faded. They slid. Visitors ignored them completely, scrolled past in half a second, and went looking for the phone number.
Then the trend quietly died, as trends do. Not because anyone decided carousels were bad. Just because something newer came along to copy.
Cookie consent banners came next. Every site needed one, even the ones with no cookies whatsoever. Then Google Tag Manager, even for clients who never once opened an analytics report. I asked one of them, eighteen months after launch, if he'd ever looked at the traffic stats. He hadn't. He didn't even remember the login.
Now it's the chatbot.
I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.
"Do you actually use chatbots when you visit other websites?"
There's usually a pause. Then a laugh.
No, not really. They close them immediately. They find them annoying. Half the time they answer something completely unrelated to the question. Once, a client told me about a competitor's chatbot that confidently gave out wrong opening hours for months. He thought it was hilarious. And yet: "but we should have one, right?"
That's the moment I find both fascinating and exhausting.
It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind. A website without a chatbot in 2026 risks feeling unfinished, like something's missing. Even if what's missing is a half-broken widget that most visitors dismiss in three seconds. The chatbot has become a social signal, not a tool. A way of saying: we're keeping up.
I've tried the opposite approach. When a client mentions the chatbot, I'll sometimes open a few smolweb sites, fast, minimal, readable, calm. No pop-ups. No blinking corners. Just content, clear and immediate.
Their eyes change. "Oh, that loads fast." "That's easy to read." "I like that."
And they mean it. Genuinely.
Then I ask if they'd want something like that.
"Well... but it looks a bit simple, doesn't it?"
Simple is the word that keeps coming up. And I've learned that when a client says simple, they don't mean easy to use. They mean not impressive enough. They mean what will people think. A lean, fast website doesn't look like it cost anything. It doesn't signal effort. It doesn't say: we take this seriously.
The real irony is that building something genuinely simple, something that loads instantly and says exactly what it needs to say and nothing more, is often harder than bolting on a chatbot. But that's invisible work. Nobody sees the restraint.
I don't have a solution to offer here. I'm not going to end this with a tidy list of tips for convincing clients to embrace the smolweb. That's not how it works, and pretending otherwise would be its own kind of dishonesty.
The pressure isn't really coming from clients anyway. It's coming from the web itself, from a decade of bloated pages, dark patterns, and feature arms races that quietly redefined what a "real" website looks like. Clients are just reading the room. The room is wrong, but they're not imagining it.
The shift might come from users, not decision-makers. It might come when enough people notice that the fast, calm site was easier to use. That they actually found what they came for. That they didn't have to close three things before reading a single line.
Maybe we plant the seed and wait.
In the meantime, the chatbot is live. It sits in the corner of my client's homepage, blinking patiently. It doesn't know the opening hours. It doesn't know the prices. It doesn't really know anything.
But it's there. Just like everyone else's.
You encounter the same behaviour? Share it with me on the Fediverse