Well, this is very interesting, because I'm a native English speaker that studied writing in university, and the deeper I got into the world of literature, the further I was pushed towards simpler language and shorter sentences. It's all Hemingway now, and if I spot an adverb or, lord forbid, a "proceeded to," I feel the pain in my bones.
The way ChatGPT writes drives me insane. As for the author, clearly they're very good, but I prefer a much simpler style. I feel like the big boy SAT words should pop out of the page unaccompanied, just one per page at most.
> This style has a history, of course, a history far older than the microchip: It is a direct linguistic descendant of the British Empire. The English we were taught was not the fluid, evolving language of modern-day London or California, filled with slang and convenient abbreviations. It was the Queen's English, the language of the colonial administrator, the missionary, the headmaster. It was the language of the Bible, of Shakespeare, of the law. It was a tool of power, and we were taught to wield it with precision. Mastering its formal cadences, its slightly archaic vocabulary, its rigid grammatical structures, was not just about passing an exam.
> It was a signal. It was proof that you were educated, that you were civilised, that you were ready to take your place in the order of things.
Much of writing style is not about conveying meaning but conveying the author's identity. And much of that is about matching the fashion of the group you want to be a member of.
Fashion tends to go through cycles because once the less prestigious group becomes sufficiently skilled at emulating the prestige style, the prestigious need a new fashion to distinguish themselves. And if the emulated style is ostentatious and flowery, then the new prestige style will be the opposite.
Aping Hemingway's writing style is in a lot of ways like $1,000 ripped jeans. It sort of says "I can look poor because I'm so rich I don't even have to bother trying to look rich."
(I agree, of course, that there is a lot to be said for clean, spare prose. But writing without adverbs doesn't mean one necessarily has the clarity of thought of Hemingway. For many, it's just the way you write so that everyone knows you got educated in a place that told you to write that way.)
Beyond these surface-level tells, though, anyone who's read a lot of both unassisted human writing and AI output should be able to pick up on the many subtler cues, which persist partly because they're harder to describe (and so harder to RLHF out of LLMs).
But even today, when it's not too hard to sniff out AI writing, it's quite scary to me how bad many (most?) people's chatbot-detection senses are, as this article indicates. Mistaking human writing for LLM output is a false positive, which is bad but not catastrophic; the opposite seems much worse. The long-term social impact, becoming "post-truth", seems poised to be what people have been raving / warning about for years w.r.t. other tech like the internet.
Today feels like the equivalent of WW1 for information warfare: society has been caught with its pants down by the speed of innovation.
I just saw someone today whom multiple people accused of using ChatGPT, even though their post was one solid block of text with multiple grammar errors. But their phrasing resembled the way ChatGPT speaks, so they got accused of it, and the accusers got massive upvotes.
Some people are perhaps overly focussed on superficial things like em-dashes. The real tells for ChatGPT writing are more subtle -- a tendency towards hyperbole (it's not A, it's [florid restatement of essentially A] B!), a certain kind of rhythm, and frequently a hard-to-describe "emptiness" of claims.
(LLMs can write in many styles, but this is the sort of "kid filling out the essay word count" style you get in ChatGPT etc. by default.)
The formal part resonates, because most non-native English speakers learnt it at school, which teaches you literary English rather than day-to-day English. And this holds for most foreign languages learnt in this context: you write prose, essays, three-part essays with an introduction and a conclusion. I got the same kind of education in France, though years of working in IT gave me a more "American" English style: straight to the point and short, with a simpler vocabulary for everyday use.
As for whether your writing is ChatGPT: it's definitely not. What those "AI bounty hunters" would miss in such an essay: there is no fluff. Yes, the sentences may use the "three points" classical method, but they don't stick out like a sore thumb - I would not have noticed had the author not mentioned it. This does not feel like filler. Usually with AI articles, I find myself skipping more than half of each paragraph due to the low information density - just give me the prompt. This article got me reading every single word. Can we call this vibe reading?
I also love and use em-dashes regularly. ChatGPT writes like me.
e.g. > [...] and there is - in my observational opinion - a rather dark and insidious slant to it
Let's leave it at "insidious" and "in my opinion". Or drop "in my opinion" entirely, since it goes without saying.
Just take one dip and end it.
Unfortunately I think posts like this only seem to detract from valid criticisms. There is an actual ongoing epidemic of AI-generated content on the internet, and it is perfectly valid for people to be upset about this. I don't use the internet to be fed an endless stream of zero-effort slop that will make me feel good. I want real content produced by real people; yet posts like OP only serve to muddy the waters when it comes to these critiques. They latch onto opinions of random internet bottom-feeders (a dash now indicates ChatGPT? Seriously?), and try to minimise the broader skepticism against AI content.
I wonder whether people like the author will regret their stance once a sufficient number of people are indoctrinated and their content becomes irrelevant. Why would they read anything you have to say if the magic writing machine can keep shitting out content tailored for them 24/7?
According to the Russian-language Wikipedia (https://ru.wikipedia.org/wiki/%D0%94%D0%BE%D0%BA%D0%B0%D0%B7...), the original tale goes back to the famous Persian poet Rumi of the XII century, which just makes me tickled pink about how awesome language is.
They have to actually read material, and not just use the structure as a proxy for ability.
The exact same problem exists with writing. In fact, this problem seems to exist across all fields: science, for example, is filled with people who have never done a groundbreaking study, presented a new idea, or solved an unsolved problem. These people and their jobs are so common that the education system orients itself to teach to them rather than anyone else. In the same way, an education in literature focused on the more likely traits you’ll need to get a job: hitting deadlines, following the expected story structure, etc etc.
Having confined ourselves to a tiny little box, can we really be surprised that we’re so easy to imitate?
The models mostly say "mat".
Just recently I was amazed with how good text produced by Gemini 3 Pro in Thinking mode is. It feels like a big improvement, again.
But we also have to be honest and accept that nowadays, using a certain kind of vocabulary or paragraph structure will make people think that the text was written by AI.
‘Striding’ is ‘purposeful’; ‘trudging’ expresses ‘weariness’; ‘ambling’ implies ‘nonchalance’.
Good verb choice reduces adverb dependence.
I think the only solution to this is for people to simply stop questioning AI usage. Pretence is everywhere: face makeup, dress, the way you speak, your forced smile...
Besides, of course what people write will sound like LLMs, since LLMs are trained on what we've been writing on the internet. For those of us who've been lucky enough to write a lot and are better represented in the dataset, LLM output will be closer to how we already wrote - and then we get blamed for sounding like LLMs, because apparently people forget that LLMs were trained on texts written by humans.
Kenyans write the way the British taught them before they left - which isn't necessarily how the British themselves spoke or wrote.
All the toil of word-smithing to receive such an ugly reward, convincing new readers that you are lazy. What a world we live in.
And guess what, when you revise something to be more structured and you do it in one sitting, your writing style naturally gravitates towards the stuff LLMs tend to churn out, even if with fewer bullet points and em-dashes (which, incidentally, iOS/macOS adds for me automatically even though I am a double-dash person).
That's just sad. I really feel for this author.
Both aim at using an English that is safe, controlled and policed for fear of negative evaluation.
Update: To illustrate this, here's a comparison of a paragraph from this article:
> It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.
And ChatGPT's "improvement":
> This is a new frontier of an old struggle: the struggle to be seen, to be understood, to be granted the easy presumption of humanity that others receive without question. My writing is not the product of a machine. It is the product of history—my history. It carries the echo of a colonial legacy, bears the imprint of a rigorous education, and stands as evidence of the labor required to master the official language of my own country.
Yes, there's an additional em-dash, but what stands out to me more is the grandiosity. Though I have to admit, it's closer than I would have thought before trying it out; maybe the author does have a point.
His responses in Zoom calls were the same: mechanical, sounding AI-generated. I even checked one of his WhatsApp responses by asking Meta AI whether it was AI-written, and Meta AI agreed that it was, giving its reasons for believing so.
When I showed the response to the colleague, he swore that he was not using any AI to write his responses. I believed him after he told me it was not AI-written. And now, reading this, I can imagine that it's not an isolated experience.
[0] https://www.theverge.com/features/23764584/ai-artificial-int...
Some things I've learned/realized from this thread:
1. You can make an em-dash on Macs using -- or a keyboard shortcut
2. On Windows you can do something like Alt + 0151 which shows why I have never done it on purpose... (my first ever —)
3. Other people might have em-dashes on their keyboard?
I still think it's a relatively good marker for ChatGPT-generated text iff you are looking at text where the above situations probably don't apply (give me more if you think of them), but I will keep in mind in the future that it's not a guarantee and that people do not have the exact same computer setup as me. Always good to remember that. I still do the double space after the end of a sentence, after all.
> You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.
This is:
> humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm
And once you start noticing the 'threes', it's fun also.
Because while people OBVIOUSLY use dashes in writing, humans usually fell back on the (technically incorrect) hyphen aka the "minus symbol" - because that's what's available on the keyboard, and basically no one will care.
Seems like, in the biggest game of telephone called the internet, this has devolved into "using any form of dash = AI".
Great.
LLMs - like all tools - reduce redundant & repetitive work. In the case of LLMs it’s now easy to generate cookie cutter prose. Which raises the bar for truly saying something original. To say something original now, you must also put in the work to say it in an original way. In particular by cutting words and rephrasing even more aggressively, which saves your reader time and can take their thinking in new directions.
Change is a constant, and good changes tend to gain mass adoption. Our ancestors survived because they adapted.
Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."
No. No no no. The next word is "mat"!
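For what it's worth, the metric itself is simple enough to sketch. A toy illustration, with hypothetical per-token probabilities rather than output from any real model:

```python
import math

def perplexity(token_probs):
    # Perplexity is the exponentiated average negative log-probability:
    # exp(-mean(log p)). Lower = the model found the text more predictable.
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# If the model rates every word highly likely, perplexity stays low:
predictable = perplexity([0.9, 0.8, 0.9, 0.95])

# One surprising continuation ("mat" where the model bet on "floor")
# drags the average log-probability down and pushes perplexity up:
surprising = perplexity([0.9, 0.8, 0.9, 0.05])

assert surprising > predictable
```

This is also why perplexity-based "AI detectors" flag formal, well-structured human prose: text that follows predictable conventions scores low, whether or not a machine wrote it.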
Earlier today I stumbled upon a blog post that started with a sentence that was obviously written by someone with a Slavic background (most writers from other language families produce certain grammatical patterns when writing in another language; German is also quite recognisable, for example). My first thought was "great, this is most likely not written by an LLM".
I don't really understand the aversion some people have to the use of LLMs to generate or refine written communication. It seems to trigger the "that's cheating!" outrage impulse.
For sure he describes an education in English that seems misguided and showy. And I get the context - if you don't show off in your English, you'll never aspire to the status of an Englishman. But doggedly sticking to anyone's "rules of good writing" never results in good writing. And I don't think that's what the author is doing, if only because he is writing about the limitations of what he was taught!
So idk maybe he does write like ChatGPT in other contexts? But not on this evidence.
I have seen people use "you're using AI" as a lazy dismissal of someone else's writing, for whatever reasons. That usually tells you more about the person saying it than the writing though.
We will all soon write and talk like ChatGPT. Kids grow up asking ChatGPT for homework help; people use it for therapy, for resumes, for CVs, for their imaginary romantic "friends"; asking everyday questions of the search engine, they'll get some LLM response. After some time you'll find yourself chatting with a relative or a coworker over coffee, and instead of hearing "lol, Jim, that's bullshit" you'll hear something like "you're absolutely right, here, let me show you a bulleted list of why this is the case...". Even scarier, you'll soon hear yourself say that to someone as well.
I'm sure there's some voice actor out there who can't get work because they sound too similar to the generated voices that appear in TikTok videos.
Seeing a project that basically wraps 100 lines of code in a novel-length README, à la "(emoticon) How does it compare to... (emoticon)" bla bla, really puts me off.
- Do not confuse 'night' with 'evening'.
- This office spells it 'programme'.
- Hotels are 'kept', not 'run'.
- Dead men do not leave 'wives', but they may leave 'widows'.
- 'Very' is a word often used without discrimination. It is not difficult to express the same meaning when it is eliminated.
- The relative pronoun 'that' is used about three times superfluously to the one time that it helps the sense.
- Do not write 'this city' when you mean Chicago.
AI companies and some of their product users relentlessly exploit the communication systems we've painstakingly built up since 1993. We (both readers and writers) shouldn't be required to individually adapt to this exploitation. We should simply stop it.
And yes, I believe that the notion this exploitation is unstoppable and inevitable is just crude propaganda. This isn't all that different from the emergence of email spam. One way or the other this will eventually be resolved. What I don't know is whether this will be resolved in a way that actually benefits our society as a whole.
Security guards at tech company offices are the only ones who wear suits, presumably because it's a mandated uniform, not by choice.
… though, yes, in average hands a “proceeded to”, and most of the quoted phrases, are garbage. Drilling the average student on trying to make their language superficially “smarter” is a comically bad idea, and is indeed the opposite of what almost all of them need.
> strode purposefully
My wife (a writer) has noticed that fanfic and (many, anyway—plus, I mean, big overlap between these two groups) romance authors loooove this in particular, for whatever reason. Everyone “strides” everywhere. No one can just fucking walk, ever, and it’s always “strode”. It’s a major tell for a certain flavor of amateur.
Reading through my old self-reviews, they're basically exactly like your examples. Making sentences longer just to make your story more interesting.
Because at the end your promotion wasn't about what you achieved. It was about your story and how 7 people you didn't know voted on it.
Why did you say you were "pushed towards" simpler language instead of "I liked it more"?
Why did you say "I feel the pain in my bones" and "drives me insane" instead of "I dislike it"?
Why did you say "the big boy SAT words should pop out of the page unaccompanied" instead of "there should only be one big word per page"?
Perhaps flowery language expands your ability to express yourself?
I'd characterise Americans as less pretentious and more straight talking.
This kind of flowery language is typical (or symptomatic, depending on diagnosis) of how English people actually used to speak and write.
The average English vocabulary has dwindled noticeably in my life.
Then I moved to the US and noticed that even the books were sort of written in a way that required no extra effort. The English I learned while playing RPGs (with no speech at the time) was enough to read most books from the library and a dictionary was only needed occasionally. And everyone basically just knew the same set of words, youth and adults alike. I also noticed that US English has a distinct tendency of making up new words that are simpler and more intuitive than the original expressions. It turns things into verbs. This is why people Google, Tweet and Vibe.
Then I went to an engineering college, and it teaches us to distill everything into its simplest fundamental components. I like it, and I now want people to be as direct as possible.
As a non-native English speaker, I've always had to speak and write better than native speakers, and always had to tolerate the "You speak/write really well, where are you from?". Today they no longer ask; AI is their answer, and they judge accordingly.
Then you start learning more & more abstraction (classes, patterns, monads...).
In the end you strive to write simple code, just like at the beginning.
Language is like clothing.
Those with no taste - but enough money - will dress in gaudy ways to show off their wealth. The clothing is merely a vector for this purpose. They won’t use a piece of jewelry only if it contributes to the ensemble. Oh, no. They’ll drape themselves with gold chains and festoon their fingers with chunky diamond rings. Brand names will litter their clothing. The composition will lack intelligibility, cohesiveness, and proportion. It will be ugly.
By analogy, those with no taste - but enough vocabulary - will use words in flashy ways to show off their knowledge. Language is merely a vector for this purpose. They won’t use a word only if it contributes to the prose. Oh, no. They’ll drape their phrases with unnecessarily unusual terms and festoon their sentences with clumsy grammar. Obfuscation, rather than clarity, will define their writing. The composition will lack intelligibility, cohesiveness, and proportion. It will be ugly.
As you can see, the first difference is one of purpose: the vulgarian aims for the wrong thing.
You might also say that the vulgarian also lacks a kind of temperance in speech.
(1) writing to communicate ideas, in which case simpler is almost always better. There's something hypnotic about simple writing (e.g. Paul Graham's essays) where information just flows frictionlessly into your head.
(2) writing as a form of self-expression, in which case flowery and artistic prose is preferred.
Here's a good David Foster Wallace quote in his interview with Bryan Garner:
> "there’s a real difference between writing where you’re communicating to somebody, the same way I’m trying to communicate with you, versus writing that’s almost a well-structured diary entry where the point is [singing] “This is me, this is me!” and it’s going out into the world.
I'm the complete opposite. Hemingway ruined writing styles (and I have a pet theory that his, and Plain English, short sentences also helped reduce literacy in the long run in a similar way TikTok ruins attention spans). I'm a 19th century reader at heart. Give me Melville, Eliot, Hawthorne, though keep your Dickens.
I have a confession to make: I didn't think lolcat speak was funny, even at the time.
It's pretty annoying and once you catch them doing it, you can't stop.
Outrage mills mill outrage. If it wasn't this, it would be something else. The fact that the charge resonated is notable. But the fact that it exists is not.
Or rather by the slowness of regulation and enforcement in the face of blatant copyright violation.
We've seen this before, for example with YouTube, which became the go-to place for videos by allowing copyrighted material to be uploaded and hosted en masse, and then a company that was already a search engine monopoly was somehow allowed to acquire YouTube, thereby extending and reinforcing Google's monopolization of the web.
https://www.theguardian.com/technology/2024/apr/16/techscape...
They said Nigerian, but there may be a common way English is taught in the entire area. Maybe the article author will chip in.
> ChatGPT is designed to write well
If you define "well" as overly verbose, avoiding anything that could be considered controversial, and generally sycophantic but bland, soulless corporate speak, then yes.
> there is - in my observational opinion - a rather dark and insidious slant to it
That feels too authentic and personal to be any of the current generation of LLMs.
Writing well is about communicating ideas effectively to other humans. To be fair, throughout linguistic history it was easier to appeal to an audience's innate sense of authority by "sounding smart". Actually being smart in using the written word to hone the sharpness of a penetrating idea is not particularly evident in LLMs to date.
It gives off a car-salesman vibe; I really dislike it, and I personally consider it a very bad writing style for this very reason.
I very much prefer LLMs that don't appear to be trained on such data, or I reword my questions a lot more to get saner writing styles.
That being said it also reminds me of journalistic articles that feel like the person just tried to reach some quota using up a lot of grand words to say nothing. In my country of residence the biggest medium (a public one) has certain sections that are written exactly like that. Luckily these are labeled. It's the section that is a bit more general, not just news and a bit more "artsy" and I know that their content is largely meaningless and untrue. Usually it's enough to click on the source link or find the source yourself to see it says something completely different. Or it's a topic that one knows about. So there even are multiple layers to being "like LLMs".
The fact that people are taught to write that way outside of marketing or something surprises me.
That being said, this is just my general genuine dislike of this writing style. How an LLM writes is up to a lot of things, also how you engage with it. To some degree they copy your own style, because of how they work. But for generic things there is always that "marketing talk" which I always assumed is simply because the internet/social media is littered with ads.
Are Kenyans really taught to write that way?
But yeah, I definitely find the mild grammatical quirks expected from English-as-a-foreign-language speakers a positive these days, because the writing appears to reflect their actual thoughts and actual fluency.
Omitting articles? To me, that has always signaled "this will be an interesting and enlightening read, although terse and in need of careful thought." I've found sites from that part of the Internet to be very useful for highly technical and obscure topics.
(check-mark emoji) Add more emoji — humans love them! (red x emoji) Avoid negative words like "bullshit" and "scarier."
(thumbs-up emoji) Before long you'll get past the human feedback of reinforcement learning! (smiley-face)
Now, please, divulge your secret--your verbal nectar, if you wish--so that I too can flower in your tongue!
Beyond the stylistic bits like "history—my history", which I don't really mind, what makes it bad to me is the detachment from reality.
As a reader, I persistently feel like I just zoned out. I didn't. It's just the mind responding to having absorbed zero information despite reading a lot of text that, at face value, seems like it was written with purpose.
We were also taught in Content Lab at uni to prefer short, punchy sentences. No passive voice, etc. So academia is in some ways pushing that same style of writing.
ChatGPT :|
ChatGPT (japan) XD
Perhaps the US-centric "optimization" of English is to blame here, since it is so obvious in regular US media we all consume across the planet, and is likely the contrasting style.
Its overuse is definitely a marker of either AI or a poorly written body of text. In my opinion, if you have to rely on excessive parentheticals, then you are usually better off restructuring your sentences to flow more clearly.
When I copied and pasted them in, it failed, obviously, so... yeah. If you have terminal commands that use `--`, don't copy-paste them out of Notepad.
E.g., I was typing Alt-0151 and Alt-0150 (en-dash) on the reg in my middle school and high school essays, along with in AIM. While some of my classmates were probably typing double hyphens, my group of friends was using the keyboard shortcuts, so I am now learning from this "detect an LLM" phase that there's a vocal group of people who do not share this experience or perspective of human communication. And that having a mother who worked in technical publishing and insisted I use the correct punctuation rather than two hyphens was not part of everyone's childhood.
Humanity has always been about errors.
Authenticity, whether it is sincere or not, can become an incredibly powerful force now and then. Regardless of AI, the communication style in tech, and overall, was bound to go back to basics after the hacker culture of the post-dotcom era morphed, in the 2010s, into the very corporatism it was fighting to begin with, yet again.
I would not want to be an artist in the current environment, it’s total chaos.
Interesting, because it failed me too, just because I use Firefox. Were you told about the article, or did it actually work with your screen reader software?
I suppose I don’t mind people using AI voices if they have a thick accent or are shy about their voice, but if I’m watching a video and clock the voice as AI (usually because the tone is professional but has no expression and then the speaker mispronounces a common word or acronym) it does make me start to wonder if the script is AI. There are a lot of people churning out tutorials that seem useful at first but turn out to have no content (“draw the rest of the owl” type stuff) because they asked AI to create a tutorial for something and didn’t edit or reprompt based on the output. The video essay world is also starting to get hit pretty hard, to the point that I’m less willing than ever to watch content unless I already know the creator’s work.
[0] Shameless plug: https://youtu.be/PGiTkkMOfiw
Nigeria and Kenya are two very different regions with different spheres of business. I don't know, but I wouldn't expect the English to overlap that much.
I’m highly skeptical. At one point the author tries to argue this local pedagogy is downstream of “The Queen’s English” & British imperial tradition, but modern LLM-speak is a couple orders of magnitude closer in the vector space to LinkedIn clout-chasing than anything from that world.
The main difference I see between the author's writing and an LLM's is that the flourish and the structure mentioned are used meaningfully. They circle around a bit too much for my taste, but it's not nearly as boring as reading AI slop, which usually stretches a simple idea over several paragraphs.
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.
This sucks, but it needs to be done in education, and/or at least in areas where good writing and effective communication are considered important. Good grades need to be awarded only to writing that exceeds the quality and/or personality of a chatbot, because otherwise the degree is being awarded to a person who is no more useful than a clumsy tool.
And I don't mean avoiding superficialities like the em-dash: I mean the bland over-verbosity and other systemic tells—or rather, smells—of AI slop.
I can't blame others though - I was looking at notes I wrote in 2019, and even those gave me the flavor of something ChatGPT wrote. I used the word "delve" and "not just X but also Y" often, according to my Obsidian. I've taken to inserting the occasional spelling mistake or Unorthodox Patterns of Writing(tm), even when I would not otherwise.
It's a lot easier to get LLMs to adhere to good writing guides than it is to get them to create something informative and useful. I like to think my notes and writing are informative and useful.
How dare they.
(And as #9 on the leaderboard, I feel the need to defend myself!)
In comparison, I can sort of confidently ask GPT-5.1/2 to "revise this but be terse and concise about it" and arrive at something that is more structured than what I input but preserves most of my writing style and doesn't bore the reader.
It would be ironic and terrific if AI caused ordinary Americans to devote more time to evaluating their sources.
I hope ChatGPT starts writing only short sentences.
Punchy one-liners.
One thought per line.
So marketers finally realize this does not work.
And stop sending me junk emails written like this.
"No weasel words!"
Social media artists, gallery artists and artists in the industry (I mean people who work for big game/film studios, not industrial designers) are very different groups. Social media artists are having it the hardest.
The voice... idk, I don't hear a lot of voices that I think or know were generated, so I'm not qualified to say, but it didn't give me generated vibes. There are no glitches or mispronunciations that I'd expect to pop up at least a few times across 15 minutes of material.
"He strode up to Helen and asked, 'What are you doing?'"
"He sidled up to Helen and asked, 'What are you doing?'"
"He tromped up to Helen and asked, 'What are you doing?'"
Each of those sentences conveys a slightly different action. You can almost imagine the person's face having a different expression in each version.
Yes, I hate it when amateurs just search/replace by thesaurus. But I think different words have different connotations, even if they mean roughly the same thing. Writing would be poorer if we only ever used "walk".
For instance Mark Twain is basically full of endless amazing quotes with lovely nuance, yet in contemporary times how many people would miss the meaning in a statement like "Prosperity is the best protector of principle"? I can already see people raging over his statement, taken at face value. Downvote the classist!
I mean, it seems like it could work if you get to follow it up with a "de-education" step. Phase 1: force them to widen their vocabulary by using as much of it as possible. Phase 2: teach them which words are actually appropriate to use.
The only "stride" I know relates to the gap between heterogeneous elements in a contiguous array
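For what it's worth, NumPy exposes exactly this sense of "stride": the byte step you take to reach the next element along each axis of a contiguous array. A minimal illustration:

```python
import numpy as np

# A 3x4 array of 8-byte integers, stored contiguously in row-major order.
a = np.zeros((3, 4), dtype=np.int64)

# strides = bytes to step to reach the next element along each axis:
# 32 bytes to the next row (4 elements x 8 bytes), 8 bytes to the next column.
print(a.strides)  # (32, 8)
```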
- Barely literate native English speakers not comprehending even minimally sophisticated grammatical constructs.
- Windows-centric people not understanding that you can trivially type em-dash (well, en-dash, but people don’t understand the difference either) on Mac by typing - twice.
How do you like that, Mr. Rat
Thought the Cat.
All we can hope is for a local to show up and explain.
There’s this conversation that keeps happening, and… ok. Ok. This is the post that finally set me off.
The replies pointed out something crucial, something that makes this whole debate even more infuriating: Some of us actually had to learn English.
Let me explain.
The first incident - and perhaps what I should have taken as a sign of times to come - was earlier this year. I received a reply to a proposal I had laboured over for days.
“This is a really solid base, but could you do a rewrite with a more human touch? It sounds a little like it was written by ChatGPT.”
Human touch. Human touch. I’ll give you human touch, you—
Sorry. The intrusive thoughts were having a moment there. I’m back, I’m back.
Here’s the thing: More and more writers seem to be getting this sort of response, and there is - in my observational opinion - a rather dark and insidious slant to it. Stay with me for a moment, and I’ll get back to that.
Part of the irony is of the flavour that would make our ancestors chuckle. Because the accuser, in their own way, wasn't entirely wrong. My writing does share some DNA with the output of a large language model. We both have a tendency towards structured, balanced sentences. We both have a fondness for transitional phrases to ensure the logical flow is never in doubt. We both deploy the occasional (and now apparently incriminating) hyphen or semi-colon or em-dash to connect related thoughts with a touch more elegance than a simple full stop.
With a calmer mind, I became a little more gracious. The error in their judgment wasn't in the what, but in the why. They had mistaken the origin story.
---
I am a writer. A writer who also happens to be Kenyan. And I have come to this thesis statement: I don't write like ChatGPT. ChatGPT, in its strange, disembodied, globally-sourced way, writes like me. Or, more accurately, it writes like the millions of us who were pushed through a very particular educational and societal pipeline, a pipeline deliberately designed to sandpaper away ambiguity, and forge our thoughts into a very specific, very formal, and very impressive shape.
There’s a growing community (cult?) of self-proclaimed AI detectives, who have designed and detailed what they consider tells, and armed their followers with a checklist of robotic tells. Does a piece of text use words like ‘furthermore’, ‘moreover’, ‘consequently’, ‘otherwise’ or ‘thusly’? Does it build its arguments using perfectly parallel structures, such as the classic “It is not only X, but also Y”? Does it arrange its key points into neat, logical triplets for maximum rhetorical impact?
To these detectives of digital inauthenticity, I say: Friend, welcome to a typical Tuesday in a Kenyan classroom, boardroom, or intra-office Teams chat. The very things you identify as the fingerprints of the machine are, in fact, the fossil records of our education.
---
The bedrock of my writing style was not programmed in Silicon Valley. It was forged in the high-pressure crucible of the Kenya Certificate of Primary Education, or KCPE. For my generation, and the ones that followed, the English Composition paper - and its Kiswahili equivalent, Insha - was not just a test; it was a rite of passage. It was one built up to be a make-or-break moment in life: A forty-minute, high-stakes sprint where your entire future, your admission to a good national high school, and by extension, your life’s trajectory, could pivot on your ability to deploy a rich vocabulary and a sophisticated sentence structure under immense, suffocating pressure.
And that one moment wasn’t an aberration. Every English class and every homework assignment for three years prior (and more, it could be argued) was specifically designed to get the teacher marking your composition to award you a mark as close as possible to the maximum of 40. Scored a 38/40? Beloved, whoever is marking your paper has deemed you worthy of breathing the same air as Malkiat Singh.
It’s a memory that’s hard to write over - the prompt, written in the looping, immaculate cursive of the teacher on the blackboard: “A holiday I will never forget.” Or perhaps it was one of those that demanded that you end the entire composition with, “…and that’s when I woke up and realised it was just a dream.” The topic was almost irrelevant. The real test was the execution.
There were unspoken rules, commandments passed down from teacher to student, year after year. The first commandment? Thou shalt begin with a proverb or a powerful opening statement. “Haste makes waste,” we would write, before launching into a tale about rushing to the market and forgetting the money. The second? Thou shalt demonstrate a wide vocabulary. You didn’t just ‘walk’; you ‘strode purposefully’, ‘trudged wearily’, or ‘ambled nonchalantly’. You didn’t just ‘see’ a thing; you ‘beheld a magnificent spectacle’. Our exercise books were filled with lists of these “wow words,” their synonyms and antonyms drilled into us like multiplication tables.
The third, and perhaps most important commandment, was that of structure. An essay had to be a perfect edifice. The introduction was the foundation, the body was the walls, and the conclusion was the roof, neatly summarising the moral of the story and, if you were clever, circling back to the introductory proverb to create a satisfying, if predictable, loop. We were taught to build our paragraphs around a strong topic sentence. We were taught the sin of the sentence fragment and the virtue of the compound-complex sentence. Our teachers, armed with red pens that bled judgment all over our pages, were our original algorithms, training us on a specific model of "good" writing. Our model compositions, the perfect essays from past students read aloud to the class, were our training data.
And that’s a culture that is carried over into high school, where set books must be memorised, and arguments for or against certain statements must be elaborately made for you to reach and surpass the English literature pass mark. You could literally recite Shakespeare in the middle of the night right before any exam.
---
This style has a history, of course, a history far older than the microchip: It is a direct linguistic descendant of the British Empire. The English we were taught was not the fluid, evolving language of modern-day London or California, filled with slang and convenient abbreviations. It was the Queen's English, the language of the colonial administrator, the missionary, the headmaster. It was the language of the Bible, of Shakespeare, of the law. It was a tool of power, and we were taught to wield it with precision. Mastering its formal cadences, its slightly archaic vocabulary, its rigid grammatical structures, was not just about passing an exam. It was a signal. It was proof that you were educated, that you were civilised, that you were ready to take your place in the order of things.
(I’ve tried to resist it, but I can’t help myself, and perhaps you’ve already picked up on it: See the threes?)
In post-independence Kenya, this language didn't disappear. It simply changed its function. It became the official language, the language of opportunity, the new marker of class and sophistication. The Charles Njonjos and Tom Mboyas of their time used it to stamp their status in society. The ability to speak and write this formal, "correct" English separated the haves from the have-nots. It was the key that unlocked the doors to university, to a corporate job, to a life beyond the village. The educational system, therefore, doubled down on teaching it, preserving it in an almost perfect state, like a museum piece.
And right there is the punchline to this long, historical joke. An “AI”, a large language model, is trained on a vast corpus of text that is overwhelmingly formal. It learns from books published over the last two centuries. It learns from academic papers, from encyclopaedias, from legal documents, from the entire archive of structured human knowledge. It learns to associate intelligence and authority with grammatical precision and logical structure.
The machine, in its quest to sound authoritative, ended up sounding like a KCPE graduate who scored an 'A' in English Composition. It accidentally replicated the linguistic ghost of the British Empire.
---
Now, the world, through its new and profoundly flawed technological lens, looks at the result of our very human, very analogue training and calls it artificial. The insult is sharpened by the very tools used to enforce it. The so-called AI detectors are not neutral arbiters of truth. They are, themselves, products of a specific cultural and technical worldview.
These detectors, as I understand them, often work by measuring two key things: ‘Perplexity’ and ‘burstiness’. Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor." A text filled with such predictable phrases has low perplexity and is deemed "robotic." Burstiness measures the variation in sentence length and structure. Natural human speech and writing are perceived to be ‘bursty’ - a short, punchy sentence, followed by a long, meandering one, then another short one. LLMs, at least in their earlier forms, tended to write with a more uniform sentence length, a monotonous rhythm that lacked this human burstiness.
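The "burstiness" half of that description is easy to make concrete. Here is a toy sketch (my own construction, not how any real detector is implemented) that scores a text by how much its sentence lengths vary relative to their average:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough proxy for 'burstiness': variation in sentence length (in words).

    Real detectors use model-based perplexity and richer features; this
    toy score only captures the 'mix of short and long sentences' idea.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation scaled by the mean.
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog ran off. The sun rose up."
bursty = "Stop. The long afternoon unspooled into a slow amber haze over the valley. Then silence."
print(burstiness(uniform) < burstiness(bursty))  # True
```

By this measure, the evenly drilled "KCPE composition" style scores low, which is exactly the point being made: the trained-in uniformity reads as machine-like to such a metric.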
Now, consider our ‘training’ again. We were taught to be clear, logical, and, in a way, predictable. Our sentence structures were meant to be consistent and balanced. We were explicitly taught to avoid the very "burstiness" that ‘detectors’ now seek as a sign of humanity. A good composition flowed smoothly, each sentence building on the last with impeccable logic. We were, in effect, trained to produce text with low perplexity and low burstiness. We were trained to write in precisely the way that these tools are designed to flag as non-human. The bias is not a bug. It is the entire system.
Recent academic studies have confirmed this, finding that these tools are not only unreliable but are significantly more likely to flag text written by non-native English speakers as AI-generated. (And, again, we’re going to get back to this.) The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.
---
So, when you read my work - when you see our work - what are you really seeing? Are you seeing a robot's soulless prose? Or are you seeing the image of our Standard Eight English teacher, Mrs. Amollo, her voice echoing in our minds - a voice that spoke with the clipped, precise accent of a bygone era - reminding us to connect our paragraphs with a suitable linking phrase? Are you seeing an algorithm's output, or the muscle memory of a thousand handwritten essays, drilled into us until the structure was as natural as breathing?
The question of what makes writing "human" has become dangerously narrow, policed by algorithms that carry the implicit biases of their creators. If humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm, then where does that leave the rest of us? Where does that leave the writer from Lagos, from Mumbai, from Kingston, from right here in Nairobi, who was taught that precision was the highest form of respect for both the language and the reader?
It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.
Before you point your finger and cry "AI!", I ask you to pause. Consider the possibility that what you're seeing isn't a lack of humanity, but a form of humanity you haven't been trained to recognise. You might be looking at the result of a different education, a different history, a different standard.
You might just be looking at a Kenyan, writing. And we’ve been doing it this way for a very long time.
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
This would have been my first question to the parent: I guess he never had similar correspondence with this friend prior to 2023? Otherwise it would be hard to convince me without an explanation for the switch (a transition during formative high school / college years, etc.).
It's not that it doesn't exist in my native language, but I don't remember seeing them very often outside of print books, and I even know a couple typo nerds.
Maybe I'm totally off, and maybe it's the same as double spacing after a '.'. I had not heard of this until I was ~30 and then saw some Americans writing about it.
Wow, you really do under/over estimate some of us :)
If you're using it to write in a programming language, you often actually get something that runs (provided your specifications are good - or your instructions for writing the specifications are specific enough!).
If you're asking for natural language output ... yeah... you need to watch it like a hawk by hand - sure. It'd be nice if there was some way to test-suite natural language writing.
People are unplugging their brains and aren't even aware that their questions cannot be answered by LLMs. I witnessed that with smart and educated people; I can't imagine how bad it's going to be during formative years.
> Your kernel is actually being very polite here. It sees the USB reader, shakes its hand, reads its name tag… and then nothing further happens. That tells us something important. Let’s walk this like a methodical gremlin.
It's so sickly sweet. I hate it.
Some other quotes:
> Let’s sketch a plan that treats your precious network bandwidth like a fragile desert flower and leans on ZFS to become your staging area.
> But before that, a quick philosophical aside: ZFS is a magnificent beast, but it is also picky.
> Ending thought: the database itself is probably tiny compared to your ebooks, and yet the logging machinery went full dragon-hoard. Once you tame binlogs, Booklore should stop trying to cosplay as a backup solution.
> Nice, progress! Login working is half the battle; now we just have to convince the CSS goblins to show up.
> Hyprland on Manjaro is a bit like running a spaceship engine in a treehouse: entirely possible, but the defaults are not tailored for you, so you have to wire a few things yourself.
> The universe has gifted you one of those delightfully cryptic systemd messages: “Failed to enable… already exists.” Despite the ominous tone, this is usually systemd’s way of saying: “Friend, the thing you’re trying to enable is already enabled.”
If you ask an AI to grade an essay, it will grade the essay highest that it wrote itself.
Was this written by AI? Because right there we've got "three adjectives where one will do", and failing your own advice on "avoid being overly verbose"
[1] https://www.merriam-webster.com/grammar/very-unique-and-abso...
> Well, also he will notice in the course of time, as his reading goes on, that the difference between the almost right word and the right word is really a large matter—’tis the difference between the lightning-bug and the lightning.
But also:
> Unconsciously he accustoms himself to writing short sentences as a rule. At times he may indulge himself with a long one, but he will make sure that there are no folds in it, no vaguenesses, no parenthetical interruptions of its view as a whole.
I can see the appeal in, perhaps, technical writing but even there, I feel that there's room to make the prose more colourful.
"God rest ye merry gentlemen" changes in tone and meaning depending on where you put the comma in that sentence.
Open a collegiate dictionary to a series of random pages, checking the first word on each to see if you can give any vague definition of it. A fluent speaker who doesn't read literature will likely manage fewer than a quarter of them. A decent literary vocabulary would cover roughly two-thirds or more, imo.
> Then I went to an Engineering College, and it teaches us to distill everything into it's simpler fundamental components. I like it, and I now want people to be as direct as possible.
I've not quite yet gotten to the "and now I want people to be as direct as possible stage", but occasionally I've had to deal with exceedingly elliptical writing (and speech!) and then -yes- I feel exactly that.
English is my third language, and my first two are Romance languages. Over in Europe and parts of LatAm florid language is prized, as you know. I had to unlearn that stuff.
Never thought of Strunk & White as being distinctly American, but I guess you have a point.
You got the first bit right. Language and clothing accord to fashions.
What counts as gaudy versus grounded, discreet versus disrespectful—this turns on moving cultural values. And those at the top implicitly benefit from this drift, which lets us dismiss as gaudy someone wearing a classic hand-me-down who isn’t clued into a hoodie and jeans being the surfer’s English to Nairobi’s formality.
(Spiced food was held in high regard in ancient Rome and Medieval European courts. Until spices became plentiful. Then the focus shifted "to emphasize ingredients’ natural flavors" [1]. A similar shift happened as post-War America got rich. Canned plenty and fully-stocked pantries made way for farm-to-table freshness and simple seasonings. And now, we're swinging back towards fuller spice cabinets as a mark of global taste.)
[1] https://historyfacts.com/world-history/article/how-did-salt-...
Very much the same; many a US writer's prose is terribly tedious, it comes across just as clinical as their HOA-approved suburban hellscapes. Somebody once told me a writer's job is also to expand language. It wasn't a US citizen.
Yeah, a lot's hiding in the "almost", there. I've said this on this board before, but I have to write a lot of documentation for non-technical users, and the maximally-straightforward stuff doesn't get read, far less remembered. When I mix in some personalization, and a bit of imagination, it gets much better results. The example that most easily springs to mind was something like "if you don't regularly use this system, you can skip the next bit and come back to it when you have to; if you do, then imagine you're a squirrel", and then I named all the variables after nuts, and analogized choices between burying data underground versus storing in a tree. I know typical HN engineers would hate that sort of thing, but you have to know your audience before you can decide what works best.
And I have to say, without the prose and lyrics it would be a read so dry, it'd rival silica gel beads.
It feels to me like in between communication and self-expression there lies a secret third thing. Not only sharing knowledge, but sharing it with joy.
I tend to struggle with art when I can’t tell whether it’s supposed to be funny, but I’m finding it funny (I’ve been very slow to warm up to hip-hop for this reason, and metal remains inaccessible to me because of it). Something clicked on that second approach and I just got that yes, it’s pretty much all supposed to be funny, down to every word, even when it seems serious—until, perhaps, he blind-sides you with something actually deeply affecting and human (I think about the fire-fighting sequence from that book all the time).
Dickens is an all-dessert meal, except sometimes he sneaks a damn delicious steak right in the middle. Like, word-for-word, I’d say he leans harder into humor, by a long shot, than someone like Vonnegut, even. But almost all of it’s dead-pan, and some of it’s the sort of humor you get when someone who knows better does poorly on purpose, in calculated ways. If you ever think you’re laughing at him, not with… I reckon you’re probably wrong.
What’s perhaps most miraculous about this turn-around is that I usually don’t enjoy comedic novels, but once I figured Dickens out, he works for me.
(To your broader point—yeah, agreed that this sucks, good advice for bad writers becoming how most judge all writers has been harmful)
Conveying meaning is the whole problem here. An unexpected word choice is a neon sign saying "This is important!" and it disappoints the reader if it is not.
I’m biased because I am not a very good writer, but I can see why in a book you might want to hint at how someone walked up to someone else to illustrate a point.
When writing articles to inform people, technical docs, or even just letters, don’t use big vocabulary to hint at ideas. Just spell it out literally.
Any other way of writing feels like you are trying to be fancy just for the sake of seeming smart.
We don't really do it intentionally in English, at least to the same degree. But there's still a lot of information coded in our word and grammar choices.
I would not understand the last two sentences. Sidle? Tromp? I don't think I've seen these words enough times for them to register in my mind.
"Strode", I would probably understand after a few seconds of squeezing my brain. I mean, I sort of know "stride", but not as an action someone would take. Rather as the number of bytes a row of pixels takes in a pixel buffer. I would have to extrapolate what the original "daily English" equivalent must have been.
But hey, at least you know I didn't use ChatGPT to conjure that comment.
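For readers who only know that sense of "stride", here is a minimal sketch of the pixel-buffer meaning: the byte distance from the start of one row of pixels to the start of the next. (The 64-byte alignment below is an illustrative assumption, not any particular API's requirement.)

```python
def row_stride(width: int, bytes_per_pixel: int, align: int = 64) -> int:
    """Bytes from the start of one pixel row to the start of the next.

    Hardware and graphics APIs often pad each row out to an alignment
    boundary, so stride >= width * bytes_per_pixel.
    """
    row_bytes = width * bytes_per_pixel
    # Round up to the next multiple of `align`.
    return (row_bytes + align - 1) // align * align

# A 100-pixel-wide RGBA row is 400 bytes; padded to a 64-byte boundary:
print(row_stride(100, 4))  # 448
```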
The context is really necessary.
> The only "stride" I know relates to the gap between heterogeneous elements in a contiguous array
I am also not a native English speaker, but I got to know the verb to "to stride" from The Lord of the Rings: Aragorn is originally introduced under the name "Strider":
> https://en.wikipedia.org/w/index.php?title=Aragorn&oldid=132...
"Aragorn is a fictional character and a protagonist in J. R. R. Tolkien's The Lord of the Rings. Aragorn is a Ranger of the North, first introduced with the name Strider and later revealed to be the heir of Isildur, an ancient King of Arnor and Gondor."
(native english speaker who was a bookworm as a kid; I admittedly had to ask gemini to recall the general phrase that I had in mind)
> “the American MFA system, spearheaded by the infamous Iowa Writers’ Workshop” as a “content farm” first designed to optimize for “the spread of anti-Communist propaganda through highbrow literature.” Its algorithm: “More Hemingway, less Dos Passos.”
https://www.openculture.com/2018/12/cia-helped-shaped-americ...
What you call "flowery" is actually "expressive". Different words, although related, convey subtle differences in meaning. That's what literature (especially poetry) is about.
I would add that our words define our world: a richer vocabulary leads to more articulated experiences.
So, writing "flowery" sentences can actually denote someone capable of conveying the rich gradient of experience into words. I consider it as a plus.
Various registers representing a huge proportion of US English we see and hear day-to-day are terrible. American “Business English” is notably bad, and is marked by this sort of fake-fancy language. The dialect our cops use is perhaps even worse, but at least most of us don’t have to read or hear it as much as the business variety.
Many all-time great writers, Hemingway being the leading exemplar, completely disagree.
I could say "Trump's unpredictable, seemingly irrational policy choices have alienated our allies, undermined trust in public institutions, and harmed the US economy"
Or I could say "The economy sucks and it's Trump's fault because he's dumb and an asshole"
They both communicate the same broad idea - but which communicates it better? It depends on the audience.
That would probably mess up any screen reader, but it also didn't work on a regular Firefox :)
In Menlo font (Chrome on Mac's default monospace font, used for HN comments) em-dash(—) and en-dash (–) use the same glyph, though.
Without shift it's an en dash (–), with shift an em dash (—). Default X11 mapping for a German keyboard layout, zero config of mine.
LPT: on Android, pressing and holding a punctuation key on the on-screen keyboard reveals additional variations of it — like the em-dash, for example.
This is the №1 feature I expect everyone to know about (and explore!), but, alas, it doesn't appear to be the case even on Hackernews¹.
On Windows, pressing Win+. pops up an on-screen character keyboard with all the symbols one may need (including math symbols and emojis).
MacOS has a similar functionality IIRC.
And let's not forget that software like MS Word automatically correct dashes to em-dashes when appropriate — and some people may simply prefer typing text in a word processor and copy-pasting from it.
Anyway...
_____
¹ For example, holding "1" yields the superscript version, enabling one to format footnotes properly with less effort than using references in brackets², yet few people choose to do that.
² E.g. [2]
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
ChatGPT speaking African English was mostly just 3.5. 4o speaks like a TikTok user from LA. 5 seems kind of generic.
You can't stop it from doing the "if you like I can <three different dumb followup ideas>" thing in every reply either.
But yes the current commercial ones are somewhat controllable, much of the time.
The tests were even worse. They exercised the code, tossed the result, then essentially asserted that true was equal to true.
When I told it what was wrong and how to fix it, it instead introduced some superfluous public properties and a few new defects without correcting the original mistake.
The only code I would trust today's agents with is so simple I don't want or need an agent to write it.
Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica, Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta, Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius, Eswatini (Swaziland).
If what you're saying is right then you'd have to admit Jamaican and Barbados English are just the same as Kenyan or Nigerian... but they're not. They're radically different because they're radically different regions. Uganda and Kenya being similar is what I would expect, but not necessarily Nigeria...
Here are some random examples from one of the (at least) half-dozen LLM-co-written posts that rose high on the front page over the weekend:
https://blog.canoozie.net/disks-lie-building-a-wal-that-actu...
You write a record to disk before applying it to your in-memory state. If you crash, you replay the log and recover. Done. Except your disk is lying to you.
This is why people who've lost data in production are paranoid about durability. And rightfully so.
Why this matters: Hardware bit flips happen. Disk firmware corrupts data. Memory busses misbehave. And here's the kicker: None of these trigger an error flag.
Together, they mean: "I know this is slower. I also know I actually care about durability."
This creates an ordering guarantee without context switches. Both writes complete before we return control to the application. No race conditions. No reordering.
... I only got about halfway through. This is just phrasing, forget about the clickbaity noun-phrase subheads or random boldface.
None of these are representative (I hope!) of the kind of "sophisticated" writing meant to reinforce class distinctions or whatever. It's just blech LinkedIn-speak.
You can check both in ChatGPT settings.
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly, they will never tell you "you told me to do X but I don't think I should", they'll just do it even if it's unnecessary.
If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.
Don't think that I don't hold myself to the same standards I am pushing here, verbosity has always been a problem for me, and AI verbosity is a good and necessary reminder for me to curb it.
You can always choose uncommon, more descriptive words.
In Spanish you could say "repare algo" ("I fixed") or "parapetee algo" ("I jury-rigged"), and plenty of people would not know off the cuff what the second one means.
People either know, look it up, or figure it out via context.
And that's what I mean: modern writing on the internet (though rapidly leaking into 'real life') has become highly infantilized. We assume everybody reading is an idiot, and speak accordingly, which in turn infantilizes and 'idiotizes' our own speech and simply makes it far more bland and less expressive.
Interestingly, this is not ubiquitous. In other cultures, including on the internet, there remains much more use of irony, and more general nuance in speech. I suspect a big part of the death of English fluency was driven by political correctness - zomg what if somebody interprets what I'm saying the wrong way!?!
[1] - https://www.gutenberg.org/files/2895/2895-h/2895-h.htm
The CIA's problem with Dos Passos was that he was left-wing.
It's "flowery" when you dislike it and "expressive" when you like it.
It’s “overcomplicated” when you don’t get it and “nuanced” when you do.
It’s “pretentious” when it annoys you and “ambitious” when it excites you.
It’s “loud” when you hate it and “energetic” when you love it.
Just like TFA, different people write differently and different people have different opinions.
Or just find the appropriate 'simple' word, which is very often available.
Ugh. They say different things. The first describes the policy mechanisms and impacts. The second says nothing about those things; it describes your emotions.
The biggest communication problem I see now is people, especially on the Internet, including on HN, use the latter for the former purpose and say nothing.
Trump was much closer to saying “The immigrants are taking your jobs.” Well, to a labor market analyst, that’s not remotely the same thing at all as saying “US employers and political donors are colluding to confiscate your most valuable rights without market-based compensation, while denigrating you as lazy and stupid, and hiding behind a veneer of excellence and xenophilia as they economically undermine your families.” But it’s much easier, isn’t it?
People shouldn't use "strides" just because "walked" is boring. They should use "strides" when it's meaningful in the context of the story.
Spelling it out literally is precisely what the GP is doing in each of the example sentences — literally saying what the subject is doing, and with the precision of choosing a single word better to convey not only the mere fact of bipedal locomotion, but also the WAY the person walked, with what pace, attitude, and feeling.
This carries MORE information in the exact same number of words. It is the most literal way to spell it out.
A big part of good writing is how to convey more meaning without more words.
Bad writing would be to add more clauses or sentences to say that our subject was confidently striding, conspiratorially sidling, or angrily tromping, and adding much more of those sentences and phrases soon gets tiresome for the reader. Better writing carries the heavier load in the same size sentence by using better word choice, metaphor, etc. (and doing it without going too far the other way and making the writing unintelligibly dense).
Think of "spelling it out literally" like the thousand-line IF statements, whereas good writing uses a more concise function to produce the desired output.
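The metaphor above can be made concrete with a toy sketch (the function names and the mood-to-verb mapping are mine, drawn from the stride/sidle/tromp examples earlier in the thread):

```python
# Toy illustration of the metaphor: "spelling it out" vs. a concise form.
# Verbose: one branch per case, like the thousand-line IF statement.
def describe_walk_verbose(mood: str) -> str:
    if mood == "confident":
        return "strode"
    elif mood == "timid":
        return "sidled"
    elif mood == "angry":
        return "tromped"
    else:
        return "walked"

# Concise: a single lookup carries the same information in far less space,
# like the single well-chosen word.
WALK_VERBS = {"confident": "strode", "timid": "sidled", "angry": "tromped"}

def describe_walk(mood: str) -> str:
    return WALK_VERBS.get(mood, "walked")

print(describe_walk("confident"))  # → strode
```

Both produce identical output; the second simply carries the same meaning with less machinery, which is the point being made about word choice.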
Bad writers, of course, pick a word to make them seem smarter (which, of course, often fails). That's what the OP was complaining about: using a fancy word just to impress.
But "stride" is not just a fancy version of "walk". When a person strides they are taking big steps; their head is held high, and they are confident in who they are and where they're going.
"Sidle" is the opposite. A person who sidles is timid and meek; they walk slowly, or maybe sideways, hoping that no one will notice them.
And "tromp," of course, sounds like something heavy and dour. A person who tromps stamps their feet with every step; you hear them coming. They are angry or maybe clumsy and graceless.
"He kick-flipped up to Helen and asked, 'What are you doing?'"
[edit] electric-slid! Pirouetted! Somersaulted!
Ugh, and journalists often slip into cop dialect in their articles. It's disgustingly propagandistic.
Notice that cops never kill or shoot someone, even in situations where they're blatantly in the wrong. It's always, "service weapon was discharged" or "subject was fired upon." Make sure to throw a couple "proceeded to's" in there for good measure.
my significant other loves the "real life mormon housewives" and "lovingly blind" reality shows, and when they use business english (a weird thing to do when talking about relationships, but hey, what do I know I'm an engineer) it's a tell that they're lying.
I think it depends on what models you are using and what you're asking them to do, and whether that's actually inside their technical abilities. There are not always good manuals for this.
My last experience: I asked Claude to code-read for me, and it dug out some really obscure bugs in old Siemens Structured Text source code.
A friend's last experience: they had an agent write an entire Christmas-themed adventure game from scratch (that ran perfectly).
I am, it's on the default German X11 keyboard layout. Same for · × ÷ …
And that's without going to the trusty compose key (Caps Lock for me)… wonders like ½ and H₂O await!
Angled quotes I use only on systems on which I've configured a compose key, or Android when I'm typing Chinese.
I don't like any kind of auto-replacement with physical keyboards, so I turn off "smart quotes" on macOS.
Anyway I use characters like that all the time, but it's never auto-replace.
I don't think you're able to set either the developer or system prompt in ChatGPT; you're going to have to use the OpenAI API (or something else) to set those. Once you can set text there, you can better steer the responses.
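A minimal sketch of what this looks like in practice (assuming the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and prompt text below are illustrative, not prescriptive):

```python
# The ChatGPT web UI doesn't expose the system/developer prompt directly,
# but the API accepts it as an explicit message role.

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble a messages list with an explicit system role."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Answer tersely, in plain prose. No bullet points.",
    "Summarize why system prompts steer response style.",
)

# Uncomment to actually call the API:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(resp.choices[0].message.content)
print(messages[0]["role"])  # → system
```

The key difference from the web UI is that the system message here is verbatim and yours alone, rather than merged into whatever scaffolding the product injects around it.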
I've had a "trigger finger" for Alt+0151 on Windows since 2010 at least.
They're radically different predominantly at the street level and in everyday usage, but the kind of professional English of journalists, academics, and writers that the author of the article was surrounded by is very recognizable.
You can tell an American from an Australian on the beach but in a journal or article in a paper of record that's much more difficult. Higher ed English with its roots in a classical British education you can find all over the globe.
But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.
If only to say "don't be an idiot", "pick higher ground". Or even just as a rubber duck!
"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically he didn’t know much about newborns and relied on ChatGPT for answers. It was a self-deprecating quip on a late-night show, like every other guest makes, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.
Given all of the replies on this thread, HN is apparently willing to stretch the truth if Sam Altman can be put under any negative light.
https://www.benzinga.com/markets/tech/25/12/49323477/openais...
I just checked settings, apparently I had it set to "nerdy," that might be why. I've just changed it to "efficient," hopefully that'll help.
English article:
https://www.heise.de/en/news/38C3-AI-tools-must-be-evaluated...
If you speak German, here is their talk from 38c3: https://media.ccc.de/v/38c3-chatbots-im-schulunterricht
You probably remember your English teacher saying "the word 'said' is boring, use something different." Yes, find something else if it makes more sense. But "said" is a perfectly good word.
But a different scene might be better with the pedestrian "walk". Imagine that the main character enters the woman's office with an ostentatious bouquet of flowers. In that scene, maybe the emphasis is on the flowers or on the reaction of the woman or her co-workers, and a simple "he walked" might work best.
Brevity is the soul of good communication.
But the voice in my head does not.
Pedantry is what makes me better.
If you mean 'communicate information', no. Communication, including written, is for emotion, social expression, and other things before information.
Even information requires those other things to be retained well.
Image: https://media.snopes.com/2016/09/looting.jpg
Snopes: https://www.snopes.com/fact-check/hurricane-katrina-looters/
Every single sentence is way too complicated, vague, deferring, or hand-wavy, and I can't know if they're being honest or just bullshitting me.
Half the terms are used incorrectly or are exaggerations when I probe: "Coupled" means "the code is confusing to me". "Monolith" means "the architecture is complicated to me". "Refactoring" means "adjusting the style". "We need a new abstraction" means "we need a new idea".
The team already had some issues with misunderstandings because of the above.
It's someone so eager to be part of the "big boys club" and trying to push their way to the top.
It's also infuriating.
The stakes are too high and the amount you’re allowed to get wrong is so low. Having been through the infant-wringer myself yeah some people fret over things that aren’t that big of a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…
https://www.startupbell.net/post/sam-altman-told-investors-b...
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
https://futurism.com/artificial-intelligence/sam-altman-cari...
A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would clarify that the sentence is already generally high-quality, then ask probing questions about any perceived issues and the context in and ends to which it must become "better".
I think if you want to sound less like an AI, you should cut cut cut, and maybe write a bit more like speech, with sort of slangish structures etc, people won't doubt you anymore.
Good luck!
Take irony / sarcasm / satire. They're pretty dead compared to what they used to be. I can recall a time when just about everything had subtext, but now you kind of have to play it straight. You can't respond to a racist with sarcasm because anyone listening will just think you agree with them.
It's Poe's law across the board. World news brought to you by Not The Onion(tm).
Very true. Take this passage:
‘I am called Strider,’ he said in a low voice. ‘I am very pleased to meet you, Master – Underhill, if old Butterbur got your name right.’
In an early draft Tolkien used a different word as the character was originally a hobbit, rather than a long-legged Ranger:
‘I’m Trotter,’ he said in a low voice. ‘I am very pleased to meet you, Mr — Hill, if old Barnabas had your name right?’
No, don't think so. To compensate, I probably missed the article about the obfuscation of kindle ebooks...
How much they follow it depends. Sometimes they know you wrote it and sometimes they don't. Claude in particular likes to complain to me its system prompt is poorly written, which it is.
The em-dash was really popular with professional writers.
Go read some Kenyan news. It's very obvious.
For whatever combination of prompt and context, ChatGPT 5.2 did some writing for me the other day that didn't have any of the surface style I find so abrasive. But it could still only express its purported insights in the same "A & ~B" structure and other GPT-isms beneath the surface. Truly effective writers are adept with a much broader set of rhetorical and structural tools.
> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.
To be fair, that is a relatable anxiety. But I can't imagine Altman having the same difficulties as normal parents. He can easily pay for round the clock childcare including during night-times, weekends, mealtimes, and sickness. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old whilst feeling like absolute hell himself and having no other recourse.
At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.
It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.
In the former cases, one may not know specifically how to do them but can imagine figuring those out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might’ve done it, you can imagine figuring it out. In the latter cases, it means you have zero idea where to start, you can’t even imagine how other people do it, hence you don’t know how anyone does do it.
The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.
I understand there are things a typical LLM can do and things that it cannot, this is mostly just because I figured it couldn’t do it and I just wanted to see what would happen. But the average person is not really given much information on the constraints and all of these companies are promising the moon with these tools.
Short version: it definitely did not have more common sense or information than a human, and we all know it would have given this person a very confident answer about conditions in the area that was likely incorrect. Definitely incorrect if it's based off a photo.
In my experience, when it has to crawl the Internet it's particularly flaky. The other day I queried which awards went to whom at The Game Awards. Three different models got it wrong; all of them omitted at least two categories. You could throw a rock at a search engine and find 80 lists ready to go.
I cannot believe someone will wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
Seems reasonable to me. If it can't answer that it doesn't work well enough.
Raising a kid is really very natural and instinctive, it's just like how to make it sleep, what to feed it when, and how to wash it. I felt no terror myself and just read my book or asked my parents when I had some stupid doubt.
They feel like slightly more noisy cats, until they can talk. Then they become little devils you need to tame back to virtue.
To counter that kind of nonsense is why we have phrases like "X is bad and you should feel bad for supporting it", and "X is bad, actually", as they don't beat around the bush and simply make one's moral statements clear. Maybe I should have said "repetitive, unpleasant to read, and just bad" to make this usage clearer, but, hey, one can only spend so much time crafting quick comments on HN.
An author's word choices can certainly fail to convey intended meaning, or convey it too slowly because they are too obscure or are a mismatch for the intended audience — that is just falling off the other side of the good writing tightrope.
A technical paper is an example where the audience expects to see proper technical names and terms of art. Those terms will slow down a general reader, who will be annoyed by the "jargon", but it would annoy every academic or professional if the "jargon" were edited out for less precise and more everyday words. And vice versa for the same topic published in a general interest magazine.
So, an important question is whether you are part of the intended audience.
You absolutely can, if you are actually dealing with people listening, because sarcasm is signalled with (among other things) tone (the other things include the listener's contextual knowledge of the speaker).
You can't do it online, in text, where the audience is mostly strangers who would have to actively dig into your history to get any contextual sense of you as a speaker, because text doesn't carry tone, and the other cues are missing, too.
And by “you can’t”, I mean “you absolutely can, but you have to be aware of the limitations of the medium and take care to use the available tools to substitute for the missing signalling channels”.
That's not true, which field do you believe this to be? Because all of the fields I currently see in ChatGPT do have an effect on your conversations, but they're not just raw injections into system/developer prompts, it's something else.
Try using the API with proper system/developer prompts, then copy-paste that exact same thing into ChatGPT's "personalization settings" and try to have the same conversation, and you'll get direct evidence that it isn't actually the system prompts, but they're injected somewhere into the conversation.
>Clearly, people did it for a long time, no problem.
In fact, it means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot": what he means is that while it's not imaginable to him, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.
What the gp did is the equivalent of someone saying "I don't believe this, but XYZ" and quoting them as simply saying they believe XYZ. People are eating it up though because it's a dig at someone they don't like.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives - what if it tells them to feed their baby fresh honey on some homemade concoction (botulism risk)?
People googled this stuff before, but a basic search doesn't argue with you about how right it is, or consistently feed you emotionally framed bad info in the same fashion.
In a reversal of the aphorism; those were more complex times. I miss them.
The bulk of that was the History of Middle Earth of which a few volumes cover LoTR.
https://en.wikipedia.org/wiki/The_History_of_The_Lord_of_the...
Saying “no no, he didn’t mean everyone, he was only talking about himself” is not meaningfully better, he’s still encouraging everyone to do what he does and use ChatGPT to obsess about their newborn. It is enough of a representation of his own cluelessness (or greed, take your pick) to warrant criticism.
I distinctly remember both the invention of q-anon and the idea of Trump as a presidential candidate happening on 4chan as a we're-all-in-on-it joke, until true believers started showing up and thinking we believed too. Not a joke anymore...
Surely you aren't taking a government at its word on a politically charged case? Need we trot out the Chicago 7 again?
4chan was created in 2003. Trump's first bid for the Presidency was an attempt at the Reform Party nomination dropped early in the primary season—in 2000, the one cycle when that party had access to federal matching funds but wasn't effectively a vehicle for H. Ross Perot. Another Trump bid was a recurring topic of discussion in serious, if speculative, contexts ever since (and, for that matter, the idea of a Trump presidential run had been even before the first bid, back to the 1980s, as I recall.) It certainly is not an idea that first emerged as a 4chan joke.
That said, of course Altman is being cynical about this. He's just marketing his product, ChatGPT. I don't believe for a minute he really outsources his baby's well-being to an LLM.
Also, thank you for encouraging me to read the Wikipedia article https://en.wikipedia.org/wiki/2000_Reform_Party_presidential....
And if you’re saying all of this because you agree with them and their actions, at least have the courage to state you support terrorism directly.