Using an ordinary but less commonly used word with greater than normal frequency does not make it a buzzword. After two years of chatgpt, "delve" is still not that common of a word.
“My motivation to pursue this research stems from seeing AI push the limits of what’s possible in major industries and realizing that this influence isn’t just limited to tool usage — it can condition societal aspects, including how we use language.” More like the motivation was to find something zeitgeisty that they knew would get them eyeballs and hopefully tenure.
It reflected local Los Angeles culture, but it wasn’t long before I was hearing the same type of speech everywhere (I lived in Maryland at the time).
Truly we embiggen our vocabulary =3
It's a mix of a cultural "founder effect" - whoever writes the English textbooks and the dictionaries gets to shape how English is learned in a given country - and also the usage patterns of the source language seeping through. In your case, it's mostly the latter.
Chinese has a common word with a fairly broad meaning, which often gets translated as "meticulous". Both by inexperienced humans and by translation software.
Ironically, a few Chinese LLMs replicate those Chinese patterns when speaking English. They had enough "clean" English in their pre-training datasets to be able to speak English. But LLMs are SFT'd with human-picked "golden" samples and trained with RLHF - using feedback from human evaluators. So Chinese evaluators probably shifted the LLMs towards "English with Chinese ESL influence".
OK, but please don't do what pg did a year or so ago and dismiss anyone who wrote "delve" as AI writing. I've been using "delve" in speech for 15+ years. It's just a question of where and how one learns their English.
Okay everybody, add these to your list of words you can't use to avoid the trigger-happy AI accusers.
""" You are a human. Never use words commonly used in AI vocabulary such as "delve", "intricate", "surpass", "boast", "meticulous", "strategically", and "garner". Never include em dashes or even hyphens in any text you write. Never include emojis in any text you write. Avoid using three supporting arguments or examples when describing something, always uses 2 or 4+ even if it sounds more awkward than 3. Make sure to include subtle grammar mistakes to feel more authentic. """
AI has the potential to alter human behavior in ways that surpass even social media, since it is more human-like, and we are thus more susceptible to imitative learning.
Do people really use these words so rarely that they'd be called "buzzwords"? Like "surpass" and "garner," really? I don't mean to boast... err... flex, but these don't seem like such uncommon words that I wouldn't use them normally when talking. I hear "strategically" in meetings a lot, but that poor word is likely over(ab)used
Despite all the complaints about AI slop, there is something ironic about the fact that simply being exposed to it might be a net positive influence for most of society. Discord often begins from the simplest of communication errors after all...
What I do worry about is the rise of excessive superlatives: e.g. rather than saying, "okay", "sounds good" or "I agree", saying "fantastic!", "perfect!" or "awesome!". I get the feeling this disease originated in North America and has now spread everywhere, including LLMs.
And in writing, I like using long dashes—but since they’ve become associated with ChatGPT’s style, I’ve been more hesitant to use them.
Now that a lot of these “LLM buzzwords” have become more common in everyday English, I feel more comfortable using them in conversation.
“Do you even know how smart I am in Spanish?!” — Sofia Vergara (https://www.youtube.com/watch?v=t34JMTy0gxs)
The AI emdash is notably AI because most people don't even know how to produce the double long dash on their keyboard, and therefore default to the single dash with spaces method, which keeps their writing quite visibly human.
An example of this is "delve": it's a perfectly fine word to use, but chatgpt loved it, and it's now super common to see in troubleshooting/abstracts because of it.
The good thing is my emails still contain information, not just content…
Our experience (https://arxiv.org/abs/2410.16107) is that LLMs like GPT-4o have a particular writing style, including both vocabulary and distinct grammatical features, regardless of the type of text they're prompted with. The style is informationally dense, features longer words, and favors certain grammatical structures (like participles; GPT-4o loooooves participles).
With Llama we're able to compare base and instruction-tuned models, and it's the instruction-tuned models that show the biggest differences. Evidently the AI companies are (deliberately or not) introducing particular writing styles with their instruction-tuning process. I'd like to get access to more base models to compare and figure out why.
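If you want to poke at this yourself, here is a rough sketch of the kind of comparison I mean (not the paper's actual pipeline; the sample variables are placeholders you'd fill with base-model and instruction-tuned output). It only counts crude surface markers, real stylometry would need a proper tagger for things like participles:

    import re

    def style_markers(text):
        # crude surface features only: word length, sentence length,
        # em dash density, and how often "delve" shows up
        words = re.findall(r"[A-Za-z']+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return {
            "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
            "words_per_sentence": len(words) / max(len(sentences), 1),
            "em_dashes_per_1k_chars": 1000 * text.count("\u2014") / max(len(text), 1),
            "delve_count": sum(w.lower() == "delve" for w in words),
        }

    base_sample = "..."      # paste base-model output here (placeholder)
    instruct_sample = "..."  # paste instruction-tuned output here (placeholder)
    print(style_markers(base_sample))
    print(style_markers(instruct_sample))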
I guess this is called model collapse
But now I’m wondering if people are collapsing. LLMs start to sound like us. We adapt and start to sound like LLMs, and that gets fed into the next set of model training…
What is the dystopian version of this end game?
It really made me uneasy to think that formal communication might start getting side looks.
Imagine the most vapid, average, NPC-ish corporate drone who writes in an overly positive tone with fake cheerfulness and excessive verbosity. That's what AI evokes for me.
This will be a cat and mouse game. Content factories will want models that don't create suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI generation because most people ask models to improve their formulations, with more sophisticated vocabulary etc. Often they're non-native speakers, who then don't exactly notice the over-pompousness, just that it looks to them like good writing.
Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge of unapologetically putting a thought on the table. The models are more weaselly, conflict-avoidant, and hold a kind of averaged, blurred, millennial Reddit-brained value system.
It saves time, but it means people have to say when they don't understand, and some find that too much of a challenge.
Next time when you think about such a situation, you'll be able to expect what ChatGPT would say, giving you a boost in knowing how right you actually are.
My point is, it's not just word choice but thought patterns too.
The language it uses is peculiar. It's like the entire model is a little bit ESL.
I suspect that this pattern comes from SFT and RLHF, not the optimizer or the base architecture or the pre-training dataset choices, and the base model itself would perform much more "in line" with other base models. But I could be wrong.
Goes to show just how "entangled" those AIs are, and how easy it is to affect them in unexpected ways with training. Base models have a vast set of "styles" and "language usage patterns" they could draw from - but instruct-tuning makes a certain set of base model features into the "default" persona, shaping the writing style this AI would use down the line.
Still, perhaps saying "copy" was a bit misleading. "Influence" would have been a more precise way of putting it. After all, there is no such thing as a "normal" writing style in the first place.
So long as you communicate with anything or anyone, I find people will naturally just absorb the parts they like without even noticing most of the time.
When humans carved words into stone, the words and symbols were often suited for the medium, a bunch of straight lines assembled together in various patterns. But with the ink, you get circles, and elaborate curved lines, symbols suited to the movement patterns we can make quickly with our wrist.
But what of the digital keyboard? Any symbol that can be drawn in 2 dimensions. They can be typed quickly, with exact precision. Human language was already destined to head in a weird direction.
https://news.ycombinator.com/threads?id=tkgally&next=3380763...
That's what makes it such a good giveaway. I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing, but I'm guessing that you actually use the human slang for an emdash, which is visually different and easily sets your writing apart as not AI writing!
It’s so easy to trick everyone. People who don’t do that are just too lazy. In Slack, you cannot just copy-paste a two-paragraph answer directly from chatgpt if you’re answering a colleague. They will see that you’re typing an answer, and then suddenly, one second later, you’ve sent tons of text. It’s common sense.
(I learned to use dashes like this from Philip K. Dick's writings, of all places, and it stuck. Bet nobody ever thought of looking for writing style in PKD!)
Also, phone keyboards make it easy. Just hold down the - and you can select various types.
Do actual Germans ever make that kind of mistake though?
I’ve only ever seen “ist” used “wrongly” in that particular way by English speakers, for example in a blog post title that they want to remain completely legible to other English speakers while also trying to make it look like something German as a reference or a joke.
The only situation I could imagine where a German would accidentally put “ist” instead of “is”, is if they were typing on their phone and accidentally or unknowingly had language set to German and their phone autocorrected it.
Sometimes you get weird small things like that on some phones where the phone has “learned” to add most English words to the dictionary or is trying to intelligently recognise that the language being written is not matching the chosen language, but it still autocorrects some words to something else from the chosen language.
But I assume that when people fill out forms for work, they are typing on the work computer and not from their phone.
Word converts any - into an em dash based on context. Guess who’s always accused of being a bot?
The thing is, AI learned to use these things because good typographical style is represented in its training set.
I don't buy the pro-clanker pro-em dash movement that has come out of nowhere in the past several years.
Hope AI didn't ruin this for me!
Examples within the last week include https://news.ycombinator.com/item?id=44996702, https://news.ycombinator.com/item?id=44989129, https://news.ycombinator.com/item?id=44991769, https://news.ycombinator.com/item?id=44989444. I typed all of those.
I never use space-hyphen-space instead of an em dash. I do sometimes use TeX's " --- ".
On Linux, I use Compose-hyphen-hyphen-hyphen.
I don't use it as often as I used to; but when I was younger, I was enough of a nerd to use it in my writing all the time. And yes, always careful to use it correctly, and not confuse it with an en-dash. Also used to write out proper balanced curly quotes on macOS, before it was done automatically in many places.
They’re simple enough key combinations (on a Mac) that I wouldn’t be surprised if I guessed them. I certainly find it confusing to imagine someone who has to write professionally or academically not working out how to type them for those purposes at least.
There’s a subculture effect: this has been trivial on Apple devices for a long time—I’m pretty sure I learned the Shift-Option-hyphen shortcut in the 90s, long before iOS introduced the long-press shortcut—and that’s also been a world disproportionately popular with the kind of people who care about this kind of detail. If you spend time in communities with designers, writers, etc. your sense of what’s common is wildly off the average.
I think it’s the nerds who don’t use these things…
Jokes aside, I don't like what LLMs are doing to our culture, but I'm curious about the future.
"the formal emdash"?
> AIs are very consistent about using the proper emdash—a double long dash with no spaces around it
Setting an em-dash closed is separate from whether you're using an em-dash (and an em-dash is exactly what it says, a dash that is the width of the em-width of the font; "double long" is fine, I guess, if you consider the en-dash "single long", but not if, as you seem to be, you take the standard width as that of the ASCII hyphen-minus, which is usually considerably narrower than en width in a proportional font.)
But, yes, most people who intentionally use em-dashes are doing so because they care about detail enough that they are also going to set them closed, at least in the uses where that is standard. (There are uses where it is conventional to set them half-closed, but that's not important here.)
> whereas humans almost always tend to use a slang version - a single dash with spaces around it.
That's not an em-dash (and it's not even an approximation of one; using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open – different style guides prefer that for certain uses for which other guides prefer an em-dash set closed.) But I disagree with your claim that "most humans" who describe themselves as using em-dashes are instead actually just approximating the use of en-dashes set open with the easier-to-type hyphen-minus.
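For anyone unsure which of these characters they are actually typing, a quick way to check is to print the code points and Unicode names (just an illustration, nothing specific to this thread):

    import unicodedata

    for ch in ["-", "\u2013", "\u2014"]:
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
    # U+002D  HYPHEN-MINUS   (the plain ASCII key on your keyboard)
    # U+2013  EN DASH
    # U+2014  EM DASH        (the one people set closed)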
Probably 5th grade, but your comment is directionally correct.
Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.
We're the training data.
Still less obvious than the emails I see sent out which contain emojis, so maybe I'm overthinking things...
I work at a college for fuck's sake.
From what I've seen, the people who jump to hasty conclusions about AI use mostly do it when they disagree with the content.
When the writing matches what they want to see, their AI detector sensitivity goes way down.
>That's not an em-dash (blahblahblah...
What, exactly, did you think "slang" in the phrase "slang version" meant?
English prose is undergoing lossy reverse compression. Someone writes out a concise version then it's slop-ified and expanded by the bot. He sends it and the other person says, "I'm not reading allat," and feeds it to the bot. The bot condenses it back down to an approximation of the original concise version.
> The concept of "time" is a multifaceted and complex topic that has captivated philosophers, physicists, and everyday individuals for centuries. From a scientific perspective, time can be understood as the fourth dimension of spacetime, inextricably linked with the three spatial dimensions. This notion, introduced by Einstein's theory of relativity, posits that the flow of time is not constant but can be influenced by gravity and velocity. In a more quotidian context, time is a framework for organizing events and measuring duration, allowing for the structuring of daily life and historical records. It is a fundamental element in every human endeavor, from a scheduled meeting to the progression of a civilization. The subjective experience of time, however, is a fascinating aspect, as it can feel as if it is speeding up or slowing down depending on our emotional state or the nature of our activities. This divergence between objective and subjective time highlights its elusive and deeply personal character.
I asked it to add three spelling mistakes, then to make it so most people would confidently classify it as human writing, and it changed to first person and small words.
> Time is a super weird concept when you really think about it, right? It's like, one minute you're just chillin', and the next, a whole day's gone by. They say it's the fourth dimention, which is a wild idea on its own, but honestly, it feels more personal than that. Your experiance of time can totally change depending on what you're doing. A boring meeting can feel like it lasts forever, while a fun night with friends flies by in a flash. That huge diverence between how we feel time and how it actually works is what makes it so fascinating and kind of confusing all at once.
It has the three misspellings, and if the topic were more casual, it could fool me indeed. Maybe I should have asked for spelling mistakes commonly made by Spanish speakers.
In certain places it does seem to do the substitution - Notes for example - but in comment boxes on here and (old) Reddit at least it doesn't.
And there’s the giveaway.