In a work context, of course, things are a bit different: I want to move the project forward and not jeopardize my future paychecks. Authenticity tends to take a back seat there. However, I’d be more concerned about inefficiency. Is it really necessary to run every piece of communication through ChatGPT to refine the wording? Are you sure nothing gets lost in the process? Doesn’t that end up wasting a lot of work time without adding any real value?
And on top of that, it leads to alienation and frustration. If you talk to me as if you were an LLM, don’t be surprised if I talk to you as if you were an LLM.
I guess everyone using LLMs for text ends up similar to that. If everyone uses the same LLM style, it's hard to understand where the other person is coming from. This is not a problem for technical and precise communication, though (the choice of LLMs in that context has other risks).
It is also strictly not an LLM capability problem: they can mimic or retain the original style and just "polish" with enough hints, but that takes time and investment, and people take the path of least resistance. So we all end up with similar text full of typical AI-isms.
There are other reasons to dislike LLM text like padding and effort asymmetry that have been discussed here enough.
The only thing is that my anecdata contradicts it. My AI-cleaned-up writing seems to fare much better, and this seems to be true across all channels. To be clear, I do not mean AI-generated, just AI-cleaned: spelling, punctuation, and grammar mainly, with the occasional word-order change.
In the end it's about getting the message across first and "get to know me" second, and proper, clear expression helps a lot with the first.
It was about how people would get a thing (a robot?) that would repeat whatever they said but in a more fancy way (or something along those lines), to make them sound smarter. Then the people would start depending on these robots to communicate at all, to the point their speech degrades and they start making unintelligible noises that the robots still translate into actual speech.
EDIT: Found it, from 2014: https://smbc-comics.com/index.php?id=3576
When in the middle of a group text-chat, someone replied with AI-generated blather. It was dead-clear with the usual sterile vocab, structured buzzphrases, and other LLM "tells".
I politely called him out and asked him to use his own voice. In public he insisted that it was his voice and that he used AI only for "formatting". But in private he admitted that he had created a "gem to assist with multicultural comms" that generated it. He claims he did it because "not everyone can take the native American English well". A load of bovine manure. I nicely told him to cut this crap and just write as it comes to him. (Basic spell- and grammar-checking is fine.)
But this was the first year I saw it in performance review write-ups which frankly was jarring. Here is feedback supposedly 1:1 that massively affects this person's life and their perception of "worth" so to speak...and it's just AI.
Notably, it was split by geography: EU countries closest to organic, India a slop trainwreck, the US in the middle.
Sorta made me conclude "ok i guess that's the end of performance reviews that vaguely mean anything & actually get read"
I think this is the opposite of how most people tend to use LLMs, and I actually think my way is the "better" way. My issue has never been the act of writing well, or clearly expressing what I mean... it has been the inertia of putting words on a page at all.
(and an LLM had nothing to do with this comment :P)
This is not always true. Once there was an online reaction to short content that made people treat "long-form" content as desirable entirely due to its length. I rather like reading books and the New Yorker's fiction section when I still subscribed, but much of this "long-form" content was token-expansion of a formulaic nature which I did not enjoy. LLMs have mastered this kind of long-form token-expansion.
This is assuming people are using an LLM in good faith, obviously. One day, perhaps LLMs will learn to express what someone is saying in an elegant way that is enjoyable for people like me to read. But even then, I will have the difficulty of distinguishing whether this is a human speaking through an LLM in good faith or a human who has set up a machine to mimic a human.
The latter is undesirable to me because I have access to the best such machines at a remarkably low cost. Were I to desire a conversation with an LLM, it is trivial for me to find one. I'm not coming here for that[0].
A sufficiently insightful LLM which prompts my thinking in certain ways wouldn't be unwelcome to me, I suppose. I have a couple of friends for whom I still go on Twitter to read what they say, even after I have stopped using the site routinely. If I found out the posts were entirely an LLM, I think I would still read them, simply because I find the posts useful, with a sufficiently high signal-to-token ratio.
0: Certainly, if every place only spoke about things I was interested in and never in things I was not interested in, I wouldn't need separation of interest spaces at all. But the variation of interest vectors for different humans has made this impossible.
LLMs shouldn’t be used for communication at all if you want any form of authenticity.
It's one thing to have Claude polish a message and another thing for it to write out an entire message.
There's a huge difference between having AI clean up a text you send privately to someone you have worked closely with for years, versus a broad spectrum text sent by a VP to hundreds of people or more. The first case is reprehensible, for the reasons the author lays out. But as for the second case, corporate doublespeak has been a meme since long before the advent of AI and it would remain even in some AI-pocalypse. Just because your boss puts out sanitized language in a mass communication, doesn't inherently mean your boss won't still be present and real with you in a more private setting.
Edit: And now I forgot the most important part. When the knowledge the LLM retrieved is insufficient to answer a colleague's question, or the agent skill cannot execute the task my colleague requested, it asks me just for the missing info or skill, and with me (the human) in the loop, work gets done many times faster. Eventually it will replace me and all my colleagues one day. Looking forward to doing other stuff then.
Ugh, you are not entitled to get to know me. There is a threshold between all that I share with the world and the rest of me. Hell, not every person gets the same picture, and that's deliberate and healthy--my customers don't get to know what my proctologist knows. My mother doesn't get to know what my wife knows.
You don't get to know all of me, because I don't trust you.
This post comes across as sweet, and innocent. It also comes across as absurdly self-entitled, and it's not an OK posture to take towards the world. It's not OK when the police take this posture, it's not OK when private companies take this posture, and it's not OK when strangers on the internet take this posture.
You are entitled to withdraw from relationships that don't fulfill your emotional needs. A reasonable audience for this missive is your girlfriend, your child (who relies on you), or your employer (to whom you are vulnerable).
It generally has enough "activation energy" to get me over the hump of wherever I've been mentally stuck.
There are grammatical mistakes, and then there is sloppiness. Only the second makes me disregard someone's comment.
> I will lose the credibility of my message if there is too much mistake...
The correct way to write this is "if there are too many mistakes", because mistakes are countable and plural. And it's fine to make grammatical mistakes if English is not your native language. You can only get better by practising :-)
(only half-joking, a part of me fears that this is the reality we’re moving towards)
If I asked you for your particular experience on something and got an obvious LLM reply, I might say nothing or I might ask if it was an LLM, but either way I’m unlikely to ask you something or trust you ever again. Which also works for you, I guess, since it’d be one fewer person taking up your time. But if you had instead told me “I’m too swamped to help right now” I would’ve instead offered to help take some burden off your back.
It also sounds like you were overworked, and when you started using LLMs you stripped yourself of the chance to work with a colleague.
Otherwise, not using em dashes, adding some mistakes and writing more like how you think/talk helps :)
i can ramble without an LLM, and i suppose you can ask an LLM to keep it short. but both are results of not taking the time to craft an appropriate message.
If this is true, you really want to be fired. That is a horrendous work environment, and you should quit if at all possible.
Most workplaces (any certainly any good workplace) will seek to understand, not fire you immediately.
I personally think that the people who can't be bothered to actually write authentic messages, and assume that everyone will just read their word salad full of repetitive AI patterns, are the ones acting entitled.
likewise for friends (not just your girlfriend), getting to know you is part of developing friendship.
so family, friends, work, business, that pretty much covers everyone you deal with on a regular basis.
i would go as far as saying that if you don't trust me then you have no business even communicating with me unless the interaction is incidental.
If your comment is at all indicative of how you are in real life, I really don't think you have to worry about people wanting to get to know you.
True: Nobody is entitled to be treated nicely. Nobody is entitled to an open, friendly relationship. Nobody is entitled to get to know you. If we only did what we were entitled to do, and received what we were entitled to receive, the world would be an even shittier place than it already is. We have enough people walking around with the "You're not entitled to me being nice, so I'm not gonna be! nyaaaaa!" attitudes.
For writing as thinking with trouble starting from scratch, LLMs are the most important technology to emerge in my lifetime. Microblogging filled that gap in a way, but it had too many downsides.
It's funny, though. For computer-to-computer conversation, we invented (deflate + inflate) algorithms to save bandwidth, time, and money.
On the other hand, for human-to-human communication, we are in the process of inventing an (inflate + deflate) method, and at the same time we are spending insane amounts of time, money, and bandwidth to make it possible!
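The machine-to-machine half of that quip is concrete: the standard-library zlib module does a lossless deflate/inflate round-trip, and padded, repetitive prose compresses extremely well. A minimal sketch:

```python
import zlib

# Repetitive, padded prose compresses dramatically.
message = b"I hope this message finds you well. " * 50

compressed = zlib.compress(message)     # "deflate": shrink before sending
restored = zlib.decompress(compressed)  # "inflate": restore on arrival

assert restored == message              # the round-trip is lossless
print(len(message), "bytes shrunk to", len(compressed), "bytes on the wire")
```

The irony the comment points at is that LLM "polish" runs this loop backwards: expand a short intent into padded prose, then ask the reader to compress it back down, and unlike deflate, that round-trip is lossy.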
Something I have seen a lot of people talk about in the comments here, and do in practice within my company and among friends and family, is saying something and then letting Claude or GPT rephrase it into a prompt that they'll then use.
In my experience, this will almost always bring about worse results than if you communicated directly with the LLM. I believe this happens because of a few reasons.
1. LLMs tend toward word inflation: they'll create plausible-sounding prompts, but the words they introduce have a higher propensity to produce worse, cookie-cutter results from other agents, coding assistants, writing assistants, or whatever else consumes them.
2. By putting a layer in between what we're saying and what the LLMs interpret, we're not honing our ability to articulate and prompt better and wholly depend on the intermediary getting better or being able to interpret better, which does not translate well in practice.
3. Anecdotal, but in my case, when I was doing this myself, it was because I assumed I was harder to understand and not articulate enough to get good results, so I tried to speed things up by using an intermediary. What I learned, though, was that training myself to be articulate and not to doubt myself was easier than getting results through the LLM interpreters.
of course with anything, ymmv.
We may choose words for a reason, but sometimes we choose the wrong words. Sometimes two words are spelled similarly and we pick the incorrect one; sometimes our understanding of a word's definition is simply wrong. Either way, it can be problematic when you say one thing while meaning another.
Now, I grew up in the olden days; I reach for a dictionary in such cases. On the other hand, I can certainly understand why people would reach for an LLM. An LLM can examine an entire document at once, it will catch errors that you are not familiar with, and it will catch a much larger range of errors. Is it perfect at doing so? Of course not, but it is better than nothing.
And I use it for myself and what I send to others is 99% written by me.
For people who treat writing, especially business writing, as a craft for communicating ideas, seeing AI slop is like nails on a chalkboard.
I have copies of https://en.wikipedia.org/wiki/The_Elements_of_Style and https://workingbackwards.com/ and I've been trying to shift us from a slides-first or quick-email-first culture to a serious writing-first culture.
I genuinely care about this stuff. Thoughtlessly blurting out pages and pages of vanilla unedited LLM output seems disrespectful to the reader.
As writer you're saying: I didn't care enough to craft my message personally, here read this generated content I haven't even seriously edited.
And for the reader it's saying the same: this guy sent me a document to read, and I need to sift through it to figure out whether there's any actual merit, novel ideas, or actionable information here.
The asymmetry of effort is disrespectful IMO.
There are good arguments to get to know someone "mistakes and all", I just don't think this is a particularly good one. No matter how much you (think) you know someone, they probably know them(selves) better.
Most people in this thread are talking about the output stage. You know: polish my text, fix my grammar, generate my message. That's where you lose your voice. But the blank page problem borski describes isn't really a writing problem, it's a thinking problem. Once you know what you want to say, saying it tends to be the easy part for us writers (sometimes lol!).
The most useful thing I've found is using AI to figure out what I actually think, using it for rubber ducking, exploring angles, stress-testing arguments, and then closing the tab and writing it myself. You get the cognitive help without losing the (or your) soul. I've output more writing in my own genuine voice in the last year than I did in several years prior, and it's because I use AI for clarity instead of replacing my output.
The thing that worries me most is that it's going to redefine the way we write. We absorb language. To compensate for all this AiSpeak I consume, I need to read more literature.
What’s human writing going to look like in a few years if this trend doesn’t stop? I believe that the LLMs will catch up soon and introduce more variance and fewer words designed for impact in their language, delivering us from this AiVerse into one where AI writing is almost indistinguishable from human writing. But until then, we must read more.
“Powered by AI” is a trendy marketing term on every website today. In a couple years it’ll be considered blasé, and while AI features will still exist, they’ll be called something like automation or workflows.
I think I’ve finally pinpointed why it is that I have an allergic reaction when I’m confronted with a text that has me as the intended recipient, but that has been run through an LLM to change or “clean up” the wording - especially if it’s internal communication or even direct communication. Well, at the very least, I think I’ve found the vocabulary to express the idea:
When you run your message through an LLM, it will inevitably obscure what you actually wanted to say; we choose words for a reason after all - even if they’re sometimes not the right words.
But what’s far worse is that it robs the intended recipient of the ability to actually interpret the message according to the accrued knowledge of how you write, and the subtler notes of the tone your message carries, your choice of emphasis or omission, and so on.
As you interact with people, you build this atlas of implicit knowledge about them; it’s the reason why “…we need to talk” coming from two different people might carry vastly different meanings and emotional undercurrents. I know you, and that knowledge informs how I read your text. And if I don’t know you, the words you choose combined with the interactions we have help me build that understanding.
In short: running your texts through the genericizer causes a disruption of the synchronization process between conversational partners. The social handshake component. The unseen fabric that allows us to communicate effectively and honestly. It robs me of getting to know you.
Make mistakes, use idioms that don’t work in English, be too frank or too flowery in your wording, but give me the courtesy of the ability to interpret the message in the context of all aspects that actually went into its creation. Let me get in tune with you.
It's full of fluff. Analogies that sound like something a 12 year old would make, but make no sense when you stop to think about them.
It's full of baloney that the author didn't even intend to communicate.
That's where the "soulless" part comes from. There's no consistent mind behind the writing with opinions of its own, formulated into one understandable framework it's trying to convey. It's just a mishmash of BS that only superficially resembles it made to trick us.
This depends on what you consider AI writing. If I dictate what the AI must write word by word verbatim, is it considered AI writing? Is it something to do about the percentage of the text generated? Does it have to do with the vocabulary the AI knows? What if I don't know any other words than the AI does? Does it have to do with the efficiency of communication?
Nevertheless, I don't think AI writing can ever be human writing, even if it uses the same words as a human and is indistinguishable. This is because humans participate in society as independent conscious actors, and thus communication has meaning. Text only becomes communication when the writer has intent and is willing to participate in society.
My favourite people, interactions, stories are all from those who are outside of that bell curve peak. I want weirdness and quirk.
Only with humans, it's admittedly way more fun. :)
I'm curious as to what you mean by this. I assume you don't mean it literally, as that would be trivially falsifiable (for example, the text readout on a digital caliper doesn't have "intents", yet it absolutely communicates meaning), but I can't think of another way that you might have meant it. Could you elaborate?
Chris McCausland says he relies a lot less on others due to AI.
It has been quite effective in showing how diversity can influence opinions when he has been on radio programs and offered his perspective. Conversations that start with the usual circular crapping on AI, which I'm sure everyone here has witnessed, become much more nuanced when he describes how his life has changed.
That's why diversity is important. Don't do it like Star Trek Discovery which has 'I know! let's use diversity to solve this problem. Great! That was super effective! Now everybody go back to your minor roles'
Like... That. Rhetorical ellipsis. Like you see in a 12 year old's fanfic.
I know one of the AIs had a style change. I think Grok. But it started using drama dots so now they are everywhere.
And unlike the em dash, _nobody_ notices. _Nobody_ sees it.
I just hope the LLMs don’t come for parenthesis for aside comments.
Discovery was just so ham-fisted when it tried to make points, and it missed massive opportunities where they could have been made organically because of the situation.
I really thought the first far-future season was going to be a comment on colonialism, but no, they just turned up, said they were the more civilised ones, and y'all should join our new improved federation. The opportunity was just sitting there: show people figuring out their own culture and not appreciating an interloper dictating how their lives should be simply because they have a fancy starship.
Not to mention declaring their ship sentient because it dreams. It just screams 'conform to our expectations of what sentience should be and we will accept you as a person.' They portrayed the exact opposite of what they intended.
(sorry for the rant, I was mauled by a Federation as a child)
I always thought it was one of those banana hair clips, spray painted gold.
As an admirer of low budget creativity, it’s very inspiring. But it still looks ridiculous.