It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone who is trying to create interesting or novel ideas to use LLMs to generate them.
However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.
For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature only to preserve context (and not really to explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have as context. No more tedious writing or reading, but also no missing context.
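To make that concrete, here's a minimal sketch of the loop, assuming the OpenAI Python SDK and a tagged git history; the model name and the RELEASE_NOTES.md path are illustrative, not what my team actually runs:

    # Hypothetical sketch: an agent summarizes commits since the last tag
    # into notes that future agent sessions load as context.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    # Collect commit messages since the most recent tag.
    last_tag = subprocess.check_output(
        ["git", "describe", "--tags", "--abbrev=0"], text=True
    ).strip()
    log = subprocess.check_output(
        ["git", "log", f"{last_tag}..HEAD", "--oneline"], text=True
    )

    # Ask the model to put breaking changes first, since that's the
    # context a future agent (or human) actually needs.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Summarize these commits as release notes, "
                       "listing breaking changes first:\n" + log,
        }],
    )

    # Append to a notes file that future agent sessions read on startup.
    with open("RELEASE_NOTES.md", "a") as f:
        f.write(resp.choices[0].message.content + "\n")

Nobody has to write them and nobody has to read them, but the context is still captured.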
I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).
This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.
The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.
My own experience, however, is that the best models are quite good at helping you with those writing and thinking processes: finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it.
While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. I would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while, sure, I often have to say "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance.
The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.
>Essay structured like LLM output
Hmmm...
This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.
But I guess some people either choose a wrong job or had no other option. I'm happy to not be in that group.
It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).
With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.
That's why I think we're seeing ideation output rise while coherence (and perhaps originality) falls, alongside a decline in creative thinking and creativity[0].
[0] https://time.com/7295195/ai-chatgpt-google-learning-school/
I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.
This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low-information-density paragraph, and then summarizing those paragraphs on the other end. What’s the point?
Unless the idea is trivial, LLMs are probably just getting in the way.
This applies at a business level (most software shops shouldn't have full-time bookkeepers on staff, for example), but applies even more in the AI age.
I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.
Same with writing. There's an old joke in the writing business that most people want to be published authors more than they want to go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.
When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.
Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.
So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.
> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.
I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.
This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.
Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4
This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.
From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.
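A minimal sketch of that last step, assuming the OpenAI Python SDK; the filenames, models, and prompt are illustrative:

    # Hypothetical sketch: dictate first, then let AI transcribe and do a
    # light cleanup pass without rewriting your voice.
    from openai import OpenAI

    client = OpenAI()

    # Transcribe the raw voice memo.
    with open("draft_dictation.m4a", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        )

    # Cleanup only: grammar and filler words, no restructuring.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Lightly clean up this transcript. Fix grammar and "
                       "remove filler words, but do not rewrite or "
                       "restructure it:\n" + transcript.text,
        }],
    )
    print(resp.choices[0].message.content)

The key is keeping the AI's role mechanical: it handles the artifacts of the medium, and the thinking stays in the recording.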
A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize them, someone is happy, and they are filed for the record.
I call these "ceremonies": they are a requirement we have, they help no one, we don't know why we have to do them, but no one wants to question them.
Now? I am pushing so much of my writing into prompts for AI, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar are mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to the AI but may be lost on people. Or at least pre-prompt-writing people.
Nah, nearly anyone can go to the gym and get some benefit from it. You don't need to be that skilled to get a personal trainer or just run/walk on a treadmill. Even mice and hamsters can do it.
Writing is way harder for a lot of people and does not in any way come naturally to most, unlike, say, the ability to move or to speak. Writing requires precision, organization, syntax, grammar, etc., which are distinct from the 'idea generation' process. Some people will find they cannot articulate their ideas well in long form and need an LLM, or that the output is so bad that outsourcing the writing to an LLM makes much more sense and saves tons of time. If the alternative for many people is "no writing" vs "LLM-assisted writing", the latter may be better.
This is why there can be a disconnect between having a correct "big idea" or otherwise being directionally right, but not being able to articulate it well in written form, as we see with the likes of Elon Musk (his sometimes cringy tweets) or Richard Branson (dyslexia).
People who are eloquent or even just competent at writing way overestimate how common the ability to write well is.
When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.
In the office, that review step gets outsourced to your coworkers.
Having a coworker who uses ChatGPT to generate slides, design docs, or PRs is terrible because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.
I've definitely lost something since migrating my Artist's Way morning pages to the netbook. (Worth it, though, to enable grep and, now, RAG.)
Explaining a design, a problem, etc., and trying to find solutions is extremely useful.
I can bring the novelty; what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
That being said I don't think LLMs are idea generators either. They're common sense spitters, which many people desperately need.
I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
It's not the same thing as talking to someone (or a group) about something.
No. Don't pretend your taking shortcuts is less questionable because everyone else is doing it too. We're not. Own it yourself, don't get me involved.
> I am able to be so much more effective by sheer volume of words
If you think value comes from volume of words you really need to understand writing better.
The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.
When you write a document or essay, you are posing a question and then answering it. For example, a PRD answers the question, “What should we build?” A technical spec answers, “How should we build it?” Sometimes the question is more difficult to answer—“What are we even trying to accomplish?” And with every attempt at answering, you reflect on whether you’re asking the right question.
But now, of course, we have LLMs. I’m seeing an increasing amount of LLM-generated documents, articles, and essays. I want to caution against this. Each LLM-generated document is a missed opportunity to think and build trust.
The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.
The second order goal of writing is to become more capable. It is like working out. Every time you do a rep on the boundary of what you can do, you get stronger. It is uncomfortable and effortful.
Letting an LLM write for you is like paying somebody to work out for you.
There are social effects to LLM-generated writing too. When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.
It undermines my credibility as a person who could lead whatever initiative comes out of this document. That’s unfortunate. I could have used this opportunity to establish credibility.
LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
LLMs are useful for research and checking your work. They can also work well for quickly recording information or transcribing text (neither of which are what I mean by “writing”, as in “writing an essay”).
They are particularly good at generating ideas. They thrive in this use case because if they generate 10 things and only one is useful, no harm is done. You can take what is useful and leave the rest behind.
These LLMs will increase efficiency in delivering software. But in order to make the most of them, we need a simultaneous rise in our level of thoughtfulness.
Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally, I have the AI draft up a document (though you generally have to tell it to be as concise and clear as possible).
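For what it's worth, that two-pass flow is easy to script; a rough sketch, assuming the OpenAI Python SDK, with the prompts, model, and filename as illustrative placeholders:

    # Hypothetical sketch of the two-pass flow: critical review first,
    # then a concise draft once the holes are fixed by hand.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    with open("design_notes.md") as f:
        notes = f.read()

    # Pass 1: poke holes before any drafting happens.
    print(ask("Give a critical review of these notes. List gaps, "
              "contradictions, and weak arguments:\n" + notes))

    # (Fix the notes by hand, then re-run for pass 2.)
    print(ask("Draft a document from these notes. Be as concise and "
              "clear as possible:\n" + notes))

The point of splitting the passes is that the critique lands before the polish, so you're fixing your own thinking rather than the model's prose.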
Unfortunately they can also validate some really bad ideas.