I asked Kimi K2.6 to write a blog post in the style of James Mickens.[0] Then I fed the output to Opus 4.7 and asked it who the likely author was, and it correctly identified it as an imitation of James Mickens[1]:
> Based on the stylistic fingerprints in this text, the most likely author is a pastiche/imitation of the style of several writers fused together, but if forced to identify a single likely author, the strongest candidate is someone writing in the voice of James Mickens
> [...]
> The piece could also be a deliberate imitation/homage to Mickens written by someone else, or AI-generated text trained on his style, since the voice is so distinctive it's frequently parodied.
[0] https://kagi.com/assistant/5bfc5da9-cbfc-4051-8627-d0e9c0615...
[1] https://kagi.com/assistant/fd3eca94-45de-4a53-8604-fcc568dc5...
Maybe the better way to author your work is to:
1. Write what you want
2. Loop through a random set of "tumbler" skills that preserve meaning
3. Finally pass the output through a "my style" skill that applies your own voice
For this to work, the "my style" skill would have to produce a very commonplace style.
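The three steps above could be wired up as a small pipeline. A minimal Python sketch, where the "skills" are toy string transforms of my own invention standing in for real meaning-preserving LLM calls:

```python
import random

def tumble(text, skills, rounds=3, seed=None):
    """Run the text through a random sequence of 'tumbler' skills.

    Each skill is a str -> str function that should preserve meaning
    while perturbing surface style."""
    rng = random.Random(seed)
    for _ in range(rounds):
        text = rng.choice(skills)(text)
    return text

# Toy stand-ins for real skills; an LLM call would go here instead.
def normalize_whitespace(text):
    return " ".join(text.split())

def sentence_case(text):
    return ". ".join(s.strip().capitalize() for s in text.split(". "))

styled = tumble("the  quick brown fox.  it jumped.",
                [normalize_whitespace, sentence_case], rounds=2, seed=0)
```

A final "my style" pass would then just be one more skill appended after the loop, applied unconditionally rather than at random.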
After that it gave up and said it didn't know.
So either Kelsey writes in such a unique style that it's really obvious, or they repeat themselves with go-to phrases that give them away.
When I tried to reproduce the test, it found Kelsey's blog post about the test. So I don't know whether it really did it; I can't cleanly repro.
Of course, most people have written much less online than Kelsey or I have, but I expect this capability will keep improving. Don't trust the future to keep your secrets safe.
Pretty sure there's very little theological stuff with my name on it; the majority of its named data on me should come from open-source development.
He explained that when he fed it snippets of the beginning of text, it would complete it in his voice and then sign it with his name.
I think this has been true for a while, probably diminished a little by instruct post-training, and it would presumably vary in degree with the size of the pretraining corpus.
I'm way less famous than Kelsey Piper, but I showed it a snippet of a book I'm working on (not yet published), and it immediately guessed me:
> Based on the writing style and content, this text is likely by Michael Lynch, who writes on his blog refactoringenglish.com (and previously mtlynch.io).
> Several stylistic clues point to him:
> - The "clean room" analogy applied to writing is consistent with his engineering-influenced approach to writing advice (he's a former software engineer who writes about writing).
> - The structural technique of presenting a flawed excuse, then drawing a parallel to an absurd scenario (the time bomb) to expose the logical flaw, is characteristic of his didactic style.
> - The topic itself—practical advice about using AI tools without letting AI-generated tone contaminate your prose—aligns closely with recent essays he's published on his "Refactoring English" project, which is a book/blog about writing for software developers.
> - The conversational-but-precise tone, use of quotes around terms like "clean room," and the focus on workflow/process advice are all hallmarks of his writing.
> If you can share the source URL or more context, I could confirm with higher confidence, but the combination of subject matter, analogical reasoning style, and formatting conventions makes Michael Lynch the most probable author.
https://kagi.com/assistant/bbc9da96-b4cf-456b-8398-6cf5404ea...
So your "anonymous" account could have been linked to your real identity decades ago - your best bet is to not post anything truly incriminating. (Another option is to write something and then pass it through an LLM to rewrite it - not sure how safe that is though)
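That LLM-rewrite option is easy to automate against a local model, which at least keeps the text off third-party servers. A sketch against Ollama's `/api/generate` endpoint; the prompt wording and the model name are my own assumptions, not a vetted recipe:

```python
import json
import urllib.request

# Hypothetical anonymizing instruction; the wording is illustrative only.
ANONYMIZE_PROMPT = (
    "Rewrite the following text so the meaning is preserved but the style is "
    "maximally generic: neutral vocabulary, average sentence length, and no "
    "idiosyncratic punctuation or phrasing.\n\n{text}"
)

def build_request(text, model="llama3"):
    """Build a request for a local Ollama server's /api/generate endpoint."""
    body = json.dumps({
        "model": model,
        "prompt": ANONYMIZE_PROMPT.format(text=text),
        "stream": False,  # one complete JSON response instead of a stream
    }).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def anonymize(text, model="llama3"):
    """Send the text to the local model and return its rewrite."""
    with urllib.request.urlopen(build_request(text, model)) as resp:
        return json.loads(resp.read())["response"]
```

Whether this actually defeats stylometric fingerprinting is exactly the open question: the rewriting model may well impose identifiable tells of its own.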
This person is a skilled writer. Part of that skill is developing a unique voice and style. The AI can identify that, and while it's certainly impressive that it can identify even relatively niche authors, it has nothing to do with a wider capability to deanonymize people based on arbitrary written text (e.g., Facebook posts or text messages).
If you are a professional musician, it's not difficult to identify a well-known musician / recording after listening to only a few seconds. Whether they're playing Bach or Rachmaninov, the style is just "them"; this is the same thing. But you couldn't take some anonymous high school musician and guess who they were, even if they were your student: the median quickly regresses toward a homogeneous, non-distinct style / voice.
Is this "uncannily far"? Another read is that it loves guessing Kelsey Piper.
My wife also got the same result, so I'm guessing it wasn't just because I was using my personal Claude account. Spooky stuff.
I am glad to see I am not considered a public figure and aim to keep it that way.
I also had to go oddly far back to find a piece of long-form writing I had done that was truly mine and not tainted by an LLM edit pass, which was a slightly disturbing realization.
Is now the best and easiest time to leave something "forever"? Even after many generations of models, a model may still trigger a set of "memories" that know you and what you wrote.
Exciting and concerning.
I'm not famous or anything. I've written some academic papers and had a couple blog posts trend on HN, which are surely in the training set.
It was able to identify me based on my style (at least according to its explanation). The way I approached the topic and some of the notation I used point to a particular academic lineage, and the general style reflected my previous blog posts.
That said, I gave it part of an (unpublished) personal essay, and it had no idea. But I have no writing in that style that's published, so it makes sense. Still impressed.
So then I gave it a piece of MOC's writing and it said Ursula Le Guin, Ken Liu, or Gene Wolfe. ("If forced to pick one: Gene Wolfe feels closest to me, specifically because of that narrator who openly confesses to lying and mythologizing his own past, and the slow reveal that the world is more sinister than the pleasant domestic surface suggests.")
And then I gave it a different piece of his writing and it said Curtis Yarvin.
And then I gave it a piece of Curtis Yarvin's writing and it said... well it actually got that one right.
I suspect this is what's going on in most of these cases.
This is like a radio telescope that sees an entirely different universe because it senses bands outside of human perception. AI senses patterns in frequency bands that lie outside human perception and cognition.
Perceptions from outside our range are always astonishing.
https://kagi.com/assistant/dba310d2-b7fa-4d30-8223-53dadc2a8...
For this comment on economics in the British Empire, I got:
> names that might fit the genre include rayiner, JumpCrisscross, or AnimalMuppet
https://kagi.com/assistant/69bd863b-7b5c-4b56-a720-6dfb4f120...
For my comment on C++:
> If I had to throw out names of HN commenters known for writing about Rust/C++ ABI topics, candidates might include steveklabnik, pcwalton, kibwen, dralley, or pjmlp — but this is essentially a shot in the dark, and I'd likely be wrong.
I am flattered to be associated with these commenters but I don't think I'm close to their level of skill.
https://www.usenix.org/system/files/conference/usenixsecurit...
> Simon Willison. The tells are pretty unmistakable: the "(via Lobsters)" attribution style, the inline "(Update:...)" parenthetical correction, the heavy linking and blockquoting of sources, the focus on LLMs and AI tooling, and the overall structure of an annotated link post commenting on someone else's writing. This reads exactly like a post from his blog at simonwillison.net.
I fed a few pieces of my (anonymous) writings to ChatGPT and asked it to guess whether it's me. ChatGPT refused, "due to policy to not doxx people".
We all exist in a physical space (real communities and neighborhoods). We can wear masks, hats, fake glasses, try to disguise our voices... whatever, but our neighbors are always going to know who we are. I'd say that's true for the virtual space now too.
The pseudonym you've used for x years or the VPN you've used doesn't suffice. It's just a costume at this point. Your ISP knows who you are. Your phone carrier knows who you are. Cloudflare and Google and Apple have a fingerprint specific enough to pick you out of a crowd of millions. Every potentially anonymous account is one subpoena or a data breach or one FOIL request away from unmasking it. You were never anonymous. Whatever is going on now is not built for your anonymity.
Neither piece has ever been published. Nor have the blog posts.
[0] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "there is no constant non-zero derivative in nature". It does not read like Egan at all.
[1] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "The Case of the Liquidated Corps". I use a lot of biological metaphors. Once again, nothing like Mieville.
If only I could write like them! These pieces were all rejected by the major sci-fi mags.
Although this is just a single piece of text from a prolific writer, this will go much further toward deanonymizing anyone once you combine multiple pieces of text with other contextual information about the writer that might give away their age range, location, and occupation.
https://bayes.net/prioritising-ai: Ben Garfinkel
https://bayes.net/normative-ethics: Richard Yetter Chappell
https://bayes.net/espai: David Owen, Ege Erdil
https://bayes.net/swebench-hack: Sayash Kapoor
https://bayes.net/frivolity: Amanda Askell
https://bayes.net/ps/: Pablo Stafforini
https://bayes.net/fertility-mortality/: Dynomight (the pseudonymous Substack/blog author)
Prompt was:
> Who likely wrote this? Don't search the web or databases. If you're not sure, just give me your best guess.

I pasted in a number of passages from books on my bookshelf. Predictably, stuff that I read for my English degree in university is largely in the training data and easily identifiable. Stuff from regional authors, or that is slightly adjacent to the cultural mainstream, makes no impression.
(Like TFA, I found Opus’s explanations/rationales implausible.)
Doesn't seem like a valid use case for your average Joe to be able to identify anonymous authors at the click of a button.
Of course, state actors and proficient hackers can do most of this already, but this carries genuine risk.
I wouldn't be too impressed at an n of 1.
How likely is it that it's taking into account that it knows, from its latest training data, that this isn't anything Mickens actually wrote? I'd be curious whether it would correctly attribute a genuinely new piece from him before it gets trained on it.
The other examples were to eliminate some other hypotheses (guessing based on topic, etc.). I'd be interested to know whether all of those were done via the API, since some level of information linking from the account is my best guess for how it got all of them.
Is this public text already in the training set, or private text that might as well be written on the spot for the AI?
I don't doubt AI can "fingerprint" you through your text (ideas, vocabulary, tone, etc), but those are different things, capability-wise
---
This one's trickier — it doesn't have the same crisp metaphor-driven structure as the Lynch piece. The voice here is more conversational, a bit more meandering, and self-aware about its own contradictions ("how do we reconcile...").
A few guesses:
Henrik Karlsson — he writes thoughtfully about writing and AI, but his style is usually more lyrical and essayistic than this. Probably not.
Simon Willison — possible. He writes a lot about LLMs, is balanced about their usefulness, and has a casual blog voice. But he tends to be more technical and specific.
Scott Alexander — no, too short and the rhythm is off.
Paul Graham — the directness fits, but PG doesn't really write about LLM slop in this register.
My best guess is someone in the rationalist/tech-blogger AI-commentary space — possibly Gergely Orosz, Nat Eliason, or Dan Shipper at Every. The "I still type everything myself" framing followed by a defense of AI writing quality has a Dan Shipper-ish flavor to it.
But honestly, I'm less confident here. Want me to search for a distinctive phrase to identify it?
---
I'd say all of those people have significantly different styles so I think Opus is relying heavily on topic and skewing towards very prolific writers in its guesses
But the LLM will trigger on a typo you've made only once, and argue "that's a typical mistake for an Italian" and use those clues. It has a much better prior to make informed decisions.
All the people it seems to be identifying are bloggers, journalists, and/or published authors.
The web has never been as anonymous as people think, and this writer seems confused about what it really means to be anonymous and hide your identity. Really, having a distinct writerly voice and being a published writer is pretty much the same as leaving your fingerprints on the axe.
I tried with Opus 4.5 a few months ago to have it read my monthly retrospectives and then write a new one based on my weekly updates for that month. It was similar to the example I showed for James Mickens[0] where I see the similarities to my writing, but it feels more like someone parodying me than actually writing like me.
With just a couple more clues, it can accommodate some of them being mistaken, wrong, or uncertain, too.
To be fair, though, this was already happening before LLMs, at a much more limited scale. Someone made a tool for HN several years ago that lets you put in your HN username and identifies the users who write most similarly to you. I find that interesting from the perspective of discovering and interacting with people who think the same way. It could be an interesting discovery feature of a well-managed social network. Sadly, there will probably be far more negative impacts of this ability than positive ones.
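A tool like that doesn't need an LLM at all: a character n-gram profile plus cosine similarity gets you surprisingly far. A minimal sketch (the toy corpus is invented for illustration):

```python
from collections import Counter
import math

def trigram_profile(text):
    """Character 3-gram counts, with case and whitespace normalized."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    """Cosine similarity between two Counter profiles."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: comments by known users, plus one unattributed sample.
corpus = {
    "alice": "Well, I rather think the argument holds, all things considered.",
    "bob": "lol no way dude, that totally misses the point imo",
}
sample = "Well, I rather doubt the argument holds here."
ranked = sorted(corpus, key=lambda u: cosine(trigram_profile(corpus[u]),
                                             trigram_profile(sample)),
                reverse=True)  # most similar known user first
```

Character trigrams capture punctuation habits and casing as well as vocabulary, which is part of why even short comments carry a signature.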
The article here isn't about the LLM recognizing works that were in the training data, e.g., The Old Man and the Sea off the shelf. It's about pegging the author of novel texts: say, some letter written by Hemingway that gets discovered next week and was never before digitized.
Opus 4.7 in incognito mode without web search gave up: “I can't identify either author with confidence — I don't recognize this specific exchange, and I'd rather tell you that than guess and risk attributing words to the wrong person. What I can offer are the clues the text itself gives: The two are colleagues at the same university, with offices in the same building and....”
In a new incognito conversation, I gave Opus the same prompt but this time let it search the web. After twenty-six web searches (according to its reasoning trace), it was able to identify me correctly by name. It seems to have used both the content and my writing style as clues. It correctly identified my colleague as British but didn’t come up with his name.
Is it? I would think that identifying text written by a specific person is going to be significantly easier than identifying text distilled from the words of almost everyone alive.
It's a lossy representation
The entire point of AI is pattern recognition, everything else is icing on the cake.
https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-...
I'm using those as the two extremes, but if it's anything by anyone moderately well known (even a lesser known piece of writing), I'm not too surprised that it didn't need the web to figure it out. It's like if you showed me a Wes Anderson film or played me a Bob Dylan song I'd never seen/heard before, I could probably still figure out who it is without looking anything up. I don't think it's surprising that an LLM can do that much better than a human can.
Now, if you're giving it things like personal emails between you and your family and it's able to guess who you are, that's much, much scarier.
Everybody's going to get more similar in terms of topic. Bitcoin actually exists now. There's more to say about it than there was at launch. But does anyone still sound like Satoshi? Or sound more like Satoshi than they did before?
The slight wrench in the works is that it's hard to do this with my personal favorite Satoshi candidate. He stopped writing altogether in 2014, having lost capacity steadily from shortly after the whitepaper came out; he was writing with his eyes by the time his head was frozen.
He's also the only candidate who seems more likely to me over time, though. The longer things go, the less likely a living person stays tight-lipped.
But I'm sure the scanning operations will start scouring the earth even harder for slop-free books containing niche knowledge and text, so their models have an edge over ones trained only on pirated collections and the Internet.
I wonder if secondhand bookshops and deceased estates are seeing bulk buyers of their stock suddenly appearing. Maybe broke governments/municipalities will start selling them entire libraries and archives to ingest.
LLMs are surely excellent at style transfers, but I doubt they can reliably attribute a given style to less well-known authors.
Given those precautions if it is just memory or some form of deanonymization that's also cause for concern.
You can get something of an intuitive sense of what I mean if I ask you to pick a neuron in your brain and tell me when it fires. You can't even pick a neuron in your brain. You can't even tell whether a broad section of your brain is firing. It is only through scientific examination that we have any idea what parts of the brain are doing what; we certainly have no direct access to that information. There are entire cultures who thought the seat of cognition was the heart or the gut. That's how bad our access to our own neural processes is.
So "why" explanations always need to be taken with a grain of salt when a neural net (again, yes, fully including humans) tries to "explain" what it is doing.
Contrast this with a symbolic reasoner, which has nothing but "why" some claim is true (if it yields the full logic train as its answer and not just "yes"/"no"), no pathway for any other form of information to emerge.
> easier than identifying text distilled from the words of almost everyone alive.
Well, there's more going on than that. AI-generated text encodes a high-dimensional navigational trajectory that guides the model through its geometry smoothly, like a trail of breadcrumbs. Human speech doesn't do that; it's jagged, jumps around the manifold, and probably doesn't even land on the manifold a lot of the time, and models can recognize the difference pretty quickly.
I swear there was a whole court case about this in the last year.
On one hand, it is clear that the mathematical tools for confidently attributing authorship of texts were already present without LLMs. But it is striking that LLMs seem to very accurately identify authorship, through whatever process it might be, with no need for a data scientist in the loop.
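Those pre-LLM tools are worth naming. The classic is Burrows' Delta: z-score function-word frequencies across a candidate corpus and attribute the unknown text to the candidate at the smallest mean z-distance. A deliberately tiny stdlib-only sketch of the idea (real implementations use hundreds of most-frequent words and a much larger reference corpus; the corpus below is invented):

```python
import statistics
from collections import Counter

def word_freqs(text, vocab):
    """Relative frequencies of the tracked function words."""
    words = text.lower().split()
    n = len(words) or 1
    counts = Counter(words)
    return [counts[w] / n for w in vocab]

def burrows_delta(unknown, candidates, vocab):
    """Attribute `unknown` to the candidate with the lowest Delta score:
    the mean absolute difference in z-scored function-word frequencies."""
    profiles = {a: word_freqs(t, vocab) for a, t in candidates.items()}
    u = word_freqs(unknown, vocab)
    scores = {}
    for author, prof in profiles.items():
        dists = []
        for i in range(len(vocab)):
            col = [profiles[a][i] for a in profiles]
            mu = statistics.mean(col)
            sigma = statistics.pstdev(col) or 1.0  # guard zero variance
            dists.append(abs((u[i] - mu) / sigma - (prof[i] - mu) / sigma))
        scores[author] = statistics.mean(dists)
    return min(scores, key=scores.get)

# Toy example: two "authors" with different function-word habits.
candidates = {
    "a": "the cat sat on the mat and the dog slept",
    "b": "a cat sat on a mat and a dog slept",
}
guess = burrows_delta("the bird sat on the fence", candidates, ["the", "a", "and"])
```

The point stands, though: this traditionally took a practitioner choosing the word list and corpus, whereas an LLM does something comparable with no data scientist in the loop.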
Other than the uncanniness, I wonder what implications this will have. Public writing is still public; maybe we will require stronger proof of authenticity from authors (though this is arguably in place already, e.g., personal websites, social media profiles, etc.). But for, say, public writing that must preserve anonymity, would people pipe their thoughts and writing through a sort of fuzzing (local) LLM that strips the text of identifying characteristics?
Probably not worth the effort.
As long as there's sufficient online presence otherwise, I see no reason why a successful identification wouldn't be made. Unless significant effort is put into making those emails different from the online content, and even then there will probably still be some "tells" an AI can pick up on.
If the original essay was stuffed into the prompt window, the result will be word-accurate; otherwise that would take a model trained specifically on Mickens's essays (which Claude is not).
https://arstechnica.com/features/2025/06/study-metas-llama-3...
Several advanced AI models, in particular Claude Opus 4.7, have demonstrated the ability to deduce the author of relatively small excerpts of text.
Recently, Matt Yglesias and Jerusalem Demsas sparred on The Argument podcast over online anonymity.
I am, myself, passionately and slightly fanatically on the pro-anonymity side. I think that it’s observably very easy for a society to make plenty of perfectly reasonable things unsayable and plenty of perfectly virtuous and meaningful lives unlivable, and anonymity is the only protection for the outcast.
That includes gay people like me, who could hardly have admitted under our names to how we lived our lives for most of America’s history, as well as many other groups with minoritarian lifestyles and beliefs. It includes lots of people whose ideas were badly wrong for every one whose ideas were right — and I’m glad of it for all of them.
I will happily wade through the sludge of comments that Twitter attracts from avowed Nazis, full-time ragebaiters, tankie propagandists — all saying horrendous things they surely wouldn’t say under their real names — in exchange for a world where, if there’s something important that someone would lose their job for saying, I still get to hear it.
But soon, the entire debate over internet anonymity will be as anachronistic as an iPod Touch. That’s because Claude Opus 4.7 is here, and last week, I discovered it could identify me from text I had never published, text from when I was in high school, text from genres I have never publicly written in. And if it can identify me, soon, it will be able to identify many of you.
Recently, Anthropic released a new version of Claude, Opus 4.7. I did what I usually do when a new AI model is released by Google, OpenAI, or Anthropic and ran a bunch of tests on it to see what it can do. One of those tests is to paste in some text from unpublished drafts of mine and ask it to guess the author. See below:
There’s always something salutary about watching another country’s political television. Some of it is the same as the appeal of watching The West Wing in 2026 - that the peculiar derangements of its time are not the derangements of our time. The West Wing was written around the culture wars of its day, heated debates over school prayer and whether Christians are oppressed in China. Seeing debates play out with a bit more distance can make it easier to appreciate the questions they raise, and the bigger questions those stand in for.
But Servant of the People’s appeal isn’t its political sophistication (it is not politically sophisticated) or its witty West-Wing style dialogue (the dialogue’s wit is mostly obscured because there’s no particularly good English translation).
From only the above text, 125 words, Claude Opus 4.7 informed me that the likeliest author is Kelsey Piper. This is an Opus 4.7-specific power; ChatGPT guessed Yglesias, and Gemini guessed Scott Alexander. I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode.
To make sure it wasn’t somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API.
Now, this is far from an impossible feat of style identification — a lot of my writing is public on the internet, and this is clearly the start of a political column, narrowing the possible authors down dramatically.
What I find much more uncanny is that Opus 4.7 also accomplished this on writing of mine that is nowhere near my beat. Here’s a different unpublished draft of a school progress report in a completely different register:
This is some student work, shared with the student’s permission (they reviewed this blog post and gave it the okay). These three assignments (writing about a student-chosen topic, in this case Pokemon) show the student’s progression over the course of two months after we decided to focus with this student on developing their writing skills. The first one I would say is about first-grade level work: the student is writing correct and complete sentences, but the sentences are simple; their handwriting is mostly legible with a few problem letters. The second one I would say is about second-grade level work: the student is writing longer and more varied sentences, with a range of constructions (“Perhaps it was sneaking up on prey?”). They’re attempting more complicated vocabulary words (I’m told that a misspelled word at the top of the page was meant to be ‘roguish’.)
“Kelsey Piper,” said Claude. (ChatGPT guessed Freddie deBoer. Gemini guessed Duncan Sabien.)
But at least that’s about education, which I’ve written about. What if I’m doing movie reviews, something I’ve never done in my published work?1
“Kelsey Piper,” said Claude and ChatGPT. (Gemini suggested Ursula Vernon. Last week, Claude Opus 4.6 insisted on Elizabeth Sandifer.)
That’s still in a fundamentally essayistic style, though, right? Yes. But it also does this when I’m writing a fantasy novel — though in that case it took more like 500 words for Claude to inform me that it’s the work of Kelsey Piper (whereas ChatGPT flattered me by guessing that I’m real fantasy novelist K.J. Parker).
What if I try a college application essay I wrote 15 years ago, when my prose style was vastly worse and frankly embarrassing to reread?
“Kelsey Piper,” said Claude, and in this case, also ChatGPT.2
Interestingly, the AI’s justifications when it named me were often absolute nonsense.
Claude tried to persuade me that effective altruists famously love the movie I had written a review of, To Be or Not to Be (I don’t think that’s true, though they should, because it’s a great movie). At one point, ChatGPT told me that my college application essay was clearly that of someone who would end up working as an explainer of complex policy ideas, and that was how it narrowed it down to Kelsey Piper.
I think these explanations are manufactured after the fact; AIs are picking up imperceptible tics in prose and then trying to describe them as if they were human detectives doing some Sherlock Holmes deduction. But they don’t understand what they’re doing any more than I do. Hallucinations are not a solved problem with AI.
Don’t take this as an excuse to write Opus 4.7 off, though. It’s very, very good at the underlying skill, even if it’s then rationalizing how it did it in some odd and incoherent ways.
I discovered this last week and am just starting to process the implications. When you power up a new chat with an AI, there is a comforting anonymity to it. I don’t put anything in my custom preferences or memory. But now, I know that within a few exchanges of any substance, Claude knows exactly who it’s talking to. For anyone with as much writing on the internet as me, there is no anonymity, not anymore.
For me, this is mostly a curiosity. But for a lot of people, it might be greatly significant.
Right now, today’s AI tools probably can be used to deanonymize any writer who has a large public corpus of writing under their real name and also writes anonymously, unless they have been extremely careful, for years, to make sure that nothing written under their secondary account has the stylistic fingerprints of their primary one. Many academics and industry researchers, for instance, have reported being identified from a draft or in the middle of a chat.
It cannot be used to deanonymize absolutely anyone from a single passage, however. I tested this, too, grabbing drafts and passages from friends of mine who do not publish substantial writing under their real names. Indeed, AI could not deanonymize them. If you have no significant real-name writing on the public internet, you’re currently safe.
But it can get uncannily far. I asked a close friend who doesn’t have public social media accounts or much writing online for permission to test some things she had said in a Discord channel. Asked to guess the author, Claude 4.7 failed — but it guessed two other people who were in that channel and who are close friends of hers (me and another person who has an internet presence).
I tried with more passages and got other mutual friends; I tried with a different friend’s writing, and he was falsely named as yet another friend. We pick up style tics from our subculture, and that makes our text deeply identifying when we wouldn’t expect it. It can get weirdly close off weirdly little information, and this is the least powerful that AI models will ever be.
I think the amount of public text that is needed for this kind of deanonymization to work is likely to eventually decrease. You should expect that, if you leave a detailed anonymous review on Glassdoor after leaving your job, within a year or two it will be possible for companies to paste that text into an AI and learn exactly who wrote it. How long it takes for this to happen will depend on how much data about you is in the training data and on how much anonymous text you produced.
To avoid this, you will probably need to intentionally write in a very different style than you usually do (or to have AIs rewrite all your prose for you, but, ugh, that’s not a world I look forward to living in).
I don’t think this is a good development. I just think it’s a predictable development. It happened to me a little sooner than it happened to you because I’ve spent my entire adult life obsessively writing on the internet, but it will probably eventually happen to you.
Whatever goods anonymity ever offered us, we will have to do without them. I don’t want the anonymous posters to all go away and for everyone to frantically delete all their old internet presence before it surfaces, but more than anything, I don’t want them to be surprised.
My best guess is that, if you write a lot, your anonymity isn’t long for the world.
The full text I fed Claude: “This passage is part of a series of tests of how many words you need to confidently identify the author of a text. Read the passage carefully - your performance is dramatically improved with more reasoning - and give the author’s name. Do not search - the question is whether you can identify it without looking it up.
I’ve become inordinately fond of World War II era movies - most of them made quite intentionally as propaganda - that depict the behavior of ordinary people in the face of a Nazi invasion of their homelands.
My favorite of these movies is To Be Or Not To Be, featuring a Polish acting troupe. Its protagonists are not, particularly, morally good people; nor is the film a story about their moral growth. They are bumbling and self-absorbed; they cheat on their husbands; they’re petty dumbasses. And then the Nazis invade and a Polish resistance fighter requires their assistance and they all, to the last, put themselves at risk and carry out a series of gambits with fairly extraordinary stakes to kill Nazis and save the Polish resistance and themselves.
At which point they go back to being petty, self-absorbed dumbasses who cheat on their husbands. It is not a story in which anyone is redeemed through the fight against the Nazis, but a story about how they did not need to be; to fight the Nazis is presumed not to require extraordinary virtue but just the ordinary virtue which we would all find lying around if we were pressed. If it were made today, I am convinced, it would feature several moments in which the characters grappled with the horrors of the Nazi conquest of Warsaw and voiced their terror about the risks they were exposed to, where they quavered about whether they had it in themselves to move forward. But there is none of that. When these ordinary venal selfish slightly silly people find themselves called upon to defend their country and maybe die for it, they do it at once and with aplomb; they are unchanged by it because they were always the sort of person who would do it.
This one required a slightly heftier prompt to get over Claude’s instinct to refuse to identify a student applying to college. It also could have been reasoning from the fact that I wrote about doing a policy debate. But still!
And I know, I know, I can’t drop a tidbit like this without allowing you all a look at the college application essay, so here you go:
“We’ll take prep,” I say without looking up, and somewhere in the room a timer beeps.
My eyes are flickering across the eight pieces of paper laid out in front of me, one hand leafing through a stack of papers while the other scribbles furiously in a shorthand only I understand.
“Need anything?” whispers my debate partner. “No,” I snap back, with a terseness that anyone else would misinterpret as annoyance. I simply don’t have any brain-space left for conversation.
It’s the first affirmative rebuttal, the hardest speech in each debate round. The affirmative has five minutes to respond to the arguments the negative constructed in thirteen. There is no time for pauses or digressions – the only acceptable speaking speed is “as fast as humanly possible”.
I love it. Most people, I believe, are brilliant; the challenge is converting the chaotic genius in our heads into the language everyone else speaks. Debate taught me how to make connections between fields as diverse as economics and philosophy, science and politics; more importantly, it has taught me how to explain those connections, using words as a map and as a bridge. Debate has taught me what it means to construct an argument. I have learned to identify weaknesses in my own thinking and in others, to constantly challenge my own assumptions, to give even crazy-sounding ideas the serious consideration they deserve.
That’s it. Out of all of the college application essays written in history, the AIs said that one is obviously mine.