I've written my whole life story, or at least the parts I'm willing to share, and put it into Claude. After that it helped me much better with all kinds of things. It took me 2 days to write without formatting, pretty much how I write all my HN comments (but then 2 days straight: eat, sleep, write).
I've also exported all my notes, but it's too big for the context. That's why I wrote my life story.
From a practical standpoint I think the focus is on context management. Obsidian can help with this (I haven't used it, so I don't know the details). For code, it means doing things like static and dynamic analysis to see which functions call what, building a topology of function calls, and sending that as context; then Claude Code can more easily know what to edit, and it doesn't need to read all the code.
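To make the static-analysis half concrete, here is a minimal sketch in Python using the standard-library ast module; `call_graph` and the `src` directory are just illustrative names, and a real tool would also handle methods, imports, and dynamic calls:

```python
# Minimal sketch: build a static call graph of a Python codebase with ast,
# to use as compact context for an LLM instead of pasting all the code.
import ast
from pathlib import Path

def call_graph(root: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = {}
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                caller = f"{path.stem}.{node.name}"
                # Collect simple `foo(...)` calls made inside this function.
                callees = {
                    c.func.id
                    for c in ast.walk(node)
                    if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
                }
                graph.setdefault(caller, set()).update(callees)
    return graph

if __name__ == "__main__":
    for caller, callees in call_graph("src").items():
        print(caller, "->", ", ".join(sorted(callees)) or "(no direct calls)")
```

The resulting caller-to-callee listing is usually small enough to paste into the prompt, so the model can decide which files it actually needs to read.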
Still, all credit to him for creating that asset in the first place.
I'd like to be able to point a model at a news story and have it follow every fact and claim back to an origin (or lack of one). I'm not sure when they will be able to do that; they aren't up to the task yet. Reading the news would be so much different if you could separate the 'we report this happened' from the 'we report that someone else reported this happened'.
I have a written novel draft and something like a million words of draft fiction but have struggled with how to get meaningful analytics from it.
I would love to try this out but don’t feel comfortable sharing all my personal notes with a third party.
Well for most humans that's the more super of the powers too ;)
I opened Claude Code in the repo and asked it to tell me about myself based on my writing.
Claude's answer overestimated my technical skills (I take notes on stuff I don't know, not on things I know, so it assumed that I had deep expertise in things I'm currently learning, and ignored areas where I do have a fair amount of experience), but the personal side really resonated with me.
And therefore it's impossible to test the accuracy if it's consuming your own data. AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data.
In the author's example:
> "What patterns emerge from my last 50 one-on-ones?" AI found that performance issues always preceded tool complaints by 2-3 weeks. I'd never connected those dots.
Maybe that's a pattern from 50 one-on-ones. Or maybe it's only in the first two and the last one.
I'd be wary of using AI to summarize like this and expecting accurate insights.
How are you guys dealing with this risk? I'm sure nobody on this site is naive to the potential harms of tech, but if you're able to articulate how you've decided the risk is worth the benefits to you, I'd love to hear it. I don't think I'm being too cynical by waiting either for local LLMs to get good or until I can afford expensive GPUs for current local LLMs, but maybe I should be time-discounting a bit harder?
I'm happy to elaborate on why I find it dangerous, too, if this is too vague. Just really would like to have a more nuanced opinion here.
I take notes for remembrance and relevance (what is interesting to me). But linking concepts is all my thinking. Doing whatever the article is prescribing is like sending someone else on a tourist trip to take pictures and then bragging that you visited the country, while knowing that some of the pictures are photoshopped.
The AI summary at the top was surprisingly good! Of course, the AI isn't doing anything original; instead, it created a summary of whatever written material is already out there. Which is exactly what I wanted.
Do you have more resources on that? I'd love to read about the methodology.
> And therefore it's impossible to test the accuracy if it's consuming your own data.
Isn't that only true if the result is hard to verify? If it's a result that's hard to produce but easy to verify, a class many problems fall into, you'd just need to look at the synthesized results.
If you ask it "given these arbitrary metrics, what is the best business plan for my company?", it'd be really hard to verify the result. It'd be hard to verify the result from anyone, for that matter, even specialists.
So I think it's less about expecting the LLM to do autonomous work and more about using LLMs to more efficiently help you search the latent space for interesting correlations, so that you and not the LLM come up with the insights.
Sure, but when do you get accurate results from an iterative process? It can happen at the beginning, or at the end when you're bored or have exhausted your powers of interrogation. Nevertheless, your own reasoning will tell you if the AI result is good, great, acceptable, or trash.
For example, you can ask the chat to summarize all 50 with names, dates, a 2-3 sentence summary, and 2-3 pull quotes each. That can be sufficient to jog your memory, and therefore to validate or invalidate the chat's conclusion.
That’s the tool, and its accuracy is still TBD. I for one am not ready to blindly trust our AI overlords, but darn if a talking dog isn’t worth my time if it can make an argument with me.
And rightfully so. I've been looking at local LLMs because of that and they are slowly getting there. They will not be as "smart" as the big models, but even a 30B model (which you can easily run on a modern Macbook!) can do some summarization.
I just hope software for this will start getting better, because at the moment there is a plethora of apps, none of which are easy to use or even work with a larger number of documents.
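As a rough sketch of the kind of glue code that's still missing from most of these apps, here is what local summarization can look like, assuming an Ollama server is running locally; the model name and the `notes` folder are placeholders:

```python
# Rough sketch: summarize local notes with a locally hosted model via
# Ollama's HTTP API (assumes `ollama serve` is running and a model is pulled).
import json
from pathlib import Path
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "qwen2.5:32b"  # placeholder; any ~30B model you have pulled locally

def summarize(text: str) -> str:
    payload = json.dumps({
        "model": MODEL,
        "prompt": f"Summarize the following note in 3 bullet points:\n\n{text}",
        "stream": False,
    }).encode("utf-8")
    req = Request(OLLAMA_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]

for note in Path("notes").glob("*.md"):  # placeholder notes folder
    print(note.name, "->", summarize(note.read_text(encoding="utf-8"))[:200])
```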
If it were not fun for me, I would not have bought 3 GPUs just to run better local LLMs. The actual time, effort, and money spent on my local setup, compared to the value I get, do not justify it at all. For 99% of the things I do I could have just used an API and paid like $17 in total, though it would not have been as fun. For the other 1% I could have just rented a machine in the cloud and run LLMs there.
If you don't have your private crypto keys in your notes worth millions, but still worry about your privacy, I'd recommend just renting a machine/GPU in a smaller cloud provider (not the big 3 or 5) and do these kind of things there.
So yeah, I definitely add to the "AI generated" text out there, but I read over all the texts, and usually they don't get sent out. Ultimately, it's still a lot quicker to do it this way.
For career planning, so far it hasn't beaten my own insights, but it came close. For example, it mentioned that I should actually be a developer advocate instead of a software engineer. Two or three years ago I came to that same thought. I ultimately rejected the idea because of who I am, but it is a good one to think about.
As I see it now, I think the best job for me would be tech consultant, or as I'd also like to call it: a data analyst who spots problems and then uses his software engineering or teaching skills to solve them. I don't think that job has a good catch-all title, as it is a pretty generalist job. I'm currently at a company that allows me to do this, but the pay is quite low, so I'm looking for a tech company where I could do something similar. Maybe a product manager role? It really depends on the company culture.
What I also noticed it did better: it doesn't reduce me to data engineering anymore. It understands that I aspire to learn everything and anything I can get my hands on. It's my mode of living and Claude understands that.
So nothing too spectacular yet, but it'll come. It requires more prompt/context engineering and fine-tuning of certain things. I haven't gotten around to that yet.
This does mean that, useful as e.g. Claude Code is, for any business with NDA-type obligations I don't think I could recommend it over a locally hosted model, even though the machine needed to run a decent local model might cost €10k (with current price increases due to demand exceeding supply), even though that machine is still slower than the hardware behind the hosted models, and even though the rapid rate of improvement means a 3-month delay between SOTA in open weights and private weights is enough to matter*.
But until then? If I'm vibe coding a video game I'd give away for free anyway, or copy-editing a blog post that's public anyway, or using it to help with some short stories that I'd never be able to charge money for, or uploading pictures of the plants in my garden right by the public road… that's fine.
* When the music (money for training) stops, it could be just about any provider whose model is best; whatever that is, it is likely to get distilled down fairly cheaply, and/or some 3-month-old open-weights model is likely to get fine-tuned for each task fairly cheaply. Independently of this, without the hyperscalers the supply chains may shift back from DCs to PCs and make local models much more affordable.
That's fortunate, as uploading them to an LLM would be you leaking them.
This is specific, but if you start replying to LLM summaries of emails, instead of reading and responding to the content of the email itself, you are quickly going to become a burden socially.
The people you are responding to __will__ be able to tell, and will dislike you for your lack of consideration.
In general for chat platforms you're right, though: uploading/copy-pasting long documents and asking the LLM to find not one but multiple needles in a haystack tends to give you really poor results. You need a workflow/process to get accuracy for those sorts of tasks.
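One common shape for such a workflow is map-reduce over chunks rather than one giant prompt. A minimal sketch, where `ask_llm` is a hypothetical helper standing in for whatever model call you actually use:

```python
# Sketch: split a long document into chunks, query each chunk separately,
# then merge the per-chunk findings. `ask_llm` is a placeholder helper.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

def find_needles(document: str, question: str, chunk_chars: int = 8000) -> str:
    chunks = [document[i:i + chunk_chars]
              for i in range(0, len(document), chunk_chars)]
    # Map step: ask the question against each chunk in isolation.
    partials = [
        ask_llm("Answer only from this excerpt; say NOTHING FOUND if absent.\n"
                f"Question: {question}\n\nExcerpt:\n{chunk}")
        for chunk in chunks
    ]
    hits = [p for p in partials if "NOTHING FOUND" not in p]
    # Reduce step: merge the partial findings into one answer.
    return ask_llm(f"Question: {question}\n\nMerge these partial findings:\n"
                   + "\n---\n".join(hits))
```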
This isn't strictly a case against AI, just a case that we have a contradiction on the definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being fully unable to defend or counterpoint what we just read.
I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". That is a lack of assumed background knowledge, which makes it hard to verify a summary on your own.
I looked up a medical term that is frequently misused (e.g. "retarded") and asked Gemini to compare it with similar conditions.
Because I have enough of a background in the subject matter, I could tell what it had constructed by mixing the many incorrect references with the far fewer correct references in the training data.
I asked it for sources, and it failed to provide anything useful. But if I'm going to be looking at sources anyway, I would be MUCH better off searching myself and only reading the sources that might actually be useful.
I was sitting with a medical professional at the time (who is not also a programmer), and he completely swallowed what Gemini was feeding him. He commented that he appreciates that these summaries let him know when he is not up to date with the latest advances, and that he learnt a lot from the response.
As an aside, I am not sure I appreciate that Google's profile would now associate me with that particular condition.
Scary!
I'm really glad you are getting some personal growth out of these tools, but I hesitate to give Claude as much credit as you do. And I'm really cautious about saying Claude "understands", because that word has many meanings and it isn't clear which ones apply here.
What I'm hearing is that you use it like a kind of rubber-duck debugger. Except this is a special rubber duck because it can replay/rephrase what you said.
And after that? What's next?
> instant and total recall of our thoughts/notes/experiences
The closest is vector search, RAG, etc., but even that isn't total recall, because current SOTA will still misclassify stuff.
Throwing everything in a pile and hoping an LLM will sort it all out for you is, at present, even more limited.
They're good, sure, but you're overstating them.
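To make the "misclassify" point concrete, here is a minimal sketch of the vector-search approach, assuming the sentence-transformers package is installed; the notes are made up. Retrieval ranks by similarity, it doesn't guarantee the relevant note surfaces:

```python
# Minimal sketch: embed notes, embed a query, rank by cosine similarity.
# Retrieval is approximate, not total recall; a relevant note phrased very
# differently can still rank below an irrelevant one.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed installed

model = SentenceTransformer("all-MiniLM-L6-v2")
notes = [
    "2023-03-02: rethought technical debt as information about system evolution",
    "2023-05-14: tool complaints in the 1:1, two weeks after the perf review",
    "grocery list: eggs, rye bread, coffee",
]
note_vecs = model.encode(notes, normalize_embeddings=True)
query_vec = model.encode(["when did my view of tech debt change?"],
                         normalize_embeddings=True)

scores = note_vecs @ query_vec.T  # cosine similarity (vectors are normalized)
for score, note in sorted(zip(scores.ravel(), notes), reverse=True):
    print(f"{score:.2f}  {note}")
```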
Text is a very linear medium. It's just the spark while our wealth of experiences is the fuel. No amount of wrangling the word "pain" will compare to actually experiencing it.
You'd be better served by just having a spaced repetition system for the notes you've taken. That way, you'll be reminded of the whole experience from when you took the note, instead of reading words that were never written by anyone who has lived.
So someone who wants a war or wants Tweedledum to get more votes than Tweedledee has incentives to poison the well and disseminate fake content that makes it into the training set. Then there's a whole department of "safety" that has to manually untrain it to not be politically incorrect, racist etc. Because the whole thesis is don't think for yourself, let the AI think for you.
The 3-things-in-5-minutes case is even worse - it's like following Google Maps everywhere without ever thinking about how to get from point A to point B - the odds of retaining anything at all from that are near zero.
And since it summarizes the original content, it’s an even bigger issue - we never even have contact with the thing we’re putatively learning from, so it’s even harder to tell bullshit from reality.
It’s like we never even drove the directions Google Maps was giving us.
We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s
If you ask Google about news, world history, pop culture, current events, places of interest, etc., it will lie to you frequently and confidently. In these cases, the "low quality summary" is very often a completely idiotic and inane fabrication.
It's true. I previously had no idea of the proper number of rocks to eat, but thanks to a notorious summary (https://www.bbc.com/news/articles/cd11gzejgz4o) I have all the rock-eating knowledge I need.
Most of "AI"s superpower is tricking monkeys into anthropomorphizing it. It's just a giant, complicated, expensive, environmentally destructive math computer with no capability to create novel thought. If it did have one superpower it's gaslighting and manipulation.
October 30, 2025
Everyone's using AI wrong. Including me, until last month.
We ask AI to write emails, generate reports, create content. But that's like using a supercomputer as a typewriter. The real breakthrough happened when I flipped my entire approach.
AI's superpower isn't creation. It's consumption.
Here's how most people use AI: write this email, generate this report, create this content. Makes sense. These tasks save time. But they're thinking too small.
My Obsidian vault contains:
→ 3 years of daily engineering notes
→ 500+ meeting reflections
→ Thousands of fleeting observations about building software
→ Every book highlight and conference insight I've captured
No human could read all of this in a lifetime. AI consumes it in seconds.
Last month I connected my Obsidian vault to AI. The questions changed completely:
Instead of "Write me something new" I ask "What have I already discovered?"
Real examples from this week:
"What patterns emerge from my last 50 one-on-ones?" AI found that performance issues always preceded tool complaints by 2-3 weeks. I'd never connected those dots.
"How has my thinking about technical debt evolved?" Turns out I went from seeing it as "things to fix" to "information about system evolution" around March 2023. Forgotten paradigm shift.
"Find connections between Buffer's API design and my carpeta.app architecture" Surfaced 12 design decisions I'm unconsciously repeating. Some good. Some I need to rethink.
Every meeting, every shower thought, every debugging session teaches you something. But that knowledge is worthless if you can't retrieve it.
Traditional search fails because you need to remember exact words. Your brain fails because it wasn't designed to store everything.
AI changes the retrieval game:
→ Query by concept, not keywords
→ Find patterns across years, not just documents
→ Connect ideas that were separated by time and context
The constraint was never writing. Humans are already good at creating when they have the right inputs.
The constraint was always consumption. Reading everything. Remembering everything. Connecting everything.
My setup is deceptively simple:
But the magic isn't in the tools. It's in the mindset shift.
Stop thinking of AI as a creator. Start thinking of it as the ultimate reader of your experience.
Every note becomes a future insight. Every reflection becomes searchable wisdom. Every random observation might be the missing piece for tomorrow's problem.
After two months of this approach:
→ I solve problems faster by finding similar past situations
→ I make better decisions by accessing forgotten context
→ I see patterns that were invisible when scattered across time
Your experience is your competitive advantage. But only if you can access it.
Most people are sitting on goldmines of insight, locked away in notebooks, random files, and fading memories. AI turns that locked vault into a queryable database of your own expertise.
We're still thinking about AI like it's 2023. Writing assistants. Code generators. Content creators.
The real revolution is AI as the reader of everything you've ever thought.
And that changes everything about how we should capture knowledge today.
Start documenting. Not for others. For your future self and the AI that will help you remember what you've forgotten you know.
This piece originally appeared in my weekly newsletter. Subscribe for insights on thinking differently about work, technology, and what's actually possible.