Now it is different in a way — I don’t have time to use them.
Pretty much 100% of the projects I've done with vibe coding/engineering are in the second category: stuff I need that either doesn't exist, or exists but is either horribly complex to configure or is a mess of 420 features when I just need one of them.
It's a lot easier for me to implement that one specific feature just for myself than to stay vigilant about an existing app's scope creep as it progresses toward the eventual ability to read email[0] =).
No one could have built this software but me, because it's worth nothing to others. And I couldn't build it, because it takes too long. But when I'm using an agent to code, the limited resource is my attention, which actually does fine so long as every free brain cycle is on a task. So these personal things are great to throw into my tab loop to occupy a free slot.
These have been wonderful times.
It doesn't help that the marketing leans heavily on anthropomorphizing LLMs either, IMHO.
Sort of writing a narrative on top live.
Unfortunately, local models are still a bit slow and weak, but it was interesting to see what it came up with nonetheless.
And when you inevitably get bored with it, well, you've not done much anyway. You can always get back up to speed in a month and have the LLM remind you of what it was doing.
It's actually really cool to have it work on some internal tooling and stuff while I work on my primary projects.
I'm surprised how easy it is to set up and that it can handle modestly complex planning and development flows.
And with a Claude or GPT $20 subscription, I can do other fun things too, like using it for real tasks (emails) or image generation.
A Mac Studio or an AMD 395 is neither of those. And it's not just a basic setup either: I need to buy it, configure it, and put it somewhere. That alone is a grand or more, plus a whole weekend.
I have an ML setup with two 4090s and 128 GB of RAM; it gets warm when I use them for fine-tuning or batch processes.
I do not run them for coding. It's a lot easier and nicer to play around with better models for just $20.
Despite coding from a young age I always thought that I cared more about the outcome than the code. Turns out that’s not entirely the case.
I'm very interested in Local LLMs but the cheapest Mac Studio right now is more expensive than 8 years of a Claude Code Pro subscription, and incomparably slower/less capable. If I get bored with it, I will have a piece of unused hardware and a couple grand less in my bank account.
Also Anthropic is by far the best, open (local) models are glorified autocomplete at best unless you casually have 20k€ worth of hardware at home.
Why does it bother me so? I have no idea.
Very usable locally, assuming you set up your local tooling correctly and you are an actual programmer who can generally help drive this stuff correctly, not just a vibe coder.
Anyhoo, I'm working on making it pretty (it works!!) before integrating it into my opinionated GraphQL server[1].
There really is no excuse for NOT being the change you wish to see in the world anymore.
---
i doubt anyone is nouning "agentic" of their own accord (yet)
You can pay more of course, buy them a computer, an internet connection, books, courses, even an office, but it isn't required.
Just pay 60 per project every 4 weeks and ignore it. If interesting progress happens, it's fun to look at.
I got it working well enough to display what I wanted in text and ASCII, but I could never get the interface good enough to want to use it daily, and certainly couldn't get the graphical interface working. I threw it at Claude Code, told it what I wanted the graphical interface to look like, and let it run.
It produced an app that was exactly what I wanted, and even found a bug in the date parser that I hadn't noticed. I now have it running in the corner of my screen at all times.
The next app I'm going to build is an iPhone app that turns off all my morning alarms when the kids don't have school. Something I've wanted forever, but never could build because I know nothing about making iPhone apps and don't have time to learn (because of the aforementioned children).
Claude Code is brilliant for personal apps. The code quality doesn't really matter, so you can just take what it gives you and use it.
Maybe even shamelessly post it as a Show HN along with the other 99% of worthless slop submissions there.
This trips me up occasionally when I'm translating things into English. To me using "he" is just the default pronoun when something is of an undefined gender, but when I referred to an indefinite gender player character in a gacha game as a "he", quite a few people got mad! Even though in my head I was never trying to imply one way or the other.
My partner on the other hand has an M3 Max 64GB, which I've had way more success with. Setting up opencode and doing a tiny spec-driven Rust project and watching it kiiinda work was extraordinarily exciting!
There's just no pressure to handle edge cases or write docs for people who'll never use it. Just solve exactly your problem and move on.
I've been programming for 30+ years now, but I've always been fine with command line applications. Only recently did I start getting into Qt to add a UI and turn my stuff into a real desktop application. It's been a really steep learning curve, but I'm more or less over it now.
Anyway, I posted a screenshot of my application on LinkedIn and mentioned it would be free and open source. I got HUNDREDS of comments from "LinkedIn-type people", all big-name engineers who wouldn't HIRE me for anything, who either made comments like "looking forward to integrating this into our workflows" or "not the first time someone tried to do this..."
Either way, instead of feeling motivated, I got the worst feeling that I'm doing all this work and people are either going to just take advantage of it and get the credit for "finding" it, or criticize it simply because it's not for them.
It bummed me out so bad that I stopped work on it entirely for like a month.
Anyway I finally came to look at it the way you mentioned. What I LIKED was the process of learning Qt and seeing my old programs come alive.
So instead, it's my "project car" now. I build it up and tear it down all the time. Totally redesign the data models just to see what advantages different designs can give me. Try making my own graphical views. Try implementing language translations.
It's been "finished" for a while now but I probably have five completely different-under-the-hood versions of it and THAT is what has been fun.
I use it constantly all day at work and I never mentioned it on LinkedIn again lol
I wonder whether there could be an AI autocomplete specifically for the task of helping you with the markdown file (and collecting your thoughts and writing prompts in general). Not an agent since that wouldn't really save time, but actually an autocomplete.
Maybe a small specially-trained local model running at hyper fast speeds and which already has your project context baked in with prefix caching (with some other larger model having summarized the context beforehand to feed to the small model), so as you type this file it automatically uses the same prompt prefix over and over to suggest autocomplete which actually makes sense.
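The prefix-caching half of that idea can be sketched with a toy completer: pay for the expensive "prefill" over the project context once, then reuse it on every keystroke. Everything here, including the class name and the hash standing in for model state, is illustrative, not a real inference API:

```python
import hashlib


class PrefixCachedCompleter:
    """Toy model of prefix caching: the project context is processed
    once, and every autocomplete request reuses that cached state."""

    def __init__(self, project_context: str):
        self._context = project_context
        self._cached_state: str | None = None
        self.prefill_calls = 0  # counts how often the expensive pass ran

    def _prefill(self, text: str) -> str:
        # Stand-in for a real model's forward pass over the prompt
        # prefix; a hash plays the role of the KV cache.
        self.prefill_calls += 1
        return hashlib.sha256(text.encode()).hexdigest()

    def complete(self, typed_so_far: str) -> str:
        if self._cached_state is None:
            self._cached_state = self._prefill(self._context)  # once only
        # A real model would continue decoding from the cached state plus
        # the newly typed suffix; here we return a placeholder suggestion.
        return f"{typed_so_far} <suggestion:{self._cached_state[:8]}>"
```

The point of the sketch is the invariant: no matter how many times `complete` is called while you type, the context is only processed once.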
I sure hope companies double down on leetcode nonsense, because I really don’t have any capacity to compete with this level of ADHD.
Agreed.
The clipboard manager I had been using on my Macs for many years started flaking out after an OS update. The similar apps in the App Store didn’t seem to have the functionality I was looking for. So inspired by a Simon Willison blog post [1] about vibe coding SwiftUI apps, I had Claude Code create one for me. It took a few iterations to get it working, but it is now living in the menu bar of my Mac, doing everything I wanted and more.
Particularly enlightening to me was the result of my asking CC for suggestions for additional features. It gave me a long list of ideas I hadn’t considered, I chose the ones I wanted, and it implemented them.
Two days ago, I decided I wanted a dedicated markdown editor for my own use—something like the new markdown editing component in LibreOffice [2] but smaller and lighter. I asked the new GPT 5.5 to prepare an outline of such a program, and I had CC implement it. After two vibe coding sessions, I now have a lightweight native Mac app that does nearly everything I want: open and create markdown files, edit them in a word-processing-like environment, and save them with canonical markdown formatting. It doesn’t handle markdown tables yet; I’ll try to get CC to implement that feature later today.
[1] https://simonwillison.net/2026/Mar/27/vibe-coding-swiftui/
Create a shortcut that turns off all alarms. You can have it read your calendar or whatever as a signal to determine whether alarms should be on or off for a certain day/time, and have it run on a regular schedule.
I did pay the $10 for the following domains, but I'm OK with that so I can share some of the fun things that come out of the agent.
grandcheaten.com - a save game editor and guide for Jagged Alliance 3
thedailycheat.com - a save game editor for newstower
Note: I initially drafted this before my last post on how Claude Code is getting worse. I'm putting it out now so I can reference it in a future post on OpenCode. As you can imagine my opinion on Claude Code has shifted since I wrote this.
Long ago I attempted a personal project, but never finished due to life being busy. [1] Sort of like the Japanese word Tsundoku, for the pile of books you intend to eventually read one day. We all have these projects and they are good candidates for testing out AI coding assistance. After all, they were never going to get done anyway.
The POC I put together was a shim between YouTube Music and the OpenSubsonic API. Explaining OpenSubsonic could be its own article, but for our purposes it's an API contract for nicely decoupling music streaming clients and servers. You can pick your own options for both. In my case I like Navidrome for the server, Feishin for desktop, and, as I mentioned in my post on GrapheneOS, Symfonium for Android.
Anyways, the shim made YouTube Music conform to the API so I could add it to any of my clients. Under the hood I used ytmusicapi for metadata lookup and programmatically called yt-dlp to stream the music. Getting basic streaming working was pretty simple. However, there was a long tail of implementing all the endpoints in a conformant way. Then, as always, there were new shiny projects that stole my attention away. Like that embedded Rust location project I promise I'll finish at some point. Maybe.
Luckily, nothing was really novel in that streaming project, and there is a clear spec to implement which is perfect for assisted coding. So a month and a half ago I thought I would test Claude Code with Opus 4.6 and see how it did implementing the project from scratch. After all, they gave me a free $50 in credit, so I might as well.
Since I had already written a proof of concept by hand, I had my own opinions about the implementation and laying all of that out beforehand constrained the tool in a nice way.
I did the following:

- Created a uv project with fastapi, pydantic, ytmusicapi, and yt-dlp as dependencies.
- Changed main.py to the example FastAPI main file.
- Dropped the OpenAPI spec for OpenSubsonic in the folder.
- Added a brief description in a readme file: "This project acts as a shim, exposing YouTube music as an opensubsonic client. It uses fastapi for its server with pydantic, ytmusicapi for metadata and yt-dlp for streaming. opensubsonic docs are available at: https://example.docsy.dev/docs/reference/ The openapi spec is in openapi.json."
- Added an empty TODO file.
- Generated a CLAUDE.md file using /init.
I also often add a section like this to the CLAUDE.md file:
## Conventions
- Methods should have type annotations for args and returns as well as docstrings.
- Use Pydantic for data modeling. Use modern Pydantic V2 conventions.
- Docstrings should use the Google style format with Args and Returns sections.
- Write unit tests with modern pytest style, eg top level methods using `assert` and fixtures.
That's mostly based on past experience for what I have to repeatedly ask Claude Code not to do.
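Taken together, those conventions come out looking something like this. The Song model, its fields, and the helper are made up for illustration; only the style is the point:

```python
from pydantic import BaseModel


class Song(BaseModel):
    """A single track returned by a search.

    Fields are illustrative; the real OpenSubsonic schema has many more.
    """

    id: str
    title: str
    duration_secs: int = 0


def make_song(video_id: str, title: str) -> Song:
    """Build a Song from YouTube Music metadata.

    Args:
        video_id: YouTube video identifier.
        title: Track title.

    Returns:
        A populated Song model.
    """
    return Song(id=video_id, title=title)


# Modern pytest style: a plain top-level function using bare asserts.
def test_make_song():
    song = make_song("abc123", "Some Track")
    assert song.id == "abc123"
    assert song.model_dump()["duration_secs"] == 0
```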
I've bundled up this starting point into a git repository in case anyone else wants to try the experiment.
With that setup done, I let Claude kick things off. The workflow I typically use is: give a scoped prompt, review the result, clear context, then ask for a verification pass. The first prompt I used was:
Have a look at the openapi.json file. This is a spec for the opensubsonic api. Implement an async fastapi server that stubs out all of the methods. There are both older xml endpoints and newer style json endpoints. You only need to handle the newer json endpoints.
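Stubbing from the spec amounts to walking its paths section; a toy sketch of that first pass (the inline spec fragment is made up — the real openapi.json has ~80 paths):

```python
import json

# A made-up fragment in the shape of an OpenAPI spec.
spec = json.loads("""
{
  "paths": {
    "/rest/ping": {"get": {"operationId": "ping"}},
    "/rest/search3": {"get": {"operationId": "search3"}}
  }
}
""")


def list_endpoints(spec: dict) -> list[str]:
    """Collect "METHOD /path" strings for every operation to stub out."""
    return [
        f"{method.upper()} {path}"
        for path, ops in spec["paths"].items()
        for method in ops
    ]


print(list_endpoints(spec))  # ['GET /rest/ping', 'GET /rest/search3']
```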
For this kind of change I like to clear context after implementing and then ask a follow up question:
I implemented stubbed versions of all the methods specified in openapi.json. Double-check they are correct.
Even with a spec, Claude Code makes mistakes the first time, but then will catch them (mostly) the second time through.
Also, after implementing larger changes, I like to re-run /init to update the CLAUDE.md file to cover the new pieces.
The next major prompt was:
The methods for all endpoints are stubbed out now. I want to connect a subsonic client, search for a song, and stream it to the client. What is the minimum amount of functionality needed to implement that? Use ytmusicapi for searching YouTube music and yt-dlp for streaming.
I got an implementation that looked reasonable pretty quickly, but it fell over when trying to actually connect with Feishin. At that point I iterated by testing the client and handing the server request logs to Claude Code. Even with a spec, there are details that are not spelled out clearly, like how endpoints may have a .view suffix that needs to be stripped. Every time there was an error, I generated new unit tests to cover it.
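The .view quirk, for instance, is a one-line normalization once you know about it (the function name is my own; it's not from the project):

```python
def normalize_endpoint(path: str) -> str:
    """Strip the legacy .view suffix some subsonic clients append,
    so /rest/ping.view and /rest/ping hit the same handler."""
    return path.removesuffix(".view")
```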
I was shocked to hear the audio streaming through Feishin after only a couple of iterations. The main issues involved stubbed endpoints returning nothing; they mostly had to be updated to return empty but correctly structured responses.
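"Empty but correctly structured" here means the standard subsonic-response envelope with empty collections inside. A sketch (the helper name is mine, and the version string is illustrative):

```python
def subsonic_ok(**payload) -> dict:
    """Wrap a payload in the subsonic-response envelope every endpoint
    must return, even when it has nothing to report."""
    return {
        "subsonic-response": {
            "status": "ok",
            "version": "1.16.1",
            **payload,
        }
    }


# A stubbed playlists endpoint returns an empty-but-valid body instead
# of nothing, which is what unblocked the client:
empty_playlists = subsonic_ok(playlists={"playlist": []})
```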
Just getting an MVP running is the easy part, though. It's not that far beyond what I implemented in my POC.
The rest of the work was the less interesting drudgery needed to make the project actually usable. From the docs, OpenSubsonic has ~80 endpoints spread over 15 different categories.
For the MVP use case I only had to support a handful of endpoints, plus a call through asyncio.to_thread to extract the URL for the "bestaudio" format. Supporting the full functionality of a subsonic client meant working through the rest of the endpoint list.
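The asyncio.to_thread piece looks roughly like this. The function names and the injectable extractor argument are my own for the sketch; the yt-dlp import is deferred into the blocking function so the async wrapper can be exercised on its own:

```python
import asyncio


def _extract_stream_url(video_id: str) -> str:
    """Blocking call: ask yt-dlp for the direct "bestaudio" URL."""
    from yt_dlp import YoutubeDL  # deferred: only needed at call time

    opts = {"format": "bestaudio", "quiet": True}
    with YoutubeDL(opts) as ydl:
        info = ydl.extract_info(
            f"https://music.youtube.com/watch?v={video_id}", download=False
        )
    return info["url"]


async def resolve(video_id: str, extractor=_extract_stream_url) -> str:
    # Run the blocking extractor in a worker thread so the FastAPI
    # event loop keeps serving other requests in the meantime.
    return await asyncio.to_thread(extractor, video_id)
```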
I knew all these things had to be done to make my own POC more usable, and I could have done them, but never did. At the same time, since I never planned to release anything I absolutely skipped the hard bits around authentication.
Altogether, I was able to get a working service that I could connect to from a subsonic client in a short evening. In the end I dubbed the project "Sub-standard".
I don't want to sound like an AI coding assist booster. I still have fears around deskilling from relying on these tools too much. That's why I still bang my head against the wall trying to learn Rust.
In my mind there are different buckets for personal projects: one is things I do to learn and grow, the other is things I really wish existed. [2] This kind of project falls into the second bucket. Using an AI coding assistant to reify those projects is a form of wish fulfillment: I never would have gotten to it, but now I can have the project. One less metaphorical book sitting unread on the bookshelf.
In the end I think the important thing is not whether you are doing projects in bucket 2, but whether you are also still doing the stretch projects in bucket 1.
All the personal tools described in this thread are duct tape and bubblegum under the hood and nowhere near productionizable. That's what Claude Code makes for you.
The whole point is that for personal tools, code quality never really mattered since it's never going to be exposed to the public or be iterated upon by a revolving door team of devs like real software products. These are all highly overfitted tools that shave off like 15 seconds of time in the day for some particular person.
It's almost exactly like having a 3D printer for software, with exactly the kind of quality that a present-day 3D printer gives you.
(But in seriousness, I hadn't considered using shortcuts. It's not clear it's extensible enough to do exactly what I want, but I'll look into it)
It's not well-known, but Itch's offline Steam equivalent (<https://itch.io/app>) is also open source.
Why do you think that? I do regular ol' coding at my day job and have been vibe coding some side projects. They both require using my brain and both require my input for something to be created. They are different, though.
> Instead all you're doing is creating more cheap mediocre throwaway crap just because you can.
It probably is these things but since I'm just building stuff for myself, it hardly matters.
I've written a lot of code, and a lot of that has been doing roughly the same thing. It's not a mental challenge; it's a chore. Sometimes it is really gratifying to code and figure stuff out. Oftentimes it is not. So when it comes to building something in my free time, I'd prefer to avoid that sort of mental friction and those banal tasks and just start working on the actual problem. More than that, I'm building tools for myself to make my life easier so I can spend it on something else.
I ride an electric bike with pedal assist. Does that mean I'm not really bicycling? Some might say yes and that it defeats the point. To me it ensures that I pick the bicycle more because it reduces friction to do so. I know that if I encounter a hill that the pedal assist will help me up it and thus I use it more and the net benefit outweighs the downsides. I think it's the same thing here.
I don't take pride in the work that an LLM does for me but I will happily benefit from it. It's a tool.
If you really want to engage an LLM to help, point it towards Cherri (https://github.com/electrikmilk/cherri) for the implementation.
And just like with bikes, people who take pride in doing things the hard way can continue to do so. And they shouldn't belittle people who choose to use assistance.
If you like creating, buying software from Anthropic is boring as hell.
An LLM is not a runtime. It might be something akin to a non-deterministic compiler that converts your MD to code.
It leaves more room for skill expression when you're making architectural decisions, defining scope, and designing the application.