Latency is not a real issue with SSR apps, there are a bunch of solutions to place servers and data closer to your users (within a few tens of ms). Plus you can prefetch and cache content very easily without using service workers. That’s not the reason Jira or GitHub feel slow; in fact GitHub was quite fast a few years ago when all it did was render pages from the server.
In case anyone thinks this idea is serious, my strong liking for vanilla HTMX came from the realizations that (i) state management can revert to the ur-web model and avoid all the complexity of browser / server state sync and (ii) I can use anything I damn well like on the server (I wrote https://harcstack.org to optimize for expediency).
[0] https://logankeenan.com/posts/a-rust-server-app-compiled-to-...
[1] https://logankeenan.com/posts/client-side-server-with-rust-a...
[2] https://github.com/richardanaya/wasm-service
It does not aim to remove JS from your code; it simply adds more features to HTML by default, like making any element able to trigger a web request.
When you write a real world app with HTMX, you inevitably end up writing some js, which is totally fine.
But man, a 10MB Go WASM download? That's a no-go. It's not only about downloading it but also executing it on a client's machine over and over again. But I guess you can handle those requests perfectly fine in a service worker using pure JavaScript.
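For scale, here is a sketch of what such a pure-JavaScript handler could look like — a few hundred bytes rather than megabytes. The route and markup are invented for illustration, and the guard at the bottom lets the same file load outside a worker:

```javascript
// Toy request handler: render a counter fragment entirely in the worker,
// with no server round trip and no multi-megabyte WASM payload.
let count = 0;

async function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname === "/count") {
    count += 1;
    return new Response(`<span id="count">${count}</span>`, {
      headers: { "Content-Type": "text/html" },
    });
  }
  return fetch(request); // anything else goes to the network
}

// Only register the fetch hook when actually running as a service worker.
if (typeof self !== "undefined" && "addEventListener" in self) {
  self.addEventListener("fetch", (e) => e.respondWith(handleRequest(e.request)));
}
```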
The general idea of HTMX is that your HTML will be rendered by the backend — à la Server Side Rendering.
To me this phrase makes no sense. What's the thought process behind this meaning of "render"? The only place HTML is "rendered" is in a browser (or in a user agent, if you prefer).

Instead of a WASM backend, I used react-dom/server to generate the HTML.
https://github.com/michaelcpuckett/listleap?tab=readme-ov-fi...
Java's reference implementation is written in C++, but Java is clearly "anti-C++" for any reasonable interpretation of the term. (Java historically replaced C++ as the most popular language, as far as I remember.)
More importantly, HTMX could have had native support without requiring an implementation in JavaScript.
https://hacks.mozilla.org/2018/01/making-webassembly-even-fa...
See also Server Side Rendering (SSR) which uses the term rendering in the same way.
It's another use of "render", this time relative to the server: converting non-HTML data (database tables, JSON, etc.) into HTML:
https://www.google.com/search?q=SSR+server+side+rendering
Many different perspectives of "rendering":
- SSR server-side rendering : server converting data to HTML
- CSR client-side rendering : e.g. client browser fetching and converting JSON/XML into dynamic HTML
- browser-engine rendering : converting HTML to operating system windowing GUI (i.e. "painting")
The author explicitly states that he likes to write Go and that’s why he picked it in this example, which in my opinion makes this article more interesting. The main benefit is that the 'local server' within the service worker mimics the 'real server,' which effectively means you only have to write the code once.
But I generally agree that a 10MB download on first load is not something that I’d be happy to serve to users, especially to those who are using their mobile network.
One of the many dictionary definitions of the word also appears to be to "give an interpretation or rendition of" something.
You could "render" it to HTML with pandoc, then serve the HTML from disk with a static web server.
This would be "build time" HTML: the HTML exists before the server ever reads it.
Then you could set up a CGI script, PHP script, or an application server that converts markdown to HTML, then sends it to the client. This would be server-side rendering.
Finally, you could send some HTML+JavaScript to the client that fetches the raw markdown, then generates HTML on the client. That would be client-side rendering.
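The client-side path can be sketched in a few lines. `mdToHtml` here is a deliberately toy converter (headings and paragraphs only), standing in for a real Markdown library:

```javascript
// Toy Markdown-to-HTML converter: handles only "# heading" and plain paragraphs.
function mdToHtml(md) {
  return md
    .trim()
    .split(/\n{2,}/)
    .map((block) =>
      block.startsWith("# ") ? `<h1>${block.slice(2)}</h1>` : `<p>${block}</p>`
    )
    .join("\n");
}

// Client-side rendering: fetch the raw markdown, convert it in the browser.
async function renderClientSide(url, targetEl) {
  const md = await (await fetch(url)).text();
  targetEl.innerHTML = mdToHtml(md);
}
```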
So you'll never get Go Wasm binary sizes down to something reasonable, alas.
I know it's "whataboutism" but I thought it was pretty funny.
So you should be able to achieve pretty much anything at that point. A nice side-effect of moving to fetch.
https://m.youtube.com/watch?v=9ZhmnfKD5PE
It is not anti-SPA, but pro-hypermedia for the right problems:
https://htmx.org/essays/when-to-use-hypermedia/
htmx is a front end library of peace
fixi.js is a more minimalist take on the same idea: https://github.com/bigskysoftware/fixi
agree that htmx users are weird
“After all, both htmx and hyperscript are built in JavaScript. We couldn’t have created these libraries without JavaScript, which, whatever else one might say, has the great virtue of being there.
And we even go so far as to recommend using JavaScript for front-end scripting needs in a hypermedia-driven application, so long as you script in a hypermedia-friendly way.
Further, we wouldn’t steer someone away from using JavaScript (or TypeScript) on the server side for a hypermedia-driven application, if that language is the best option for your team. As we said earlier, JavaScript now has multiple excellent server-side runtimes and many excellent server-side libraries available.”
Such a weird question. You could ask that about any library ever.
(Some languages in particular are remarkably inflexible regarding how they want you to use them in this context.)
So, seeing no real benefit, I ended up switching back to TS. I became depressed shortly afterwards, but that's probably unrelated ;)
Still, wasm game dev was a delightful experience in many respects and I would recommend it to anyone who's interested. ("Elimination of headache" is not necessarily an unambiguous good. Some headaches are genuinely worth it! Just depends on your taste and your goals.)
[0] My "favorite" bug was spending the last day of a game jam stuck on a bizarre compiler bug that would only manifest in the wasm version of the game... but I got it figured out in the end!
Why would I write React components myself when the JavaScript isn't really that complicated?
It is bizarre that ONLY HTMX gets these weird "DONT USE THAT ITS NOT POPULAR ENOUGH" criticisms.
XML and XSLT get these criticisms too, except for the XQuery and XPath components, because HTML fanatics need those to make their hybrid HTML/JS garbage apps work.
But really the ultimate goal for any good website engineer should be to offload as much logic and processing to the browser, not rewrite everything in JS just because you can.
HTMX is great. We use it as a middle ground for mildly interactive parts of the app. Places where jquery/vanilla would get annoying but going full React isn’t worth it. Admin interfaces in particular are a great fit – lots of CRUD, mildly interactive, very repetitive.
Adding `hx-get` to a button or div is way way quicker than writing all that boilerplate javascript yet again for the hundredth time.
Extra bonus: it encourages you to write small self-contained composable endpoints instead of massive kitchen-sink pages.
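For example (the endpoint and data shape here are hypothetical), each such endpoint owns exactly one fragment and nothing else:

```javascript
// One small, self-contained endpoint: renders only the fragment it owns.
// The product shape and route are invented for illustration.
function renderProductRow(product) {
  return `<tr id="product-${product.id}">
  <td>${product.name}</td>
  <td>${product.price.toFixed(2)}</td>
</tr>`;
}

// Paired with markup like <tr hx-get="/products/42/row" hx-swap="outerHTML">,
// the handler just looks up one product and returns one row of HTML.
async function productRowHandler(product) {
  return new Response(renderProductRow(product), {
    headers: { "Content-Type": "text/html" },
  });
}
```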
Why? This makes for a horrible user experience. Things like TicketMaster, and in recent years GitHub, slow my machine to a crawl sometimes. I much prefer mostly static content. This is a well-made website: https://www.compuserve.com/
Yes. Then imagine you have a massive legacy codebase and a control panel of something has a ton of data and buttons and inputs and all kinds of nonsense. Say you have a weight and dimensions of a package of a product... you'd like to make it so you can edit these in-place and when you do, a package volume (and/or volume sum of all packages) gets updated somewhere else on the page (along with some other info... I don't know, an estimate of product delivery cost based on volume, which delivery methods are now un/available etc.)
Like... you already have ways to calculate and show these in your server side logic. With HTMX you reuse these, add a sprinkle of hx-get and add some OOB updates and you're done. You can do the same with ajax, but not nearly as fast as with HTMX and much more annoyingly...
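A sketch of what such a response could look like (field names invented): the edited cell comes back as the main swap target, and the recalculated volume piggybacks on the same response via `hx-swap-oob`:

```javascript
// Server-side handler sketch for an in-place package edit.
// Returns the edited field plus an out-of-band update for the volume summary.
function renderPackageUpdate(pkg) {
  const volume = pkg.width * pkg.height * pkg.depth;
  return (
    // Main swap target: the cell the user just edited.
    `<td id="package-${pkg.id}-dims">${pkg.width}×${pkg.height}×${pkg.depth}</td>\n` +
    // Out-of-band swap: htmx replaces #total-volume wherever it sits on the page.
    `<div id="total-volume" hx-swap-oob="true">Volume: ${volume} cm³</div>`
  );
}
```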
https://github.com/bigskysoftware/htmx/blob/24e6d5d7c15e34c9...
We bind `fetch` in the `ctx` object to the standard `window.fetch`, but you can replace it with whatever you want (for example in the `htmx:before:request` event triggered immediately afterwards).
Pretty much everything is swappable using this technique.
The issue is "plain" websites adding dynamic stuff for bad reasons.
There is a common refrain on the internet that things have gotten worse and are continuing to get worse. There is a proliferation of horrible jumpy loading ads on every website, every search engine throws a crappy AI summary in front of your search result, every site/webapp seems to have gotten slower and slower. I cannot provide a solution for all of that, but I can point to a better paradigm for web site and web app design. That paradigm is local first.
Local first is a design principle for web apps where the UI and data are co-located and changes to the data are synced with the remote server. Local first apps feel snappy and highly performant because they do not require a network RTT between a user's action and rendering the result of that action. I recommend playing around with linear.app to experience what a first-class local first app feels like. I won't spend much time trying to convince you that most web apps are bad; if you are ignorant and happy, I don't want to ruin that bliss.
If you are familiar with Jira or Github issues you should be able to immediately tell how stark a difference a local first app can make. Jira is slow because, as far as I can tell, it is just slow: it loads a lot of data slowly, and if you click away and then go back you have to reload all of that same data again. Github is an SSR webapp, meaning that the HTML is generated on the server and then sent to you. This means any interaction usually requires a complete round trip between your browser and the server, which is usually very noticeable. Ironically, Github's slow SSR performs much better than Jira in my experience. They do different things, but gosh I hate using Jira. I can only hope that some day I'll be able to use Linear at work and it will be just as fast as it is today.
I will pause here and just clarify that almost any app architecture can end up being painfully slow if implemented poorly. I would strongly argue that most websites, webapps, etc. that we visit daily are implemented poorly. There are a variety of techniques that can be employed in all these different architectures (traditional SPA, SSR, etc.), but local first provides the most upside as an architecture when it comes to performance.
That was more serious than I intended it to be, so let's dive into some Meme Driven Development (MDD). Let's get into the main course of this post and talk about Local First HTMX.

HTMX is… well a meme and also possibly serious, I am not sure if anyone really knows. HTMX is an anti-javascript javascript front end framework/library (idk frontend people use those terms very loosely). More importantly it is a really good meme and that is key to MDD. So I thought I should combine HTMX and local first to create something truly awful yet beautiful. I am not necessarily recommending this approach, but I am excited to share what I’ve done to create the first Local First HTMX Todo app.
HTMX’s goal is to simplify frontend development while still maintaining a good level of interactivity. The general idea of HTMX is that your HTML will be rendered by the backend — à la Server Side Rendering. The technical term is hypermedia as the engine of application state, or HATEOAS. Recall that SSR (needing an RTT to the server for every interaction) has performance issues and can cause websites to feel sluggish (it is hard to fight the speed of light). If you are just sprinkling in interactivity it can work. But (and this is the key idea of Local First HTMX) you don’t have to render the HTML on the backend. You can build a “server”, compile it to WASM, and run it in the browser. This would give you all the snappiness of a first-class JavaScript Local First SPA with none of the JS — well, less of the JS. The goal is not to avoid JS but to have a simpler app.
To recap, we are building a Local First HTMX app by compiling our SSR code to WASM and then running that in a service worker. Briefly, and possibly incorrectly, let me explain a few things about browsers. There is a main thread; this is where your JS and HTML stuff normally happens. The main thread is what has access to the DOM and can actually render content. Browsers have added many features, but I want to mention two. The first is web workers, which let you run code in a different thread that has limited permissions (no access to the DOM). The second is a service worker, which is like a web worker but with an important distinction: a service worker can be configured to intercept all fetch requests.
The service worker can do what it wants with them: proxy them, serve them from cache, or handle the request itself. This is what I want to take advantage of: I want to proxy all fetch requests and optionally choose to render HTML and send it back.
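In outline, the worker's fetch handler becomes a small dispatcher: handle locally what it knows how to render, fall back to the network for everything else. A sketch (the route table is hypothetical):

```javascript
// Local routes the service worker renders itself; everything else is proxied.
const localRoutes = new Map([
  ["/clicked", () =>
    new Response("<div>Clicked!</div>", {
      headers: { "Content-Type": "text/html" },
    }),
  ],
]);

async function dispatch(request) {
  const { pathname } = new URL(request.url);
  const handler = localRoutes.get(pathname);
  if (handler) return handler(request); // rendered locally, zero network RTT
  return fetch(request);                // proxied to the real server
}
// In the service worker file, dispatch() is wired to the "fetch" event
// via event.respondWith(dispatch(event.request)).
```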
A basic HTMX request looks something like this:

<button hx-post="/clicked"
        hx-trigger="click"
        hx-target="#parent-div"
        hx-swap="outerHTML">
  Click Me!
</button>
Normally this would send an HTTP request to the server, but we want to intercept this request in the service worker, handle the request, and return HTML. Then, in the background, the service worker can sync data with the server while maintaining its local data store. In a follow-up post I’ll go over the implementation details of how I did this, some issues I encountered, and then talk about some further ideas.
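That flow can be sketched as follows (the store and sync endpoint are invented): the worker answers the hx-post immediately from its local state, and records the mutation in a queue to replay against the real server later:

```javascript
// In-memory local store and a queue of mutations awaiting sync.
// A real app would persist both (e.g. in IndexedDB).
const localState = { clicks: 0 };
const syncQueue = [];

function handleClicked() {
  localState.clicks += 1;
  syncQueue.push({ kind: "click", at: Date.now() }); // replay later
  // HTML goes straight back to htmx; no network RTT on the hot path.
  return new Response(
    `<div id="parent-div">Clicked ${localState.clicks} times</div>`,
    { headers: { "Content-Type": "text/html" } }
  );
}

// Background sync: drain the queue against the real server when possible.
async function drainQueue(endpoint) {
  while (syncQueue.length > 0) {
    const mutation = syncQueue[0];
    await fetch(endpoint, { method: "POST", body: JSON.stringify(mutation) });
    syncQueue.shift(); // only drop after the server acknowledged it
  }
}
```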
Stay tuned.