What would really change perception is not just better benchmarks, but making the boring path easy: compile with the normal toolchain, import a Web API naturally, and not have to become a part-time binding engineer to build an ordinary web app.
The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.
If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?
Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.
WRT WebAssembly Components though, I do wish they'd gone with a different name, as the definition becomes cloudy when Web Components exist, which have a very different purpose. Group naming for open source is, unfortunately, very hard. Everyone has different usages of words and understandings of the wider terms being used, so this kind of overlap happens often.
I'd be curious whether this will get better with LLM overseers of specs, which have a wider view of the overall ecosystem.
Example subsets:
- (mainly textual) information sharing
- media sharing
- application sharing, with a small standard interface like WASI 2 or, better yet, one including some graphics
- complex application sharing with networking
Smaller subsets of the giant web API would make for a better security situation and most importantly make it feasible for small groups to build out "browser" alternatives for information sharing, media or application sharing.
This is unlikely to be pursued, though, because the extreme size of the web API (and CSS etc.) is one of the main things that protects browser monopolies.
Even further, create a standard webassembly registry and maybe allow people to easily combine components without necessarily implementing full subsets.
Do webassembly components track all of their dependencies? Will they assume some giant monolithic API like the DOM will be available?
What you're doing is essentially creating a distributed operating system definition (which is what the web essentially is). It can be designed in such a way that people can create clients for it without implementing massive APIs themselves.
> There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web
It would be nice if WebAssembly would really succeed, but I have to be honest: I gave up thinking that it ever will. Too many things are unsolved here. HTML, CSS and JavaScript were a success story. WebAssembly is not; it is a niche thing and getting out of that niche is now super-hard.
(though i do like the open code nature of the internet even if a lot of the javascript source code is unreadable and/or obfuscated)
Possibly disabled now as they announced VBScript would be disabled in 2019.
Better late than never I guess.
[1] https://github.com/WebAssembly/interface-types/commit/f8ba0d...
[2] https://wingolog.org/archives/2023/10/19/requiem-for-a-strin...
The difference in perf without glue is crazy. But not surprising at all. This is one of the things I almost always warn people about, because it's such a glaring foot gun when trying to do cool stuff with WASM.
The thing with components that might be addressed (maybe I missed it) is how we'd avoid introducing new complexity with them. Looking through the various examples of implementing them with different languages, I get a little spooked by how messy I can see this becoming. Given that these are early days and there's no clearly defined standard, I guess it's fair that things aren't tightened up yet.
The go example (https://component-model.bytecodealliance.org/language-suppor...) is kind of insane once you generate the files. For the consumer the experience should be better, but as a component developer, I'd hope the tooling and outputs were eventually far easier to reason about. And this is a happy path, without any kind of DOM glue or interaction with Web APIs. How complex will that get?
I suppose I could sum up the concern as shifting complexity rather than eliminating it.
https://github.com/WebAssembly/component-model/blob/main/des...
From the code sample, it looks like this proposal also lets you load WASM code synchronously. If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load. Currently WASM code can only be loaded async.
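For what it's worth, synchronous compilation and instantiation do already exist in the JS API once you have the bytes in hand; it's the fetch that forces async (and some browsers restrict large synchronous compiles on the main thread). A minimal sketch, using a hand-assembled module so it's self-contained:

```javascript
// Minimal hand-assembled module: exports answer() -> i32, returning 42.
// In practice the bytes would come from an inlined array or some other
// synchronous source, since fetch() itself is async.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // \0asm + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,       // type: () -> i32
  0x03, 0x02, 0x01, 0x00,                         // func 0 uses type 0
  0x07, 0x0a, 0x01, 0x06, 0x61, 0x6e, 0x73, 0x77,
  0x65, 0x72, 0x00, 0x00,                         // export "answer"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x41, 0x2a, 0x0b, // body: i32.const 42
]);

// Both constructors are synchronous; no await needed.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {});
console.log(instance.exports.answer()); // 42
```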
And now that we're getting close to having the right design principles and mitigations in place, and 0-days in JS engines are getting expensive and rare... we're set on ripping it all out and replacing it with a new and even riskier execution paradigm.
I'm not mad, it's kind of beautiful.
JavaScript is the right abstraction for running untrusted apps in a browser.
WebAssembly is the wrong abstraction for running untrusted apps in a browser.
Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented.
WebAssembly is statically typed and its most fundamental abstraction is linear memory. It's a poor fit for the web.
Sure, modern WebAssembly has GC'd objects, but that breaks WebAssembly's main feature: the ability to have native compilers target it.
I think WebAssembly is doomed to be a second-class citizen on the web indefinitely.
If it gets stuck as a second-class citizen like you're predicting, it sounds a lot more like it's due to inflexibility to consider alternatives than anything objectively better about JavaScript.
I've created a proposal to add a fine-grained JIT interface: https://github.com/webassembly/jit-interface
It allows generating new code one function at a time and a robust way to control what the new code can access within the generating module.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
> WebAssembly is the wrong abstraction for running untrusted apps in a browser
WebAssembly is a better fit for a platform running untrusted apps than JS. WebAssembly has a sandbox and was designed for untrusted code. It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
> Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
There are dynamic languages, like JS and Python, that can compile to wasm. Also, I don't see how dynamic typing is required for API evolution and compat. Plenty of platforms have statically typed languages and evolve their APIs in backwards-compatible ways.
> Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented
The first major language for WebAssembly was C++, which is object oriented.
To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
(I'm not a fan of the WASM component model either, but your generalized points are mostly just wrong)
For end users, they should just see their language's native concurrency primitives (if any). So if you're running Go, it'll be goroutines; JS would use promises; Rust would have Futures.
1. Support non-Web API's
2. Support limited cross language interop
WebIDL is the union of JS and Web APIs, and while expressive, has many concepts that conflict with those goals. Component interfaces take more of an intersection approach that isn't as expressive, but is much more portable.
I personally have always cared about DOM access, but the Wasm CG has been really busy with higher priority things. Writing this post was sort of a way to say that at least some people haven't forgotten about this, and still plan on working on this.
I think you may be confusing Javascript the language, with browser APIs. Javascript itself is not insecure and hasn't been for a very long time, it's typically the things it interfaces with that cause the security holes. Quite a lot of people still seem to confuse Javascript with the rest of the stuff around it, like DOM, browser APIs, etc.
My points are validated by the reality that most of the web is JavaScript, to the point that you'd have a hard time observing degradation of experience if you disabled the wasm engine.
(there was also some more recent discussion in here: https://news.ycombinator.com/item?id=47295837)
E.g. it feels like a lot of over-engineering just to get 2x faster string marshalling, and this is only important for exactly one use case: for creating a 1:1 mapping of the DOM API to WASM. Most other web APIs are by far not as 'granular' and string heavy as the DOM.
E.g. if I mainly work with web APIs like WebGL2, WebGPU or WebAudio I seriously doubt that the component model approach will cause a 2x speedup, the time spent in the JS shim is already negligible compared to the time spent inside the API implementations, and I don't see how the component model can help with the actually serious problems (like WebGPU mapping GPU buffers into separate ArrayBuffer objects which need to be copied in and out of the WASM heap).
It would be nice to see some benchmarks for WebGL2 and WebGPU with tens-of-thousands of draw calls, I seriously doubt there will be any significant speedup.
Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.
The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.
The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.
So does JavaScript.
> It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
They have that infrastructure because JS has access to the browser's API.
If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:
- You'd still have all of the security troubles. The security troubles come from having to expose API that can be called adversarially and can pass you adversarial data.
- You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, and WebAssembly is a bunch of integers.
> There are dynamic languages, like JS/Python that can compile to wasm.
If you compile them to linear memory wasm instead of just running directly in JS then you lose the ability to do coordinated garbage collection with the DOM.
If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside.
> Also I don't see how dynamic typing is required to have API evolution and compt.
Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.
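A tiny sketch of that late-binding behavior (the object and feature names here are made up):

```javascript
// With dynamic lookup, a removed API only fails at the point of use.
// `fancyFeature` is a hypothetical API the browser has since removed.
const navigatorLike = { userAgent: "example" };

function maybeUseFancyFeature() {
  // Looking up a missing property yields undefined, not an error,
  // so the script still loads and runs; only this path is gone.
  if (typeof navigatorLike.fancyFeature === "function") {
    return navigatorLike.fancyFeature();
  }
  return "fallback";
}

console.log(maybeUseFancyFeature()); // "fallback"
```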
> Plenty of platforms have static typed languages and evolve their API's in backwards compatible ways.
We're talking about the browser, which is a particular platform. Not all platforms are the same.
The largest comparable platform is OSes based on the C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace, plus argument-passing ABIs that let you mismatch a function signature and get away with it).
> The first major language for WebAssembly was C++, which is object oriented.
But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.
> To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
I mean, surely it does not come as a surprise to anyone that either of these is a huge deal, let alone both. It seems clear that non-Web runtimes have had a huge influence on the development priorities of WebAssembly—not inherently a bad thing, but in this case it came at the expense of the actual Web.
> WebIDL is the union of JS and Web APIs, and while expressive, has many concepts that conflict with those goals.
Yes, another part of the problem, unrelated to the WIT story, seems to have been the abandonment of the idea that <script> could be something other than JavaScript and that the APIs should try to accommodate that, which had endured for a good while based on pure idealism. That sure would have come in useful here when other languages became relevant again.
(Now with the amputation of XSLT as the final straw, it is truly difficult to feel any sort of idealism from the browser side, even if in reality some of the developers likely retain it. Thank you for caring and persisting in this instance.)
Even more to the point, for the past couple of decades the browser's programming model has just been "write JavaScript". Of course it's going to fit JavaScript better than something else right now! That's an emergent property though, not something inherent about the web in the abstract.
There's an argument to be made that we shouldn't bother trying to change this, but it's not the same as arguing that the web can't possibly evolve to support other things as well. In other words, the current model for web programming is a local optimum, but statements like the one at the root of this comment chain talk like it's a global one, and I don't think that's self-evident. Without addressing whether they're opposed to the concept or the amount of work it would take, it's hard to have a meaningful discussion.
That is a useful benefit, not the only benefit. I think the biggest benefit is not needing glue, which means languages don't need to agree on any common set of JS glue, they can just directly talk DOM.
And besides performance, I think there are developer experience improvements we could get with native wasm component support (problems 1-3). TBH, I think developer experience is one of the most important things to improve for wasm right now. It's just so hard to get started or integrate with existing code. Once you've learned the tricks, you're fine. But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
I don't understand this objection. If you compile code that doesn't call a function, and then put that artifact on a server and send it to a browser, how is it broken when that function is removed?
- https://floooh.github.io/tiny8bit/
- https://floooh.github.io/sokol-webgpu/
- https://floooh.github.io/visualz80remix/
- https://floooh.github.io/doom-sokol/
All those projects also compile into native Windows/Linux/macOS/Android/iOS executables without any code changes, but compiling to WASM and running in web browsers is the most painless way to get this stuff to users.
Dealing with minor differences in web APIs across browsers is a rare thing and can be dealt with in WASM just the same as in JS: a simple if-else will do the job, no dynamic type system needed (apart from that, WASM doesn't have a "type system" in the first place, just like CPU instruction sets don't have one, unless you count integer and float types as a "type system"). Alternatively it's trivial to call out into Javascript. In Emscripten you can even mix C/C++ and Javascript in the same source file.
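That if-else shim might look like this on the JS side (a sketch; the import name and the fallback choice are made up):

```javascript
// Pick the available API variant once, then hand the Wasm module a
// single uniform import; no dynamic typing is needed on the Wasm side.
function makeRafImport(globalObj) {
  if (typeof globalObj.requestAnimationFrame === "function") {
    return (cb) => globalObj.requestAnimationFrame(cb);
  }
  // Fallback for environments without rAF (e.g. workers, old browsers):
  // approximate one 60 Hz frame with a timer.
  return (cb) => setTimeout(() => cb(Date.now()), 16);
}

const imports = { env: { raf: makeRafImport(globalThis) } };
// ...pass `imports` to WebAssembly.instantiateStreaming as usual.
```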
E.g. for me, WASM is already a '1st class citizen of the web', no WASM component model needed.
Being able to compete on efficiency with native apps is an incredible example of purposeful vision driving a significant standard, exactly the kind of thing I want for the future of the web and an example of why we need more stewards like Mozilla.
Language portability is a big feature. There's a lot of code that's not JS out there. And JS isn't a great compilation target for a lot of languages. Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.
> Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.
> The largest comparable platform is OSes based on C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically - function names in a global namespace plus argument passing ABIs that allow you to mismatch function signature and get away with it).
I don't think any Web API exposed directly to Wasm would have a single fixed ABI for that reason. We'd need to have the user request a type signature (through the import), and have the browser maximally try and satisfy the import using coercions that respect API evolution and compat. This is what Web IDL/JS does, and I don't see why we couldn't have that in Wasm too.
> Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
Wasm is not a dud on the web. Almost 6% of page loads use wasm [1]. It's used in a bunch of major applications and libraries.
[1] https://chromestatus.com/metrics/feature/timeline/popularity...
I still think we can do better though. Wasm is way too complicated to use today. So users of wasm today are experts who either (a) really need the performance or (b) really need cross platform code. So much that they're willing to put up with the rough edges.
And so far, most investment has been to improve the performance or bootstrap new languages. Which is great, but if the devex isn't improved, there won't be mass adoption.
It's not really a dud on the web. It sees a ton of use in bringing heavier experiences to the browser (e.g. Figma, the Unity player, and so on).
Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.
I think most languages could pretty easily use WASM GC. The main issue comes around FFI. That's where things get nasty.
Performance is already as good as it gets for "raw" WASM, the proposed component model integration will only help when trying to use the DOM API from WASM. But I think there must be less complex solutions to accelerate this specific use case.
Taking this argument to its extreme, does this mean that introducing new technology always decreases security? Because even if the technology were more secure, just the fact that it's new makes it less secure in your mind, so the only favorable move is to never adopt anything new?
Presumably you must be aware of some inherent weakness in WASM to feel it isn't worth introducing; otherwise, shouldn't we try to adopt safer, more secure technologies?
It's a big feature of JS. JS's dynamism makes it super easy to target for basically any language.
> Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.
That's cool. But that's one giant player getting success out of a project that likely required massive investment and codesign with their browser team.
Think about how sad it is that these are the kinds of successes you have to cite for a technology that has had as much investment as wasm!
> Almost 6% of page loads use wasm
You can disable wasm and successfully load more than 94% of websites.
A lot of that 6% is malicious ads running bitcoin mining.
> Wasm is way too complicated to use today.
I'm articulating why it's complicated. I think that for those same reasons, it will continue to be complicated.
> Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.
I don't think they will succeed at solving the pain, for the reasons I have enumerated in this thread.
By the same token, was Java or Flash more dangerous than JS? On paper, no - all the same, just three virtual machines. But having all three in a browser made things fun back in the early 2000s.
This is such a bizarre take that I don't know whether it's just a trolling attempt or serious...
Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS? The two technologies live side by side, each with specific advantages and disadvantages, they are not competing with each other.
WASM today has no access to anything that isn't given to it from JS. That means the only possible places to exploit are bugs in the JIT, a risk that exists for JavaScript as well.
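That capability model is directly visible in the JS API: a module can only reach what you put into its imports object, and instantiation fails up front if a required import is withheld. A sketch with a hand-assembled module that imports a single function:

```javascript
// Hand-assembled module: imports env.f, exports run() which calls it.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // \0asm + version
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type: () -> ()
  0x02, 0x09, 0x01, 0x03, 0x65, 0x6e, 0x76,       // import "env"
  0x01, 0x66, 0x00, 0x00,                         //   "f" (func, type 0)
  0x03, 0x02, 0x01, 0x00,                         // one local func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e,       // export "run"
  0x00, 0x01,                                     //   = func index 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b, // body: call 0; end
]);

let calls = 0;
const { exports } = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  { env: { f: () => { calls++; } } } // the only capability it receives
);
exports.run();
console.log(calls); // 1

// Withhold the import and instantiation fails before any code runs:
try {
  new WebAssembly.Instance(new WebAssembly.Module(bytes), { env: {} });
} catch (e) {
  console.log("refused:", e.constructor.name);
}
```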
Even if WASM gets bindings to the DOM, its surface area is still smaller, as Javascript has access to a bunch more APIs that aren't the DOM. For example, WebUSB.
And even if WASM gets feature parity with Javascript, it will only be as dangerous as Javascript itself. The main actual risk for WASM would be the host language having memory safety bugs (such as C++).
So why were Java and Flash (and ActiveX, NaCl) dangerous in the browser?
The answer is quite simple. Those VMs had dangerous components in them. Both Java and Flash had the ability to reach out and scribble on a random dll in the operating system, or to upload a random file from the user folder. Java relied HEAVILY on the security manager stopping you from doing that; IDK what Flash used. Javascript has no such capability (well, at least it didn't when Flash and Java were in the browser; IDK about now). For Java, you were running in a full JVM, which means a single exploit gave you the power to do whatever the JVM was capable of doing. For Javascript, an exploit still bound you to the Javascript sandbox. That mostly meant you might expose information for the current webpage.
I'm being serious.
> Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS?
They mostly shouldn't. There are very few problems where wasm is better.
If you want to understand why wasm is not better, see my other posts in this thread.
I'm trying to explain to you why attempts to make wasm mainstream have failed so far, and are likely to continue to fail.
I'm not expressing an "opinion"; I'm giving you the inside baseball as a browser engineer.
> Getting rid of the glue layer
I'm trying to elucidate why that glue layer is inherent, and why JS is the language that has ended up dominating web development, despite the fact that lots of "obviously better" languages have gone head to head with it (Java, Dart sort of, and now wasm).
Just like Java is a fantastic language anywhere but the web, wasm seems to be a fantastic sandboxing platform in lots of places other than the web. I'm not trying to troll you folks; I'm just sharing the insight of why wasm hasn't worked out so far in browsers and why that's likely to continue
This post is an expanded version of a presentation I gave at the 2025 WebAssembly CG meeting in Munich.
WebAssembly has come a long way since its first release in 2017. The first version of WebAssembly was already a great fit for low-level languages like C and C++, and immediately enabled many new kinds of applications to efficiently target the web.
Since then, the WebAssembly CG has dramatically expanded the core capabilities of the language, adding shared memories, SIMD, exception handling, tail calls, 64-bit memories, and GC support, alongside many smaller improvements such as bulk memory instructions, multiple returns, and reference values.
These additions have allowed many more languages to efficiently target WebAssembly. There’s still more important work to do, like stack switching and improved threading, but WebAssembly has narrowed the gap with native in many ways.
Yet, it still feels like something is missing that’s holding WebAssembly back from wider adoption on the Web.
There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web. For all of the new language features, WebAssembly is still not integrated with the web platform as tightly as it should be.
This leads to a poor developer experience, which pushes developers to only use WebAssembly when they absolutely need it. Oftentimes JavaScript is simpler and “good enough”. This means its users tend to be large companies with enough resources to justify the investment, which then limits the benefits of WebAssembly to only a small subset of the larger Web community.
Solving this issue is hard, and the CG has been focused on extending the WebAssembly language. Now that the language has matured significantly, it’s time to take a closer look at this. We’ll go deep into the problem, before talking about how WebAssembly Components could improve things.
At a very high level, the scripting part of the web platform is layered like this:

WebAssembly can directly interact with JavaScript, which can directly interact with the web platform. WebAssembly can access the web platform, but only by using the special capabilities of JavaScript. JavaScript is a first-class language on the web, and WebAssembly is not.
This wasn’t an intentional or malicious design decision; JavaScript is the original scripting language of the Web and co-evolved with the platform. Nonetheless, this design significantly impacts users of WebAssembly.
What are these special capabilities of JavaScript? For today’s discussion, there are two major ones:
WebAssembly code is unnecessarily cumbersome to load. Loading JavaScript code is as simple as just putting it in a script tag:
<script src="script.js"></script>
WebAssembly is not supported in script tags today, so developers need to use the WebAssembly JS API to manually load and instantiate code.
let bytecode = fetch(import.meta.resolve('./module.wasm'));
let imports = { ... };
let { exports } =
await WebAssembly.instantiateStreaming(bytecode, imports);
The exact sequence of API calls to use is arcane, and there are multiple ways to perform this process, each of which has different tradeoffs that are not clear to most developers. This process generally just needs to be memorized or generated by a tool for you.
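For example, one widely used defensive pattern (a sketch, not the only option) tries instantiateStreaming first and falls back to buffering the response when the server doesn't send the application/wasm content type:

```javascript
// Sketch: prefer streaming compilation, but tolerate servers that
// don't respond with Content-Type: application/wasm.
async function loadWasm(url, imports) {
  if (WebAssembly.instantiateStreaming) {
    try {
      // Compiles while the bytes are still downloading.
      return await WebAssembly.instantiateStreaming(fetch(url), imports);
    } catch (e) {
      // Wrong MIME type (or no streaming support): fall through and
      // buffer the whole response instead.
    }
  }
  const bytes = await (await fetch(url)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}
```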
Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox. This proposal lets developers import WebAssembly modules from JS code using the familiar JS module system.
import { run } from "/module.wasm";
run();
In addition, it allows a WebAssembly module to be loaded directly from a script tag using type="module":
<script type="module" src="/module.wasm"></script>
This streamlines the most common patterns for loading and instantiating WebAssembly modules. However, while this mitigates the initial difficulty, we quickly run into the real problem.
Using a Web API from JavaScript is as simple as this:
console.log("hello, world");
For WebAssembly, the situation is much more complicated. WebAssembly has no direct access to Web APIs and must use JavaScript to access them.
The same single-line console.log program requires the following JavaScript file:
// We need access to the raw memory of the Wasm code, so
// create it here and provide it as an import.
let memory = new WebAssembly.Memory(...);
function consoleLog(messageStartIndex, messageLength) {
// The string is stored in Wasm memory, but we need to
// decode it into a JS string, which is what DOM APIs
// require.
let messageMemoryView = new Uint8Array(
memory.buffer, messageStartIndex, messageLength);
let messageString =
new TextDecoder().decode(messageMemoryView);
// Wasm can't get the `console` global, or do
// property lookup, so we do that here.
return console.log(messageString);
}
// Pass the wrapped Web API to the Wasm code through an
// import.
let imports = {
"env": {
"memory": memory,
"consoleLog": consoleLog,
},
};
let bytecode = fetch(import.meta.resolve('./module.wasm'));
let { instance } =
await WebAssembly.instantiateStreaming(bytecode, imports);
instance.exports.run();
And the following WebAssembly file:
(module
;; import the memory from JS code
(import "env" "memory" (memory 0))
;; import the JS consoleLog wrapper function
(import "env" "consoleLog"
(func $consoleLog (param i32 i32))
)
;; export a run function
(func (export "run")
(local $messageStartIndex i32)
(local $messageLength i32)
;; create a string in Wasm memory, store in locals
...
;; call the consoleLog function
local.get $messageStartIndex
local.get $messageLength
call $consoleLog
)
)
Code like this is called “bindings” or “glue code” and acts as the bridge between your source language (C++, Rust, etc.) and Web APIs.
This glue code is responsible for re-encoding WebAssembly data into JavaScript data and vice versa. For example, when returning a string from JavaScript to WebAssembly, the glue code may need to call a malloc function in the WebAssembly module and re-encode the string at the resulting address, after which the module is responsible for eventually calling free.
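That hand-off can be sketched like this, with a trivial bump allocator standing in for the module's exported malloc (which in real glue would come from the Wasm instance):

```javascript
// JS -> Wasm string passing, sketched against a bare Memory.
const memory = new WebAssembly.Memory({ initial: 1 });
let heapTop = 0;
const malloc = (size) => { const ptr = heapTop; heapTop += size; return ptr; };

function passStringToWasm(str) {
  // Re-encode the JS string (UTF-16) as UTF-8 bytes...
  const bytes = new TextEncoder().encode(str);
  // ...allocate space inside the module's linear memory...
  const ptr = malloc(bytes.length);
  // ...and copy the bytes in. The module later frees `ptr`.
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return { ptr, len: bytes.length };
}

// And the reverse direction, as in the consoleLog wrapper above:
function readStringFromWasm(ptr, len) {
  return new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
}

const { ptr, len } = passStringToWasm("hello, world");
console.log(readStringFromWasm(ptr, len)); // "hello, world"
```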
This is all very tedious, formulaic, and difficult to write, so it is typical to generate this glue automatically using tools like embind or wasm-bindgen. This streamlines the authoring process, but adds complexity to the build process that native platforms typically do not require. Furthermore, this build complexity is language-specific; Rust code will require different bindings from C++ code, and so on.
Of course, the glue code also has runtime costs. JavaScript objects must be allocated and garbage collected, strings must be re-encoded, structs must be deserialized. Some of this cost is inherent to any bindings system, but much of it is not. This is a pervasive cost that you pay at the boundary between JavaScript and WebAssembly, even when the calls themselves are fast.
This is what most people mean when they ask “When is Wasm going to get DOM support?” It’s already possible to access any Web API with WebAssembly, but it requires JavaScript glue code.
From a technical perspective, the status quo works. WebAssembly runs on the web and many people have successfully shipped software with it.
From the average web developer’s perspective, though, the status quo is subpar. WebAssembly is too complicated to use on the web, and you can never escape the feeling that you’re getting a second class experience. In our experience, WebAssembly is a power user feature that average developers don’t use, even if it would be a better technical choice for their project.
The average developer experience for someone getting started with JavaScript is something like this:

There’s a nice gradual curve where you use progressively more complicated features as the scope of your project increases.
By comparison, the average developer experience for someone getting started with WebAssembly is something like this:

You must immediately scale “the wall” of wrangling the many different pieces into working together. The end result is often only worth it for large projects.
Why is this the case? There are several reasons, and they all directly stem from WebAssembly being a second class language on the web.
A language targeting the web can’t just generate a Wasm file; it must also generate a companion JS file to load the Wasm code, implement Web API access, and handle a long tail of other issues. This work must be redone for every language that wants to support the web, and none of it can be reused on non-web platforms.
Upstream compilers like Clang/LLVM don’t want to know anything about JS or the web platform, and not just for lack of effort. Generating and maintaining JS and web glue code is a specialty skill that is difficult for already stretched-thin maintainers to justify. They just want to generate a single binary, ideally in a standardized format that can also be used on platforms besides the web.
The result is that support for WebAssembly on the web is often handled by third-party unofficial toolchain distributions that users need to find and learn. A true first-class experience would start with the tool that users already know and have installed.
This is, unfortunately, many developers’ first roadblock when getting started with WebAssembly. They assume that if they just have rustc installed and pass a --target=wasm flag that they’ll get something they could load in a browser. You may be able to get a WebAssembly file doing that, but it will not have any of the required platform integration. If you figure out how to load the file using the JS API, it will fail for mysterious and hard-to-debug reasons. What you really need is the unofficial toolchain distribution which implements the platform integration for you.
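For illustration, this is roughly what the raw JS API path looks like. Every import the module declares must be wired up by hand; the `env.console_log` import name below is hypothetical, and a real module would declare its own:

```javascript
// Loading a bare .wasm file with the raw JS API: every import the module
// expects must be supplied manually, or instantiation fails with a LinkError.
// The import names here are hypothetical.
async function loadBareWasm(bytes) {
  const imports = {
    env: {
      // Without glue, even basic logging must be wired up by hand.
      console_log: (ptr, len) => { /* decode a string from memory, then log */ },
    },
  };
  const { instance } = await WebAssembly.instantiate(bytes, imports);
  return instance;
}
```

None of this is discoverable from the compiler's output alone, which is why the unofficial toolchain distributions exist in the first place.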
The web platform has incredible documentation compared to most tech platforms. However, most of it is written for JavaScript. If you don’t know JavaScript, you’ll have a much harder time understanding how to use most Web APIs.
A developer wanting to use a new Web API must first understand it from a JavaScript perspective, then translate it into the types and APIs that are available in their source language. Toolchain developers can try to manually translate the existing web documentation for their language, but that is a tedious and error prone process that doesn’t scale.
If you look at all of the JS glue code for the single call to console.log above, you’ll see that there is a lot of overhead. Engines have spent a lot of time optimizing this, and more work is underway. Yet this problem still exists. It doesn’t affect every workload, but it’s something every WebAssembly user needs to be careful about.
Benchmarking this is tricky, but we ran an experiment in 2020 to precisely measure the overhead that JS glue code has in a real world DOM application. We built the classic TodoMVC benchmark in the experimental Dodrio Rust framework and measured different ways of calling DOM APIs.
Dodrio was perfect for this because it computed all the required DOM modifications separately from actually applying them. This allowed us to precisely measure the impact of JS glue code by swapping out the “apply DOM change list” function while keeping the rest of the benchmark exactly the same.
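The measurement idea can be sketched like this (the names are illustrative, not Dodrio’s actual API): only the apply function differs between the two variants, so any timing difference is attributable to the glue.

```javascript
// Time only the "apply DOM change list" step, keeping everything else fixed.
// `applyFn` is swapped between a JS-glue variant and a direct variant;
// `changeList` is the precomputed list of DOM modifications (hypothetical shape).
function timeApply(applyFn, changeList, iterations = 100) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    applyFn(changeList); // the only code that differs between variants
  }
  return (performance.now() - start) / iterations; // mean ms per apply
}
```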
We tested two different implementations:

The duration to apply the DOM changes dropped by 45% when we were able to remove JS glue code. DOM operations can already be expensive; WebAssembly users can’t afford to pay a 2x performance tax on top of that. And as this experiment shows, it is possible to remove the overhead.
There’s a saying that “abstractions are always leaky”.
The state of the art for WebAssembly on the web is that every language builds its own abstraction of the web platform using JavaScript. But these abstractions are leaky. If you use WebAssembly on the web in any serious capacity, you’ll eventually hit a point where you need to read or write your own JavaScript to make something work.
This adds a conceptual layer that burdens developers. Knowing your source language and the web platform should be enough; yet for WebAssembly, we require users to also know JavaScript in order to be proficient.
This is a complicated technical and social problem with no single solution, and there are competing priorities over which WebAssembly problem is most important to fix first.
Let’s ask ourselves: In an ideal world, what could help us here?
What if we had something that was:
If such a thing existed, languages could generate these artifacts and browsers could run them, without any JavaScript involved. This format would be easier for languages to support and could potentially exist in standard upstream compilers, runtimes, toolchains, and popular packages without the need for third-party distributions. In effect, we could go from a world where every language re-implements the web platform integration using JavaScript, to sharing a common one that is built directly into the browser.
It would obviously be a lot of work to design and validate a solution! Thankfully, we already have a proposal with these goals that has been in development for years: the WebAssembly Component Model.
For our purposes, a WebAssembly Component defines a high-level API that is implemented with a bundle of low-level WebAssembly code. It’s a standards-track proposal in the WebAssembly CG that’s been in development since 2021.
Already today, WebAssembly Components…
If you’re interested in more details, check out the Component Book or watch “What is a Component?”.
We feel that WebAssembly Components have the potential to give WebAssembly a first-class experience on the web platform, and to be the missing link described above.
Let’s try to re-create the earlier console.log example using only WebAssembly Components and no JavaScript.
NOTE: The interactions between WebAssembly Components and the web platform have not been fully designed, and the tooling is under active development.
Take this as an aspiration for how things could be, not a tutorial or promise.
The first step is to specify which APIs our application needs. This is done using an IDL called WIT. For our example, we need the Console API. We can import it by specifying the name of the interface.
component {
  import std:web/console;
}
The std:web/console interface does not exist today, but would hypothetically come from the official WebIDL that browsers use for describing Web APIs. This particular interface might look like this:
package std:web;

interface console {
  log: func(msg: string);
  ...
}
Now that we have the above interfaces, we can use them when writing a Rust program that compiles to a WebAssembly Component:
use std::web::console;

fn main() {
    console::log("hello, world");
}
Once we have a component, we can load it into the browser using a script tag.
<script type="module" src="component.wasm"></script>
And that’s it! The browser would automatically load the component, bind the native web APIs directly (without any JS glue code), and run the component.
This is great if your whole application is written in WebAssembly. However, most WebAssembly usage is part of a “hybrid application” which also contains JavaScript. We also want to simplify this use case. The web platform shouldn’t be split into “silos” that can’t interact with each other. Thankfully, WebAssembly Components also address this by supporting cross-language interoperability.
Let’s create a component that exports an image decoder for use from JavaScript code. First we need to write the interface that describes the image decoder:
interface image-lib {
  record pixel {
    r: u8,
    g: u8,
    b: u8,
    a: u8,
  }

  resource image {
    from-stream: static async func(bytes: stream<u8>) -> result<image>;
    get: func(x: u32, y: u32) -> pixel;
  }
}

component {
  export image-lib;
}
Once we have that, we can write the component in any language that supports components. The right language will depend on what you’re building or what libraries you need to use. For this example, I’ll leave the implementation of the image decoder as an exercise for the reader.
The component can then be loaded in JavaScript as a module. The image decoder interface we defined is accessible to JavaScript, and can be used as if you were importing a JavaScript library to do the task.
import { Image } from "image-lib.wasm";
let byteStream = (await fetch("/image.file")).body;
let image = await Image.fromStream(byteStream);
let pixel = image.get(0, 0);
console.log(pixel); // { r: 255, g: 255, b: 0, a: 255 }
As it stands today, we think that WebAssembly Components would be a step in the right direction for the web. Mozilla is working with the WebAssembly CG to design the WebAssembly Component Model. Google is also evaluating it at this time.
If you’re interested in trying this out, learn to build your first component and run it in the browser using Jco or from the command line using Wasmtime. The tooling is under heavy development, and contributions and feedback are welcome. If you’re interested in the in-development specification itself, check out the component-model proposal repository.
WebAssembly has come a long way since it was first released in 2017. I think the best is yet to come if we can turn it from a “power user” feature into something average developers benefit from.
JS was dominating web development long before WASM gained steam. This isn't the same situation as "JS beating Java/ActiveX for control of the web" (if I follow the thrust of your argument correctly).
WASM has had less than a decade of widespread browser support, terrible no-good DevEx for basically the whole time, and it's still steadily making its way into more and more of the web.
> terrible no-good DevEx for basically the whole time
I'm telling you why.
> still steadily making its way into more and more of the web.

It is, but you can still browse the web without it just fine, despite so much investment and (judging by how HN reacts to it) overwhelming enthusiasm from devs.