This is exciting! I particularly care about kqueue, but I guess the quote applies to it too.
Let's say I'm building a C program targeting Windows with MinGW & only using Zig as a cross compiler. Is there a way to still statically link MinGW's libc implementation or does this mean that's going away and I can only statically link ziglibc even if it looks like MinGW from the outside?
Doesn't OpenBSD block direct syscalls and force everything to go through libc?
I think we either need to make operating systems not in C, or just accept that at some level we rely on C.
Why is the linker too late? Is Zig able to do optimizations in the frontend that, e.g., a linker working with LLVM IR is not?
The same goes for TLA+ and all the other obscure things people think would be great to use with LLMs. And they would be, if there were as much training data as there is for JavaScript and Python.
https://github.com/ityonemo/clr
[0] generates a dynamically loaded library which does sketchy shit to access the binary representation of data structures in the Zig compiler, and then transpiles the IR to Zig code which has to be rerun to do the analysis.
The biggest thing holding me back from using Zig for important projects is the willingness of my peers to adopt it, but I'm just building projects that I can build myself until they are convinced :)
You might find this interesting: https://www.youtube.com/watch?v=x3hOiOcbgeA
Just joking of course. Those are sadly only in glibc.. :)
- c-ward [0], a libc implementation in Rust
- relibc [1], a libc implementation in Rust, mainly for use in Redox OS (but it works with Linux as well)
- rustix [2], safe bindings to POSIX APIs without using C
[0]: https://github.com/sunfishcode/c-ward
[1]: https://gitlab.redox-os.org/redox-os/relibc
[2]: https://github.com/bytecodealliance/rustix
A lot of languages claim to be a C replacement, but Zig is the second language I've seen that seemed like it had a reasonable plan to do so at any appreciable scale. The language makes working with the C ABI pretty easy, but it also has a build system that can seamlessly integrate Zig and C together, as well as having a translate-c that actually works shockingly well in the code I've put through it.
The only thing it didn't do was be 99% compatible with existing C codebases...which was the C++ strategy, the first language I can think of with such a plan. And frankly, I think Zig keeping C's relative simplicity while avoiding some of the pitfalls of the language proper was the better play.
If you specify -target x86_64-windows-gnu -lc then some libc functions are provided by Zig, some are provided by vendored mingw-w64 C files, and you don't need mingw-w64 installed separately; Zig provides everything.
You can still pass --libc libc.txt to link against an externally provided libc, such as a separate mingw-w64 installation you have lying around, or even your own libc installation if you want to mess around with that.
Both situations unchanged.
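For concreteness, a minimal sketch of the two invocations (the file names here are placeholders, not from the post):

zig build-exe hello.zig -target x86_64-windows-gnu -lc
zig build-exe hello.zig -target x86_64-windows-gnu -lc --libc libc.txt

The libc.txt template can be generated with `zig libc` and edited to point at your external mingw-w64 installation.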
https://www.kptv.com/2026/01/31/live-labor-unions-rally-marc...
This isn't some hypothetical political agenda I'm using my platform to push. There's a nonzero chance I go out there next weekend to peacefully protest, and get shot like Alex Pretti.
Needless to say, if I get shot by ICE, it's not good for the Zig project. And they've brought the battle to my doorstep, almost literally.
Abolish ICE.
You would need to consider whether it's even worth translating your C code. If the paradigm is identical and the entire purpose would be "haha, it is now one language," surely you could just compile and link the C code with libzigc... In my opinion, it's not worth translating code if the benefit of "hey look, one language" comes at the cost of "let's pray the LLM didn't hallucinate or make a mistake while translating the code."
That's a good way to sell moving over to the zig build system, and eventually zig the language itself in some real-world scenarios imo.
D can compile a project with a C and a D source file with:
dmd foo.d bar.c
./foo

I have a friend who is in Minneapolis. He's involved in caravans which track ICE. He wasn't the driver in the last one. But the ICE vehicle they were tailing suddenly started driving in a very direct path instead of randomly. The driver figured out what was happening first. They drove to the driver's house and then stood outside of their car for ten minutes, staring at his house. Cars in Minnesota have license plates on both the front and the back.
Is there any justification for that kind of intimidation? Did any of the Trump supporters vote for that? I hear about paid agitators on the left, but not about that kind of compensated actor. Is his name in a database now that they did the lookup?
/s
While we're talking about printf, can I incept in you the idea of making an io.printf function that does print-then-flush?
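A minimal sketch of what I mean, assuming the new std.Io.Writer interface (print and flush are existing Writer methods; the wrapper itself is hypothetical):

const std = @import("std");

/// Hypothetical convenience wrapper: format to the writer, then flush.
pub fn printf(w: *std.Io.Writer, comptime fmt: []const u8, args: anytype) !void {
    try w.print(fmt, args);
    try w.flush();
}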
How does that work, given that syscalls can't be made except from the system's libc? I'd be a bit surprised if any binary's embedded libc would support this model.
hard disagree (example elsewhere)
---
expanding: so, this means that you can do cross-boundary optimizations without LTO and with pre-built artifacts. I think.
Claude getting the ArrayList API wrong every time was a major reason why
It’s AI generated but should help. I need to test and review it more (noticed it mentions async which isn’t in 0.15.x :| )
I'm not so familiar with D, what is the state of this sort of feature? Is it a built-in tool, or are you talking about the ctod project I found?
In most languages, I've found source translation features to be woefully lacking; they almost always require human intervention. By contrast, Zig's `translate-c` goes the extra mile in trying to convert the source to something that Zig can work with as-is. It does this by making use of language features and compiler built-ins that are rarely seen outside of `translate-c`.
Obviously the stacks of @as, @fooCast, and @truncate you are left with aren't idiomatic Zig, but I find it easier to start with working yet non-idiomatic code than with 90%-working code that merely underwent a syntactic change.
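To give a feel for it, here's a hand-written approximation of that style (not actual translate-c output), with the kind of explicit cast stacking you end up with:

// Roughly what translate-c-style output looks like: correct, but
// every conversion is spelled out with builtins.
pub export fn clamp_to_byte(x: c_int) u8 {
    var v: c_int = x;
    if (v < @as(c_int, 0)) v = 0;
    if (v > @as(c_int, 255)) v = 255;
    return @as(u8, @intCast(v));
}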
You keep compatibility with C and can tap into its ecosystem, but you are no longer stuck with outdated tooling.
D gives you faster iteration, clearer diagnostics, and a generally smoother experience, even if it doesn't go as far as Rust in terms of safety.
I wish more languages would follow this strategy. ImportC is great; it lets you port things one step at a time, if required.
Let's be honest: who wants to write or generate C bindings? And who wants to risk porting robust, tested, maintained C code incorrectly?
I do like D. I've written a game in it and enjoyed it a lot. I would encourage others to check it out.
But it's not a C replacement. BetterC feels like an afterthought. A nice bonus. Not a primary focus. E.g. the language is designed to use exceptions for error handling, so of course there's no feature for BetterC dedicated to error handling.
Being a better C is the one and only focus of Zig. So it has features for doing error handling without exceptions.
D is not going to replace C, perhaps for the same reasons subsets of C++ didn't.
I don't know if Zig and Rust will. But there's a better chance since they actually bring a lot of stuff to the table that arguably make them better at being a C-like language than C. I am really hyped to see how embedded development will be in Zig after the new IO interface lands.
(With that said, OpenBSD promises no stability if you choose to bypass libc. What it promises instead is that it will change things in incompatible ways that will hurt. It’s up to you whether the pain that thus results from supporting OpenBSD is worth it.)
OpenBSD allows system calls to be made from shared libraries whose names start with `libc.so.', and from static binaries, as long as they include an `openbsd.syscalls' section listing call sites.
I will say first that C libc does this - the functions are defined inline in header files - but this is mainly a pre-LTO artifact.
Otherwise it has no particular advantage other than disk space; it's the equivalent of just catting all your source files together and compiling that. If you think it's better to do this in the frontend, cool: you could make all the code visible to the frontend by fake-compiling everything, writing the original source into a special section of each object file, and then making the linker call the frontend with all those special sections.
You can even do it without the linker if you want.
Now you have all the code in the frontend if that's what you want (I have no idea why you'd want this).
It has the disadvantage that it's the equivalent of this, without choice.
If you look far enough back, lots of C/C++ projects used to do this kind of thing when they needed performance in the days before LTO, or they just shoved the function definitions in header files, but stopped because it has a huge forced memory and compilation speed footprint.
Then we moved to precompiled headers to fix the latter, then LTO to fix the former and the latter.
Everything old is new again.
In the end, you are also much better off improving the ability to take lots of random object files containing IR and make them optimize well than trying to ensure that all possible source code is present in the frontend for a single compile. Lots of languages and compilers went down this path, and it just doesn't work in practice for real users.
So doing this stuff in the linker (and it's not really the linker; the linker is just calling the compiler with the code, whether that compiler is a library or a separate executable) is not a hack. It's the best compilation strategy you can realistically use, because the alternative is essentially a dreamland where nobody links third-party libraries, or has subprojects that are libraries, or multiple compilation processes, and so on.
Zig always seems to do this thing in blog posts and elsewhere where they add these remarks that often imply there is only one true way of doing it right and they are doing it. It often comes off as immature and honestly a turnoff from wanting to use it for real.
Well, most macros. The macros that do metaprogramming are not translatable. I read that Zig's translator has the same issue, which is hardly surprising since it is not possible.
So, yes, the translation is not perfect. But the result works out of the box most of the time, and what doesn't translate is easily fixed by a human. Another issue is every C compiler has their own wacky extensions, so it is impractical to deal with all those variants. We try to hit the common extensions, though.
If you just want to call C code, you don't have to translate it. The D compiler recognizes C files and will run its very own internal C compiler (ImportC) to compile it. As a bonus, the C code can use data structures and call functions written in D! The compatibility goes both ways.
Not me, and not anyone else. Many D users have commented to me on how ImportC eliminates the tedium of interfacing with C.
And with D, you don't have to write .h interface files, either (although you can, but it turns out pretty much nobody bothers to).
One interesting result of ImportC is that it is an enhanced implementation of C in that it can do forward references, Compile Time Function Execution, and even imports! (It can also translate C source code to D source code!)
If D really wants to compete for the title of "better C replacement", I think the language might need some kind of big overhaul (a re-launch?). It's evident that there's a smaller, more beautiful language that can potentially be born from D, but for this language to succeed it needs to trim down all the baggage that comes from its GC-managed past. I think the best place to start is to cleanly remove GC, exception handling, and RTTI from the language, rewrite the standard library to work with BetterC mode, and probably also change the name to something else (it needs a re-brand...)
Walter's short limited comment was quite relevant.
In this case, however, Walter was not the one that brought up D. He was replying to a comment by someone promoting Zig with the claim that only Zig and C++ have ever had a strategy to replace C. That is objectively false. There's no way to look at what D does in that area and make that sort of claim. Walter and anyone else is right to challenge false statements.
February 03, 2026
Author: Andrew Kelley
Microslop Windows provides a large ABI surface area for doing things in the kernel. However, not all ABIs are created equally. As Casey Muratori points out in his lecture, The Only Unbreakable Law, the organizational structure of software development teams has a direct impact on the structure of the software they produce.
The DLLs on Windows are organized into a hierarchy, with some of the APIs being high-level wrappers around lower-level ones. For example, whenever you call functions of kernel32.dll, the actual work is ultimately done by ntdll.dll. You can observe this directly by using ProcMon.exe and examining stack traces.
What we’ve learned empirically is that the ntdll APIs are generally well-engineered, reasonable, and powerful, but the kernel32 wrappers introduce unnecessary heap allocations, additional failure modes, unintentional CPU usage, and bloat. Using ntdll functions feels like using software made by senior engineers, while using kernel32 functions feels like using software made by Microsoft employees.
This is why the Zig standard library policy is to avoid all DLLs except for ntdll. We’re not quite there yet - we have plenty of calls into kernel32 remaining - but we’ve taken great strides recently. I’ll give you two examples.
According to the official documentation, Windows does not have a straightforward way to get random bytes.
Many projects including Chromium, boringssl, Firefox, and Rust call SystemFunction036 from advapi32.dll because it worked on versions older than Windows 8.
Unfortunately, starting with Windows 8, the first time you call this function, it dynamically loads bcryptprimitives.dll and calls ProcessPrng. If loading the DLL fails (for example due to an overloaded system, which we have observed on Zig CI several times), it returns error 38 (from a function that has void return type and is documented to never fail).
The first thing ProcessPrng does is heap allocate a small, constant number of bytes. If this fails it returns NO_MEMORY in a BOOL (documented behavior is to never fail, and always return TRUE).
bcryptprimitives.dll apparently also runs a test suite every time you load it.
All that ProcessPrng is really doing is NtOpenFile on "\\Device\\CNG" and reading 48 bytes with NtDeviceIoControlFile to get a seed, and then initializing a per-CPU AES-based CSPRNG.
So the dependency on bcryptprimitives.dll and advapi32.dll can both be avoided, and the nondeterministic failure and latencies on first RNG read can also be avoided.
ReadFile looks like this:
pub extern "kernel32" fn ReadFile(
hFile: HANDLE,
lpBuffer: LPVOID,
nNumberOfBytesToRead: DWORD,
lpNumberOfBytesRead: ?*DWORD,
lpOverlapped: ?*OVERLAPPED,
) callconv(.winapi) BOOL;
NtReadFile looks like this:
pub extern "ntdll" fn NtReadFile(
FileHandle: HANDLE,
Event: ?HANDLE,
ApcRoutine: ?*const IO_APC_ROUTINE,
ApcContext: ?*anyopaque,
IoStatusBlock: *IO_STATUS_BLOCK,
Buffer: *anyopaque,
Length: ULONG,
ByteOffset: ?*const LARGE_INTEGER,
Key: ?*const ULONG,
) callconv(.winapi) NTSTATUS;
As a reminder, the former function (ReadFile) is implemented by calling the latter (NtReadFile).
Already we can see some nice things about using the lower level API. For instance, the real API simply gives us the error code as the return value, while the kernel32 wrapper hides the status code somewhere, returns a BOOL and then requires you to call GetLastError to find out what went wrong. Imagine! Returning a value from a function 🌈
Furthermore, OVERLAPPED is a fake type. The Windows kernel doesn’t actually know or care about it at all! The actual primitives here are events, APCs, and IO_STATUS_BLOCK.
If you have a synchronous file handle, then Event and ApcRoutine must be null. You get the answer in the IO_STATUS_BLOCK immediately. If you pass an APC routine here then some old bitrotted 32-bit code runs and you get garbage results.
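As a rough sketch of that synchronous path (my illustration built on the declaration above, not the actual standard library code; readSync and error.ReadFailed are made-up names):

const std = @import("std");
const windows = std.os.windows;

// Synchronous read: Event and ApcRoutine are null, and the result is
// available in the IO_STATUS_BLOCK as soon as the call returns.
fn readSync(handle: windows.HANDLE, buffer: []u8) !usize {
    var iosb: windows.IO_STATUS_BLOCK = undefined;
    const rc = windows.ntdll.NtReadFile(
        handle,
        null, // Event
        null, // ApcRoutine
        null, // ApcContext
        &iosb,
        buffer.ptr,
        @intCast(buffer.len),
        null, // ByteOffset: use the current file position
        null, // Key
    );
    return switch (rc) {
        .SUCCESS => iosb.Information, // number of bytes read
        else => error.ReadFailed,
    };
}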
On the other hand if you have an asynchronous file handle, then you need to either use an Event or an ApcRoutine. kernel32.dll uses events, which means that it’s doing extra, unnecessary resource allocation and management just to read from a file. Instead, Zig now passes an APC routine and then calls NtDelayExecution. This integrates seamlessly with cancelation, making it possible to cancel tasks while they perform file I/O, regardless of whether the file was opened in synchronous mode or asynchronous mode.
January 31, 2026
Author: Andrew Kelley
Over the past month or so, several enterprising contributors have taken an interest in the zig libc subproject. The idea here is to incrementally delete redundant code, by providing libc functions as Zig standard library wrappers rather than as vendored C source files. In many cases, these functions are one-to-one mappings, such as memcpy or atan2, or trivially wrap a generic function, like strnlen:
fn strnlen(str: [*:0]const c_char, max: usize) callconv(.c) usize {
return std.mem.findScalar(u8, @ptrCast(str[0..max]), 0) orelse max;
}
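Another one-to-one mapping in the same style might look like this (my sketch, not the actual zig libc source; compiler-rt already exports a memcpy, so this is purely illustrative):

export fn memcpy(noalias dest: ?*anyopaque, noalias src: ?*const anyopaque, n: usize) callconv(.c) ?*anyopaque {
    // C guarantees both pointers are valid for n bytes and non-overlapping.
    const d: [*]u8 = @ptrCast(dest.?);
    const s: [*]const u8 = @ptrCast(src.?);
    @memcpy(d[0..n], s[0..n]);
    return dest;
}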
So far, roughly 250 C source files have been deleted from the Zig repository, with 2032 remaining.
With each function that makes the transition, Zig gains independence from third-party projects and from the C programming language, compilation speed improves, Zig's installation gets simpler and smaller, and user applications which statically link libc enjoy reduced binary size.
Additionally, a recent enhancement now makes zig libc share the Zig Compilation Unit with other Zig code rather than being a separate static archive, linked together later. This is one of the advantages of Zig having an integrated compiler and linker. When the exported libc functions share the ZCU, redundant code is eliminated because functions can be optimized together. It’s kind of like enabling LTO (Link-Time Optimization) across the libc boundary, except it’s done properly in the frontend instead of too late, in the linker.
Furthermore, when this work is combined with the recent std.Io changes, there is potential for users to seamlessly control how libc performs I/O - for example, forcing all calls to read and write to participate in an io_uring event loop, even though that code was not written with such a use case in mind. Or resource leak detection could be enabled for third-party C code. For now this is only a vaporware idea which has not been experimented with, but it intrigues me.
Big thanks to Szabolcs Nagy for libc-test. This project has been a huge help in making sure that we don’t regress any math functions.
As a reminder to our users: now that Zig is transitioning to being the static libc provider, if you encounter issues with the musl, mingw-w64, or wasi-libc functionality provided by Zig, please file bug reports against Zig first, so we don't annoy upstream maintainers with bugs that are in Zig and no longer in the independent libc projects whose code was previously vendored.
Abolish ICE.
Anyway, C doesn't have Rust's core-versus-std distinction, so libc is a muddle of both "just useful library stuff" like strlen or qsort, and features like open which are bound to operating-system specifics.
What I actually said was that it was the second language I have seen to do so at any appreciable scale. I never claimed to know all languages. There was also an implication that I think that even if a language claims to be a C replacement, its ability to do so might exceed its ambition.
That said I also hold no ill will towards Walter Bright, and in fact was hoping that someone like him would hop into the conversation to try and sell people on why their language was also worthy of consideration. I don't even mind the response to Walter's post, because they bring real-world Dlang experience to the table as a rebuttal.
On the other hand, I find it difficult to find value in your post except as a misguided and arguably bad-faith attempt to stir the pot.
Did the text get changed? Because it seems you claim exactly the opposite of what it says in about ~5 sentences, so it also can't be credited to "misunderstanding".
But I didn't find any "D evangelism" comments in his history (first page). Then again, he has 78801 karma points, so I'm not going to put energy into going through his online persona history.
BTW, in my C days, I did a lot of clever stuff with the preprocessor. I was very proud of it. One day I decided to replace the clever macros with core C code, and was quite pleased with the clean result.
With D modules, imports, static if, manifest constants, and templates the macro processor can be put on the ash heap of history. Why doesn't C++ deprecate cpp?