Yikes that’s a lot of memory! Fil-C is apparently doing a lot of static analysis.
> Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics. Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (InvisiCaps). Every possibly-unsafe C and C++ operation is checked. Fil-C has no unsafe statement and only limited FFI to unsafe code.
The posted article has a detailed explanation of djb successfully compiling a bunch of C and C++ codebases.
Previously there was that Rust-in-APT discussion. A lot of this middle-aged Linux infrastructure is considered feature-complete and "done". Not many young people are coming in, so you either attract them with "heyy, rewrite it in Rust", or maybe the best thing is to bottle it up and run it in a VM.
Has anyone really tried building Postgres or MySQL, or a similarly complex system that relies heavily on I/O operations and multithreading?
https://medium.com/@ewindisch/curl-bash-a-victimless-crime-d...
AFAIK, djb hasn't been just "some 3-letter guy" for about thirty years now, but perhaps that's an age-related thing for those who haven't been around as long.
Though we don't refer to it as DAH encoding, so ... ¯\_(ツ)_/¯
Fil-C: A memory-safe C implementation - https://news.ycombinator.com/item?id=45735877 - Oct 2025 (130 comments)
Safepoints and Fil-C - https://news.ycombinator.com/item?id=45258029 - Sept 2025 (44 comments)
Fil's Unbelievable Garbage Collector - https://news.ycombinator.com/item?id=45133938 - Sept 2025 (281 comments)
InvisiCaps: The Fil-C capability model - https://news.ycombinator.com/item?id=45123672 - Sept 2025 (2 comments)
Just some of the memory safety errors caught by Fil-C - https://news.ycombinator.com/item?id=43215935 - March 2025 (5 comments)
The Fil-C Manifesto: Garbage In, Memory Safety Out - https://news.ycombinator.com/item?id=42226587 - Nov 2024 (1 comment)
Rust haters, unite Fil-C aims to Make C Great Again - https://news.ycombinator.com/item?id=42219923 - Nov 2024 (6 comments)
Fil-C a memory-safe version of C and C++ - https://news.ycombinator.com/item?id=42158112 - Nov 2024 (1 comment)
Fil-C: Memory-Safe and Compatible C/C++ with No Unsafe Escape Hatches - https://news.ycombinator.com/item?id=41936980 - Oct 2024 (4 comments)
The Fil-C Manifesto: Garbage In, Memory Safety Out - https://news.ycombinator.com/item?id=39449500 - Feb 2024 (17 comments)
In addition, here are the major related subthreads from other submissions:
https://news.ycombinator.com/item?id=45568231 (Oct 2025)
https://news.ycombinator.com/item?id=45444224 (Oct 2025)
https://news.ycombinator.com/item?id=45235615 (Sept 2025)
https://news.ycombinator.com/item?id=45087632 (Aug 2025)
https://news.ycombinator.com/item?id=44874034 (Aug 2025)
https://news.ycombinator.com/item?id=43979112 (May 2025)
https://news.ycombinator.com/item?id=43948014 (May 2025)
https://news.ycombinator.com/item?id=43353602 (March 2025)
https://news.ycombinator.com/item?id=43195623 (Feb 2025)
https://news.ycombinator.com/item?id=43188375 (Feb 2025)
https://news.ycombinator.com/item?id=41899627 (Oct 2024)
https://news.ycombinator.com/item?id=41382026 (Aug 2024)
https://news.ycombinator.com/item?id=40556083 (June 2024)
https://news.ycombinator.com/item?id=39681774 (March 2024)
There may be useful takeaways here for Rust’s “unsafe” mode - particularly for applications willing to accept the extra burden of statically linking Fil-C-compiled dependencies. Best of both worlds!
But GCs aren't magic and you will never get rid of all the overhead. Even if the CPU time is not noticeable in your use case, the memory usage fundamentally needs to be at least 2-4x the actual working set of your program for GCs to be efficient. That's fine for a lot of use cases, especially when RAM isn't scarce.
Most people who use C or C++ or Rust have already made this calculation and deemed the cost to be something they don't want to take on.
That's not to say Fil-C isn't impressive, but it fills a very particular niche. In short, if you're bothering with a GC anyway, why wouldn't you also choose a better language than C or C++?
Also maybe of interest is that the new cdb subdomain is using pqconnect instead of dnscurve
DJB SMACKER CONFIRMED?!
I wonder how / where Epic Games comes in?
1) Rewrite X in Rust
2) Recompile X using Fil-C
3) Recompile X for WASM
4) Safety is for babies
There are a lot of half baked Rust rewrites whose existence was justified on safety grounds and whose rationale is threatened now that HN has heard of Fil-C
"Oh, it has a GC! GC bad!"
"No, this GC by smart guy, so good!"
"No, GC always bad!"
People aren't engaging with the technical substance. GC-based systems can be plenty good and fast. How do people think JavaScript works? And Go? It's like people just absorbed from the discursive background radiation the idea that GC is slow, without understanding why that might be or whether it's even true. Of course it's not.
As near as I can tell Fil-C doesn't support this, or any other sort of FFI, at all. Nor am I sure FFI would even make sense, it seems like an approach that has to take over the entire program so that it can track pointer provenance.
IMO cryptographers should start using Rust for their reference implementations, but I also get that they'd rather spend their time working on their next paper rather than learning a new language.
This is not correct. There isn't a cdb subdomain because cdb.cr.yp.to doesn't have NS records, which is where DNSCurve fits in. If you have a DNSCurve resolver, then your queries for cdb.cr.yp.to will use DNSCurve and will be sent to the yp.to nameservers.
From there, if you have pqconnect, your http(s) connection to cdb.cr.yp.to will happen over pqconnect.
Maybe the confusion is because both DNSCurve and pqconnect encode pubkeys in DNS, but they do different things.
Here is DNSCurve:
$ dig +short ns yp.to
uz5jmyqz3gz2bhnuzg0rr0cml9u8pntyhn2jhtqn04yt3sm5h235c1.yp.to.
Here is pqconnect:
$ dig +short cdb.cr.yp.to
pq1htvv9k4wkfcmpx6rufjlt1qrr4mnv0dzygx5mlrjdfsxczbnzun055g15fg1.yp.to.
131.193.32.108
Like CurveCP, pqconnect puts the pubkey into a CNAME.
The notes on using Fil-C were submitted three days ago
"A domain is identified by a domain name, and consists of that part of the domain name space that is at or below the domain name which specifies the domain. A domain is a subdomain of another domain if it is contained within that domain. This relationship can be tested by seeing if the subdomain's name ends with the containing domain's name. For example, A.B.C.D is a subdomain of B.C.D, C.D, D, and " "."
1 cdb.cr.yp.to - regular DNS:
124 bytes, 1+2+0+0 records, response, noerror
query: 1 cdb.cr.yp.to
answer: cdb.cr.yp.to 30 CNAME pq1jbw2qzb2201xj6pyx177b8frqltf7t4wdpp32fhk0w3h70uytq5020w020l0.yp.to
answer: pq1jbw2qzb2201xj6pyx177b8frqltf7t4wdpp32fhk0w3h70uytq5020w020l0.yp.to 30 A 131.193.32.109
In the terminology of RFC 1034, cdb.cr.yp.to, a CNAME, can be described as a subdomain of cr.yp.to and yp.to. (NB: the pq1 portion is not a public key; it is a hash of a server's long-term public key.)
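The RFC 1034 "ends with" test is easy to sketch. The helper below is a hypothetical illustration, not anything from the notes; it compares dot-separated labels rather than raw string suffixes, so that e.g. BC.D is not mistaken for a subdomain of C.D:

```python
def is_subdomain(name: str, domain: str) -> bool:
    """RFC 1034-style test: is `name` at or below `domain`?

    Compares whole labels (case-insensitively) rather than raw string
    suffixes, so "BC.D" does not count as a subdomain of "C.D".
    The root domain "." contains everything.
    """
    name_labels = [l for l in name.lower().rstrip(".").split(".") if l]
    domain_labels = [l for l in domain.lower().rstrip(".").split(".") if l]
    if not domain_labels:  # the root domain
        return True
    return name_labels[-len(domain_labels):] == domain_labels

# The RFC's own example: A.B.C.D is a subdomain of B.C.D, C.D, D, and "."
for d in ("B.C.D", "C.D", "D", "."):
    assert is_subdomain("A.B.C.D", d)
assert not is_subdomain("BC.D", "C.D")
```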
It's not a target for writing new code (you'd be better off with C# or golang), but something like sandboxing with WASM, except that Fil-C crashes more precisely.
Rust would be about what language to use for new code.
Now that I have been programming in Rust for a couple of years, I don't want to go back to C (except for some hobby projects).
There is no C or C++ memory safe compiler with acceptable performance for kernels, rendering, games, etc. For that you need Rust.
The future includes Fil-C for legacy code that isn’t performance sensitive and Rust for new code that is.
Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.
In fact, I think Fil-C and CHERI could implement 90% the same programmer-level API!
Very slowly. Java, OCaml, or LuaJIT would be better examples here!
But maybe you could use C as the "glue language" and then build better-performing libraries in Rust for C to use. Like in Python!
Thus, Fil-C-compiled code is 1x to 4x slower than plain C. This is not in the "significantly slower" ballpark where most interpreters sit. The ROOT C/C++ interpreter is 20+ times slower than binary code, for example.
Still, it's all LLVM, so perhaps unsafe Rust for Fil-space can be a thing, a useful one for catching (what would be) UBs even [Fil-C defines everything, so no UBs, but I'm assuming you want to eventually run it outside of Fil-space].
Now I actually wonder if Fil-C has an escape hatch somewhere for syscalls that it does not understand etc. Well it doesn't do inline assembly, so I shouldn't expect much... I wonder how far one needs to extend the asm clobber syntax for it to remotely come close to working.
For new code, I would not use Fil-C. For kernels and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these issues as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).
It's low hanging fruit, and a great way to further differentiate their Linux distribution.
If that happens it's game over. As the article I linked noted, the attackers can change the installation instructions to anything they want - even for packages that are available in Linux distros.
And size: about a 10x increase both on disk and in memory.
$ stat -c '%s %n' {/opt/fil,}/bin/bash
15299472 /opt/fil/bin/bash
1446024 /bin/bash
$ ps -eo rss,cmd | grep /bash
34772 /opt/fil/bin/bash
4256 /bin/bash
Fil-C is useful for the long tail of C/C++ that no one will bother to rewrite and is still usable if slow.
But yes, that it runs arbitrary scripts is also a known issue, though that's not the main point, as most code you download will be run at some point anyway (and fixing this properly requires sandboxing of applications).
I'm impressed with the level of compatibility of the new memory-safe C/C++ compiler Fil-C (filcc, fil++). Many libraries and applications that I've tried work under Fil-C without changes, and the exceptions haven't been hard to get working.
I've started accumulating miscellaneous notes on this page regarding usage of Fil-C. My selfish objective here is to protect various machines that I manage by switching them over to code compiled with Fil-C, but maybe you'll find something useful here too.
Timings below are from a mini-PC named phoenix except where otherwise mentioned. This mini-PC has a 6-core (12-thread) AMD Ryzen 5 7640HS (Zen 4) CPU, 12GB RAM, and 36GB swap. The OS is Debian 13. (I normally run LTS software, periodically upgrading from software that's 4–5 years old such as Debian 11 today to software that's 2–3 years old such as Debian 12 today; but some of the packages included in Fil-C expect newer utilities to be available.)
Related:
Another way to run Fil-C is via Filnix from Mikael Brockman. For example, an unprivileged user under Debian 12 with about 10GB of free disk space can download, compile, and install Fil-C, and run a Fil-C-compiled Nethack, as follows:
unshare --user --pid echo YES # just to test
git clone https://github.com/nix-community/nix-user-chroot
cd nix-user-chroot
cargo build --release
mkdir -m 0755 ~/nix
~/nix-user-chroot/target/release/nix-user-chroot ~/nix \
bash -c 'curl -L https://nixos.org/nix/install | sh'
env TERM=vt102 \
~/nix-user-chroot/target/release/nix-user-chroot ~/nix \
~/nix/store/*-nix-2*/bin/nix \
--extra-experimental-features 'nix-command flakes' \
run 'github:mbrock/filnix#nethack'
Current recommendations for things to do at the beginning as root:
mkdir -p /var/empty
apt install \
autoconf-dickey build-essential bison clang cmake flex gawk \
gettext ninja-build patchelf quilt ruby texinfo time
I created an unprivileged filc user. Everything else is as that user.
I downloaded the Fil-C source package:
git clone https://github.com/pizlonator/fil-c.git
cd fil-c
This isn't just the compiler; there's also glibc and quite a few higher-level libraries and applications. There are also binary Fil-C packages, but I've worked primarily with the source package at this point.
I compiled Fil-C and glibc:
time ./build_all_fast_glibc.sh
There are also options to use musl instead of glibc, but musl is incompatible with some of the packages shipped with Fil-C: attr needs basename, elfutils needs argp_parse, sed's test suite needs the glibc variant of calloc, and vim's build needs iconv to be able to convert from CP932 to UTF-8.
I had originally configured the server phoenix with only 12GB swap. I then had to restart ./build_all_fast_glibc.sh a few times because the Fil-C compilation ran out of memory. Switching to 36GB swap made everything work with no restarts; monitoring showed that almost 19GB swap (plus 12GB RAM) was used at one point. A larger server, 128 cores with 512GB RAM, took 8 minutes for Fil-C plus 6 minutes for musl, with no restarts needed.
Fil-C includes a ./build_all_slow.sh that builds many more libraries and applications (sometimes with patches from the Fil-C author). I wrote a replacement script https://cr.yp.to/2025/build-parallel-20251023.py with the following differences:
On phoenix, running time PATH="$HOME/bin:$HOME/fil-c/build/bin:$HOME/fil-c/pizfix/bin:$PATH" ./build-parallel.py went through 61 targets in 101 minutes real time (467 minutes user time, 55 minutes system time), successfully compiling 60 of them.
libcap. This is the one that didn't compile: /home/filc/fil-c/pizfix/bin/ld: /usr/libexec/gcc/x86_64-linux-gnu/14/liblto_plugin.so: error loading plugin: libc.so.6: cannot open shared object file: No such file or directory
util-linux. I skipped this one. It does compile, but the compiled taskset utility needs to be patched to use sched_getaffinity and sched_setaffinity as library functions rather than via syscall, or Fil-C needs to be patched for those syscalls. This is an issue for build-parallel since build-parallel relies on taskset; maybe build-parallel should instead use Python's affinity functions.
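That last idea can be sketched directly (my assumption about how build-parallel might do it, not what it does today): on Linux, Python's os module exposes the same affinity syscalls that taskset uses as ordinary library functions, so no external binary is needed:

```python
import os

# Query the set of CPUs this process may run on (wraps sched_getaffinity).
cpus = os.sched_getaffinity(0)
print(f"runnable on {len(cpus)} CPUs: {sorted(cpus)}")

# Pin the process to a single CPU, then restore the original mask
# (wraps sched_setaffinity; Linux-only, like taskset itself).
one_cpu = {min(cpus)}
os.sched_setaffinity(0, one_cpu)
assert os.sched_getaffinity(0) == one_cpu
os.sched_setaffinity(0, cpus)
```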
attr, bash, benchmarks, binutils, bison, brotli, bzip2, bzip3, check, cmake, coreutils, cpython, curl, dash, diffutils, elfutils, emacs, expat, ffi, gettext, git, gmp, grep, icu, jpeg-6b, libarchive, libcap, libedit, libevent, libpipeline, libuev, libuv, lua, lz4, m4, make, mg, ncurses, nghttp2, openssh, openssl, pcre2, pcre, perl, pkgconf, procps, quickjs, sed, shadow, simdutf, sqlite, tcl, tmux, toybox, vim, wg14_signals, xml_parser, xz, zlib, zsh, zstd. No problems encountered so far (given whatever patches were already applied from the Fil-C author!). The benchmarks package is supplied with Fil-C and does a few miscellaneous measurements.
I did export PATH="$HOME/bin:$HOME/fil-c/build/bin:$HOME/fil-c/pizfix/bin:$PATH" before these.
boost 1.89.0: Seems to mostly work. Most of the package is header-only; a few simple tests worked fine.
I also looked a bit at the compiled parts. Running ./bootstrap.sh --with-toolset=clang --prefix=$HOME ran into vfork, which Fil-C doesn't support, but editing tools/build/src/engine/execunix.cpp to use defined(__APPLE__) || defined(__FILC__) for the no-fork test got past this.
Running ./b2 install --prefix=$HOME toolset=clang address-model=64 architecture=x86_64 binary-format=elf produced an error message since I should have said x86 instead of x86_64; Fil-C said it caught a safety issue in the b2 program after the error message: filc safety error: argument size mismatch (actual = 8, expected = 16). I didn't compile with debugging so Fil-C didn't say where this is in b2.
cdb-20251021: Seems to work. One regression test, an artificial out-of-memory regression test, currently produces a different error message with Fil-C: filc panic: src/libpas/pas_compact_heap_reservation.c:65: pas_aligned_allocation_result pas_compact_heap_reservation_try_allocate(size_t, size_t): assertion page_result.result failed.
libcpucycles-20250925: Seems to work. I commented out the first three lines of cpucycles/options.
libgc: I replaced this with a small gcshim package (https://cr.yp.to/2025/gcshim-20251022.tar.gz) that simply calls malloc etc. So far this seems to be an adequate replacement. (Fil-C includes a garbage collector.)
libntruprime-20241021: Seems to work after a few tweaks but I didn't collect full notes yet. chmod +t crypto_hashblocks/sha512/avx2 disables assembly and makes things compile; configured with --no-valgrind since Fil-C doesn't support valgrind; did a bit more tweaking to make cpuid work.
lpeg-1.1.0: Compiles, maybe works (depends on lua, dependency of neovim):
cd
PREFIX=$(dirname $(dirname $(which lua)))
wget https://www.inf.puc-rio.br/~roberto/lpeg/lpeg-1.1.0.tar.gz
tar -xf lpeg-1.1.0.tar.gz
cd lpeg-1.1.0
make CC=`which filcc` DLLFLAGS='-shared -fPIC' test
cp lpeg.so $PREFIX/lib
luv-1.51.0: Compiles, maybe works (depends on lua, dependency of neovim):
cd
PREFIX=$(dirname $(dirname $(which lua)))
wget https://github.com/luvit/luv/releases/download/1.51.0-1/luv-1.51.0-1.tar.gz
tar -xf luv-1.51.0-1.tar.gz
cd luv-1.51.0-1
mkdir build
cd build
LUA_DIR=$HOME/fil-c/projects/lua-5.4.7
# lua install should probably do this:
cp $LUA_DIR/lua.h $PREFIX/include/
cp $LUA_DIR/lauxlib.h $PREFIX/include/
cp $LUA_DIR/luaconf.h $PREFIX/include/
cp $LUA_DIR/lualib.h $PREFIX/include/
# and then:
cmake -DCMAKE_C_COMPILER=`which filcc` -DCMAKE_INSTALL_PREFIX=$PREFIX -DWITH_LUA_ENGINE=Lua -DLUA_DIR=$HOME/fil-c/projects/lua-5.4.7/ ..
make test
make install
mutt-2-2-15-rel (depends on ncurses):
wget https://github.com/muttmua/mutt/archive/refs/tags/mutt-2-2-15-rel.tar.gz
tar -xf mutt-2-2-15-rel.tar.gz
cd mutt-mutt-2-2-15-rel
CC=`which clang` ./prepare --prefix=$HOME/fil-c/pizfix --with-homespool
make -j12 install
Seems to work, at least for reading email.
tig (depends on ncurses and maybe more):
wget https://github.com/jonas/tig/releases/download/tig-2.6.0/tig-2.6.0.tar.gz
tar -xf tig-2.6.0.tar.gz
cd tig-2.6.0
CC=`which filcc` ./configure --prefix=$(dirname $(dirname $(which git)))
make -j12
make test
make -j12 install
Seems to work, at least for viewing the Fil-C repo.
w3m (depends on gcshim and ncurses): Seems to work. I tried the Debian version: git clone https://salsa.debian.org/debian/w3m.git. I used CFLAGS=-Wno-incompatible-function-pointer-types (which is probably needed for clang anyway even without Fil-C).
I've built and installed some replacement Debian packages using Fil-C as the compiler on a Debian 13 machine, as explained below. Hopefully this can rapidly scale to many packages, taking advantage of the basic compile-install-test knowledge already built into Debian source packages, although some packages will take more work because they need extra patches to work with Fil-C.
Structure. Debian already understands how to have packages for multiple architectures (ABIs; Debian "ports") installed at once. For example, dpkg --add-architecture i386; apt update; apt install bash:i386 installs a 32-bit version of bash, replacing the usual 64-bit version; you can do apt install bash:amd64 to revert to the 64-bit version. Meanwhile the 32-bit libraries and 64-bit libraries are installed in separate locations, basically /lib/i386-linux-gnu or /usr/lib/i386-linux-gnu vs. /lib/x86_64-linux-gnu or /usr/lib/x86_64-linux-gnu. (On Debian 11 and newer, and on Ubuntu 22.04 and newer, /lib is symlinked to /usr/lib.)
I'm following this model for plugging Fil-C into Debian: the goal is for apt install bash:amd64fil0 to install a Fil-C-compiled (amd64fil0) version of bash, replacing the usual (amd64) version of bash, while the amd64 and amd64fil0 libraries are installed in separate locations.
The include-file complication. Debian expects library packages compiled for multiple ABIs to all provide the same include files: for example, /usr/include/ncurses.h is provided by libncurses-dev:i386, libncurses-dev:amd64, etc. This is safe because Debian forces libncurses-dev:i386 and libncurses-dev:amd64 and so on to all have the same version. An occasional package with ABI-dependent include files can still use /usr/include/x86_64-linux-gnu etc.
Fil-C instead omits /usr/include in favor of a Fil-C-specific directory (which will typically be different from /usr/include: even if Fil-C is compiled with glibc, probably the glibc version won't be the same as in /usr/include). This difference is the top source of messiness below. I'm planning to tweak the Fil-C driver to use /usr/include on Debian. [This is done in the filian-install-compiler script.]
Something else I'm planning to tweak is Fil-C's glibc compilation, so that it uses the final system prefix. [This is also done in the filian-install-compiler script.] The approach described below instead requires /home/filian/fil-c to stay in place for compiling and running programs.
Building Debian packages. How does Debian package building work? First, more packages to install as root:
apt install dpkg-dev devscripts docbook2x \
dh-exec dh-python python3-setuptools fakeroot \
sbuild mmdebstrap uidmap piuparts
Debian has multiple options for building a package. The option that has the best isolation, and that Debian uses to continually build new packages for distribution, is sbuild, but for fast development I'll focus on directly using the lower-level dpkg-buildpackage.
Baseline 1: using sbuild without Fil-C. In case you do want to try sbuild, here's the basic setup, and then an example of building a small package (tinycdb):
mkdir -p ~/shared/sbuild
time mmdebstrap --include=ca-certificates --skip=output/dev --variant=buildd unstable ~/shared/sbuild/unstable-amd64.tar.zst https://deb.debian.org/debian
mkdir -p ~/.config/sbuild
cat << "EOF" > ~/.config/sbuild/config.pl
$chroot_mode = 'unshare';
$external_commands = { "build-failed-commands" => [ [ '%SBUILD_SHELL' ] ] };
$build_arch_all = 1;
$build_source = 1;
$source_only_changes = 1;
$run_lintian = 1;
$lintian_opts = ['--display-info', '--verbose', '--fail-on', 'error,warning', '--info'];
$run_autopkgtest = 1;
$run_piuparts = 1;
$piuparts_opts = ['--no-eatmydata', '--distribution=%r', '--fake-essential-packages=systemd-sysv'];
EOF
mkdir -p ~/shared/packages
cd ~/shared/packages
apt source tinycdb
cd tinycdb-*/
time sbuild
Baseline 2: using dpkg-buildpackage without Fil-C. Here's what it looks like compiling the same small package with dpkg-buildpackage:
mkdir -p ~/shared/packages
cd ~/shared/packages
apt source tinycdb
cd tinycdb-*/
time dpkg-buildpackage -us -uc -b
The goal: Using dpkg-buildpackage with Fil-C. As root, teach dpkg basic features of the new architecture, imitating the current line amd64 x86_64 (amd64|x86_64) 64 little in the same file:
echo amd64fil0 x86_64+fil0 amd64fil0 64 little >> /usr/share/dpkg/cputable
Also, allow apt to install packages compiled for this architecture (beware that this will also later make apt update look for that architecture on servers, and whimper a bit for not finding it, but nothing breaks):
dpkg --add-architecture amd64fil0
Also, teach autoconf to accept amd64fil0 (the third of these lines is what's critical for Debian builds):
sed -i '/| x86_64 / a| x86_64+fil0 \\' /usr/share/autoconf/build-aux/config.sub
sed -i '/| x86_64 / a| x86_64+fil0 \\' /usr/share/libtool/build-aux/config.sub
sed -i '/| x86_64 / a| x86_64+fil0 \\' /usr/share/misc/config.sub
[Not necessary if you've used filian-install-compiler:] As a filian user, compile Fil-C and its standard library:
cd
git clone https://github.com/pizlonator/fil-c.git
cd fil-c
time ./build_all_fast_glibc.sh
[Not necessary if you've used filian-install-compiler:] As root, copy Fil-C and its standard library into system locations:
mkdir -p /usr/libexec/fil/amd64/compiler
time cp -r /home/filian/fil-c/pizfix /usr/libexec/fil/amd64/
rm -rf /usr/lib/x86_64+fil0-linux-gnu
mv /usr/libexec/fil/amd64/pizfix/lib /usr/lib/x86_64+fil0-linux-gnu
ln -s /usr/lib/x86_64+fil0-linux-gnu /usr/libexec/fil/amd64/pizfix/lib
rm -rf /usr/include/x86_64+fil0-linux-gnu
mv /usr/libexec/fil/amd64/pizfix/include /usr/include/x86_64+fil0-linux-gnu
ln -s /usr/include/x86_64+fil0-linux-gnu /usr/libexec/fil/amd64/pizfix/include
time cp -r /home/filian/fil-c/build/bin /usr/libexec/fil/amd64/compiler/
time cp -r /home/filian/fil-c/build/include /usr/libexec/fil/amd64/compiler/
time cp -r /home/filian/fil-c/build/lib /usr/libexec/fil/amd64/compiler/
( echo '#!/bin/sh'
echo 'exec /usr/libexec/fil/amd64/compiler/bin/filcc "$@"' ) > /usr/bin/x86_64+fil0-linux-gnu-gcc
chmod 755 /usr/bin/x86_64+fil0-linux-gnu-gcc
( echo '#!/bin/sh'
echo 'exec /usr/libexec/fil/amd64/compiler/bin/fil++ "$@"' ) > /usr/bin/x86_64+fil0-linux-gnu-g++
chmod 755 /usr/bin/x86_64+fil0-linux-gnu-g++
ln -s /usr/libexec/fil/amd64/compiler/bin/llvm-objdump /usr/bin/x86_64+fil0-linux-gnu-objdump
ln -s x86_64+fil0-linux-gnu-gcc /usr/bin/filcc
ln -s x86_64+fil0-linux-gnu-g++ /usr/bin/fil++
Now, as user filian (or whichever other user), let's make a little helper script to adjust a Debian source package:
mkdir -p $HOME/bin
( echo '#!/bin/sh'
echo 'sed -i '\''s/^ \([^"]*\)$/ pizlonated_\1/'\'' debian/*.symbols'
echo 'find . -name '\''*.map'\'' | while read fn'
echo 'do'
echo ' awk '\''{'
echo ' if ($1 == "local:") global = 0'
echo ' if ($1 == "}") global = 0'
echo ' if (global && NF > 0 && !index($0,"c++")) $1 = "pizlonated_"$1'
echo ' if ($1 == "global:") global = 1'
echo ' print'
echo ' }'\'' < $fn > $fn.tmp'
echo ' mv $fn.tmp $fn'
echo 'done'
echo 'find debian -name '\''*.install'\'' | while read fn'
echo 'do'
echo ' awk '\''{'
echo ' if (NF == 2 && $2 == "usr/include") $2 = $2"/${DEB_HOST_MULTIARCH}"'
echo ' if (NF == 1 && $1 == "usr/include") { $2 = $1"/${DEB_HOST_MULTIARCH}"; $1 = $1"/*" }'
echo ' print'
echo ' }'\'' < $fn > $fn.tmp'
echo ' mv $fn.tmp $fn'
echo 'done'
) > $HOME/bin/fillet
chmod 755 $HOME/bin/fillet
And now let's try building a small package:
mkdir -p ~/shared/packages
cd ~/shared/packages
apt source tinycdb
cd tinycdb-*/
$HOME/bin/fillet
time env DPKG_GENSYMBOLS_CHECK_LEVEL=0 \
DEB_BUILD_OPTIONS='crossbuildcanrunhostbinaries nostrip' \
dpkg-buildpackage -d -us -uc -b -a amd64fil0
Explanation of the differences from a normal build: -a amd64fil0 asks dpkg-buildpackage for a cross-build targeting the new architecture; crossbuildcanrunhostbinaries tells it that this "cross" build can nevertheless run host binaries (the output runs on the same CPU); -d skips build-dependency checks, since the dependencies aren't installable for amd64fil0; DPKG_GENSYMBOLS_CHECK_LEVEL=0 relaxes the symbols-file checks (Fil-C prefixes exported symbols with pizlonated_; see the fillet script above); and nostrip skips the usual stripping step.
For me this worked and produced three ../*.deb packages. Installing them as root also worked:
apt install /home/filian/shared/packages/*.deb
# some sanity checks:
apt list | grep tinycdb
# prints "tinycdb/stable 0.81-2 amd64" (available package)
# and prints "tinycdb/now 0.81-2 amd64fil0 [installed,local]"
dpkg -L tinycdb:amd64fil0
# lists various files such as /usr/bin/cdb
nm /usr/bin/cdb
# shows various symbols including "pizlonated" (Fil-C) symbols
ldd /usr/bin/cdb
# shows dependence on libraries in /usr/libexec/fil
/usr/bin/cdb -h
# prints a help message: "cdb: Constant DataBase" etc.
Compiling a deliberately wrong test program with the newly installed library also works, and triggers Fil-C's run-time protection:
cd /root
( echo '#include <cdb.h>'
echo 'int main() { cdb_init(0,0); return 0; }' ) > usecdb.c
filcc -o usecdb usecdb.c -lcdb
./usecdb < /bin/bash
# ... "filc panic: thwarted a futile attempt to violate memory safety."
libc-dev. Some packages depend on libc-dev, so let's build a fake libc-dev package (probably there's an easier way to do this):
FAKEPACKAGE=libc-dev
mkdir -p ~/shared/packages/$FAKEPACKAGE/debian
cd ~/shared/packages/$FAKEPACKAGE
( echo $FAKEPACKAGE' (0.0) unstable; urgency=medium'
echo ''
echo ' * Initial Release.'
echo ''
echo ' -- djb <djb@cr.yp.to> Sun, 26 Oct 2025 16:05:17 +0000'
) > debian/changelog
( echo 'Source: '$FAKEPACKAGE
echo 'Build-Depends: debhelper-compat (= 13)'
echo 'Maintainer: djb '
echo ''
echo 'Package: '$FAKEPACKAGE
echo 'Architecture: any'
echo 'Multi-Arch: same'
echo 'Description: fake '$FAKEPACKAGE
) > debian/control
( echo '#!/usr/bin/make -f'
echo ''
echo 'build-arch build-indep build \'
echo 'install-arch install-indep install \'
echo 'binary-arch binary-indep binary \'
echo ':'
echo 'Xdh $@' | tr X '\011'
echo ''
echo 'clean:'
echo 'Xdh_clean' | tr X '\011'
) > debian/rules
time env DPKG_GENSYMBOLS_CHECK_LEVEL=0 \
DEB_BUILD_OPTIONS='crossbuildcanrunhostbinaries nostrip' \
dpkg-buildpackage -d -us -uc -b -a amd64fil0
ncurses.
mkdir -p ~/shared/packages
cd ~/shared/packages
apt source ncurses
cd ncurses-*/
$HOME/bin/fillet
time env DPKG_GENSYMBOLS_CHECK_LEVEL=0 \
DEB_BUILD_OPTIONS='crossbuildcanrunhostbinaries nostrip' \
dpkg-buildpackage -d -us -uc -b -a amd64fil0
rm ../ncurses-*deb # apt won't let us touch the binaries
As root, install the above libraries:
apt install /home/filian/shared/packages/lib*.deb
libmd. Seems to work. At first this didn't install since the compiled version (for amd64fil0) was 1.1.0-2 while the installed version (for amd64) was 1.1.0-2+b1. Debian requires the same version number across architectures (see above regarding include-file compatibility), so apt said that 1.1.0-2+b1 breaks 1.1.0-2. I resolved this by compiling and installing 1.1.0-2 for both amd64 and amd64fil0. This is a downgrade since "+b" refers to a "binNMU", a "binary-only non-maintainer upload", a patch beyond the official source; I don't know what the patch is.
readline. Needs ln -s /usr/include/readline /usr/include/x86_64+fil0-linux-gnu/readline after installation. Could have tweaks in debian/rules (which seems to predate *.install), but this is in any case an example of the messiness that I'm planning to get rid of.
lua5.4. Seems to work. Depends on readline.
Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.
> best case scenario since there is very little memory management involved and runtime is dominated by computation in tight loops.
This describes most C programs and many, if not most, C++ programs. Basically, this is how C/C++ code is written: by avoiding memory management, especially in tight loops.
Fil-C has its drawbacks, but they should be described carefully, just as with any technology.
But you know what might work?
Take current DuckDB, compile it with Fil-C, and use a new escape hatch to call out to the tiny unsafe kernels that do vectorized high-speed columnar data operations on fixed memory areas that the safe code set up as buffers on behalf of the unsafe kernels. That's how it'd probably work if DuckDB were implemented in Rust today, and it's how it could be made to work with Fil-C without a major rewrite.
Granted, this model would require Fil-C's author to become somewhat less dogmatic about having no escape hatches at all whatsoever, but I suspect he'll un-harden his heart as his work gains adoption and legitimate use-cases for an FFI/escape hatch appear.