FreeBSD's native support for ZFS snapshots and jails
provides a powerful foundation for immutable deployments.
I have not used the article's tool(s) and am not comparing the functionality provided by each. I have used ezjail[0] and found it exceptionally useful for similar concerns.
https://github.com/fsmv/daemon/
It's a bad time for me to be mentioning it because I have a major update that's not quite ready to release that changes some client APIs and makes the whole thing much nicer with fully automatic Let's Encrypt. I haven't had the space to work on it for a while, unfortunately.
Also found this guide to self-hosting on a Raspberry Pi helpful. [1]
[0] https://bastillebsd.org/getting-started/
[1] https://www.sharpwriting.net/project/bastille-jail-managemen...
That was in 2007 so the control plane (scheduler and automation) were built from scratch and we had very few reference points for the overall design. If I was building that today I’d probably still use ZFS clones but at filesystem level instead of block devices, and serve jails over NFS if I can get away with it, the iSCSI part was always a little janky.
Nothing fancy, just VNET jails based on ZFS templates (vanilla FreeBSD rootfs) and epair interfaces (which I trunk to various VLANs on the host's egress interface).
One pattern that I've found useful is to give each jail a persistently delegated ZFS dataset called "data." This lets me reprovision the OS image for the jail without having to backup and restore its application data (such as a Postgres DB). It also allows each jail to manage its own ZFS snapshots.
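A rough sketch of what that delegation can look like (the 'zroot/jails/data' layout and 'myjail' name are hypothetical; the jail also needs allow.mount, allow.mount.zfs, and enforce_statfs=1 to manage ZFS from inside):
# On the host: create the persistent "data" dataset and mark it as jailed.
zfs create -o jailed=on zroot/jails/data/myjail
# In the jail's config, hand the dataset to the jail once it's created:
#   exec.created += "zfs jail ${name} zroot/jails/data/${name}";
# Inside the jail it can then be mounted and snapshotted like any other dataset:
#   zfs set mountpoint=/data zroot/jails/data/myjail
#   zfs mount zroot/jails/data/myjail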
The only thing that was a bit hairy was generating unique interface names and MAC addresses for each jail's VNET interface. My first instinct was to derive the interface name from the jail name, but interface names on FreeBSD are limited to 15 characters, and occasionally I'd hit this limit.
In the end I did some dark magic using md5 sums of the jail name / host interface MAC address. Kind of ugly but I really didn't want to introduce any dependencies besides /bin/sh.
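Something along these lines, as a sketch (the names are made up; md5(1), sed(1), and cut(1) all ship in the FreeBSD base system, so it stays /bin/sh-only):
jail_name="my-rather-long-jail-name"
hash=$(printf '%s' "$jail_name" | md5 | cut -c1-8)
epair_name="e${hash}a"   # stays well under the 15-character interface name limit
mac_addr="02:$(printf '%s' "$jail_name" | md5 | sed 's/../&:/g' | cut -c1-14)"   # locally administered MAC
echo "$epair_name $mac_addr"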
[0] https://github.com/cullumsmith/infrastructure/blob/master/fi...
[1]: https://github.com/opencontainers/runtime-spec/pull/1286
[2]: https://github.com/samuelkarp/runj
ZFS has been stable in FreeBSD for something like 17 years, and FreeBSD jails have been around for something like 25 years.
By the time Docker hit 1.0 (about 11 years ago), the use of snapshots and jails had already been normal parts of life in the FreeBSD space for over half of a decade.
FreeBSD has jail managers, aka container managers, aka “Docker”, as well.
The author of the article seems to know what they are doing, so I'm puzzled why they don't use `bsdinstall jail /path/to/jail` to implement the basejail instead of manually unpacking archives.
No need for a separate custom rc script to start `lo1`; it can be done with the `cloned_interfaces` directive in rc.conf.
Updating and upgrading jails by passing `-b /path/to/jail` to `freebsd-update` works, but the newer recommended way is to pass `-j <jailname>`.
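For example, a sketch using the article's lo1 addressing and a hypothetical jail name:
# lo1 via rc.conf instead of a custom rc script
sysrc cloned_interfaces+="lo1"
sysrc ifconfig_lo1="inet 172.16.0.1 netmask 255.240.0.0"
service netif cloneup
# Patching a jail by name instead of by path
freebsd-update -j myjail fetch install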
Cool article overall; the beauty of FreeBSD is also in the possibility of doing things in many different ways.
But at the same time, the reason Docker won was not that it was groundbreaking tech or amazingly well tested or anything. Just as one example, it has a years-old bug, which actively gets more comments every week, about Docker grossly mishandling quotes in env files.
No, the reason it won is that the development experience and the deploy experience are easy, especially when you are on Linux AND on macOS. I can’t run FreeBSD jails or ZFS on macOS, can I? Definitely not with one file and one command.
Jails and ZFS are amazing tech but they are not accessible. Docker made simple things very simple while being cross-platform enough. Do I feel gross using it? Yeah. It’s a kludgy solution to the problem. But it gets the job done and is supported by every provider out there. I am excited that it is being ported to FreeBSD though I know it will be a very long process.
On macOS, Docker actually launches a Linux VM to run containers. If that counts, then yes, you can run FreeBSD jails or ZFS on macOS, by running a FreeBSD VM.
You’ll be sacrificing a lot and have to hand-roll a lot if you want your organization to switch from Linux+docker to FreeBSD+jails
But that has nothing to do with their respective UXs. It's a Linux vs FreeBSD signal.
Who couldn’t become famous with something like a $200M budget?
Feel like they spent it on marketing instead.
Podman is arguably technically superior yet people stay with Docker out of habit…
You've got a good take on things, and I do not disagree with what you've written.
Same thing could be done in FreeBSD. Someone needs to put in the work…
There is work ongoing to try to make this more native on FreeBSD (by using Linux jails) but that work is not complete yet.
So, if you want to get the same kind of experience as Docker on FreeBSD, you are forced to use jails.
The only reason Docker seems accessible is because it's native to the platform people seem to like for running all their services, but if you're dealing with FreeBSD, you most certainly would not just "use Docker" to deploy your stuff. Because you would get worse performance than if you had just used Linux.
So the answer to "Isn't this just Docker with extra steps?" is truly and absolutely "No". Not because of some kind of old man shouting at cloud argument, but because if you are on FreeBSD (for whatever reason that might be) you can't just use Docker as an easier replacement for Jails (at least right now).
I also find myself using nspawn just to isolate apps like firefox, etc.
I dislike the implementation but I cannot deny that the UX is good enough to be very popular.
I imagine FreeBSD could do something similar if they aren't already. IIRC FreeBSD has a Linux emulation layer (though I don't know how much attention it still gets), and it has had containerization primitives longer than Linux, so some amount of filling in the gaps in containerization features and syscall tables (if needed) could possibly yield an OCI compatibility layer (for all I know, all of this already exists).
The problem, and the reason people probably weren't as interested in doing the work if this doesn't already exist, is that it would only ever be "mostly" compatible: there would be no guarantee that the underlying software wouldn't exhibit bugs or weird behavior caused by small differences in the platform being emulated. Why open yourself up to that headache when you can just run Linux with containers, or build what you want on FreeBSD with jails and its own native containerization primitives?
No it wasn’t. Docker was late to the party even for Linux (and Linux was late compared to every other “UNIX”).
OpenVZ was around for years before docker. Its main issue was that it required out-of-tree kernel code. But there were some distributions that did still ship OpenVZ support. In fact it’s what Proxmox originally used. And it worked very well.
Then LXC came along. Fun fact: Docker originally used LXC itself. But obviously that was a long time ago too.
I’ve used both OpenVZ and LXC in production systems before Docker came along. But I’ve always preferred FreeBSD Jails + ZFS to anything Linux has offered.
You would need to do more work yourself to fetch and run jails probably, and I don't know if there's a hosted repository of 'jail images', but in return, you'd probably have a nicer system (at least, I'd like such a system more than running containers on google container optimized linux)
Docker's killer selling point was that it solved a very common and specific developer problem, not that it provided operational improvements over the state of the art on Linux. From an operational perspective, Docker has generally been a downgrade compared to LXC. (I say this as a maintainer of runc, the container runtime that underpins Docker, and as someone who worked a lot on Docker back in the early days and somewhat less today.)
November 6, 2025
FreeBSD’s native support for ZFS snapshots and jails provides a powerful foundation for immutable deployments. By creating a new jail from a ZFS snapshot for every release, we get instant roll‑backs, zero‑downtime upgrades, and a clean, reproducible environment. This article walks through the (very opinionated) flow that we use, from jail setup through running Caddy as a health‑checked reverse proxy in front of the jails.
FreeBSD 14+ (or the latest stable release) provides the necessary ZFS and jail primitives. A ZFS pool (zpool) enables cheap, instant cloning. The Caddy v2 binary handles TLS, reverse proxying, and health checks.
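A quick way to confirm the prerequisites are in place (assuming the default 'zroot' pool used throughout this article):
freebsd-version   # should report 14.x or later
zpool list zroot  # confirms a ZFS pool is available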
+--------------------+ +------------------------+ +-------------------+
| | | | | |
| Caddy (reverse | <-> | Immutable Jails | <-> | Application |
| proxy & health- | | (ZFS snapshot/clone) | | inside each jail |
| check) | | | | |
| | | | | |
+--------------------+ +------------------------+ +-------------------+
Create a new loopback network interface for the jails. We'll use 172.16.0.0/12 which means jails can use any IP address within the range 172.16.0.1 – 172.31.255.254. Then create a new service to manage the loopback interface via a file at '/usr/local/etc/rc.d/lo1' with the following content:
#!/bin/sh
# PROVIDE: lo1
# REQUIRE: NETWORKING
# BEFORE: jail
# KEYWORD: shutdown
. /etc/rc.subr
name="lo1"
command="ifconfig"
start_cmd="${command} ${name} create && ${command} ${name} inet 172.16.0.1 netmask 255.240.0.0 up"
stop_cmd="${command} ${name} down"
run_rc_command "$1"
Then make the script executable, enable the service, and start it:
chmod +x /usr/local/etc/rc.d/lo1
sysrc lo1_enable="YES"
service lo1 start
Now we can move on to enabling jails:
sysrc jail_enable="YES"
sysrc jail_parallel_start="YES"
Create an /etc/jail.conf file with the content below so that it includes the configuration for each jail.
NOTE: Each jail configuration should be placed in a separate file in '/etc/jail.conf.d/'.
NOTE: The leading '.' before include is required.
.include "/etc/jail.conf.d/*.conf";
Create the parent ZFS dataset and mount point for the jails:
zfs create -o mountpoint=/usr/local/jails zroot/jails
Create child datasets for the jails:
# Contains the compressed files of the downloaded userlands.
zfs create zroot/jails/media
# Will contain the templates.
zfs create zroot/jails/templates
# Will contain the containers.
zfs create zroot/jails/containers
Download the base FreeBSD image and unpack it:
# Set environment variable for the FreeBSD version. Note that the cut is to remove the patch level.
export FREEBSD_VERSION=$(freebsd-version | cut -d- -f1-2)
zfs create -p zroot/jails/templates/$FREEBSD_VERSION
fetch https://download.freebsd.org/ftp/releases/$(uname -m)/$FREEBSD_VERSION/base.txz -o /usr/local/jails/media/$FREEBSD_VERSION-base.txz
tar -xf /usr/local/jails/media/$FREEBSD_VERSION-base.txz -C /usr/local/jails/templates/$FREEBSD_VERSION --unlink
Copy critical files to the image template:
cp /etc/resolv.conf /usr/local/jails/templates/$FREEBSD_VERSION/etc/resolv.conf
cp /etc/localtime /usr/local/jails/templates/$FREEBSD_VERSION/etc/localtime
Update the image template to the latest patch level.
freebsd-update -b /usr/local/jails/templates/$FREEBSD_VERSION fetch install
Finally, create a ZFS snapshot of the base image template. From this snapshot we'll use ZFS clones to create new jails.
zfs snapshot zroot/jails/templates/$FREEBSD_VERSION@base
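You can confirm the snapshot exists with:
zfs list -t snapshot zroot/jails/templates/$FREEBSD_VERSION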
Check which IP addresses on the 'lo1' loopback interface are in use so that we can assign an available IP address to the new jail.
ifconfig lo1 | grep 'inet ' | awk '{print $2}'
Look up the commit hash of the latest commit in the git repo.
git ls-remote https://github.com/yourusername/mygitrepo.git | head
Clone the base image template to create a new jail. We'll name the new jail after our git repo and the commit SHA we're deploying.
export FREEBSD_VERSION=$(freebsd-version | cut -d- -f1-2)
export JAIL_NAME=mygitrepo_gitSHA
zfs clone zroot/jails/templates/$FREEBSD_VERSION@base zroot/jails/containers/$JAIL_NAME
Create a config file for the jail to be located at '/etc/jail.conf.d/$JAIL_NAME.conf'.
We name the jail using the SHA of the git commit that we're deploying.
mygitrepo_gitSHA {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${name}.log";
# PERMISSIONS
allow.raw_sockets;
exec.clean;
mount.devfs;
# HOSTNAME/PATH
host.hostname = "${name}";
path = "/usr/local/jails/containers/${name}";
# NETWORK. We're using the lo1 loopback interface that we created for jails to use.
interface = lo1;
ip4.addr = 172.16.0.2; # Use an available ip address within the range of the lo1 interface. You can find available ip addresses by running "ifconfig lo1 | grep 'inet ' | awk '{print $2}'"
}
Start the jail.
service jail start $JAIL_NAME
Confirm that the jail's IP address is within the range of the lo1 interface:
jexec $JAIL_NAME ifconfig lo1 | awk '/inet /{print $2}'
Confirm that the jail is up and what it's running:
jls
jexec $JAIL_NAME ps aux
Here is the proof of concept Go hello world binary that we'll run as a service within the jail.
// main.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello World!")
	})
	http.HandleFunc("/up", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Build the binary and place it in the jail's bin directory.
go build main.go
mkdir -p /usr/local/jails/containers/$JAIL_NAME/usr/local/bin
cp main /usr/local/jails/containers/$JAIL_NAME/usr/local/bin/main
Create a service file for the binary at '/usr/local/etc/rc.d/main' on the host, with the following content.
#!/bin/sh
#
# PROVIDE: main
# REQUIRE: LOGIN
# KEYWORD: shutdown
. /etc/rc.subr
name="main"
rcvar="main_enable"
# Path to your Go binary
command="/usr/local/bin/main"
pidfile="/var/run/${name}.pid"
# Redirect output to a log file
logfile="/var/log/${name}.log"
# How to start the process
start_cmd="${name}_start"
stop_cmd="${name}_stop"
main_start() {
echo "Starting ${name}..."
daemon -p "${pidfile}" -f -o "${logfile}" "${command}"
}
main_stop() {
echo "Stopping ${name}..."
if [ -f "${pidfile}" ]; then
kill "$(cat ${pidfile})" && rm -f "${pidfile}"
else
echo "No pidfile found; process may not be running."
fi
}
load_rc_config $name
: ${main_enable:="NO"}
run_rc_command "$1"
Copy the service file to the jail's /usr/local/etc/rc.d directory, make it executable, then enable and start the service.
mkdir -p /usr/local/jails/containers/$JAIL_NAME/usr/local/etc/rc.d
cp /usr/local/etc/rc.d/main /usr/local/jails/containers/$JAIL_NAME/usr/local/etc/rc.d/main
jexec $JAIL_NAME chmod +x /usr/local/etc/rc.d/main
jexec $JAIL_NAME sysrc main_enable=YES
jexec $JAIL_NAME service main start
Set up log rotation inside the jail so the logs don't fill up the disk, and do the initial rotation.
jexec $JAIL_NAME sh -c "echo '/var/log/main.log root:wheel 644 5 100 * Z /var/run/main.pid' >> /etc/newsyslog.conf.d/main.conf"
jexec $JAIL_NAME newsyslog -vF
Confirm the service is running.
jexec $JAIL_NAME service main status
curl 172.16.0.2:8080 # Use the IP address of the jail.
Add a 'service', or similar, group to the system if it doesn't already exist. This group should have permissions to write to the pid and log files. Make sure to use the same group in the next step when we create a user.
pw groupadd service
chown root:service /var/run
chown root:service /var/log
chmod 770 /var/run
chmod 770 /var/log
Add a user and assign permissions. Make sure to add the user without login capabilities and assign to the 'service' group.
pw useradd caddy -d /nonexistent -s /sbin/nologin -c "Caddy Service Account" -g service
Note: We're running Caddy behind a Cloudflare Tunnel on port 8080. If you're not, and you're using a port below 1024, you'll need to set up security/portacl-rc and configure it for the 'caddy' user. This will allow the caddy user to bind to ports below 1024.
pkg install security/portacl-rc
sysrc portacl_users+=caddy
sysrc portacl_user_caddy_tcp="http https"
sysrc portacl_user_caddy_udp="https"
service portacl enable
service portacl start
Install Caddy.
cd /usr/ports/www/caddy
make install clean
Change the ownership of the caddy binary and required files to the caddy user.
chown caddy:service /usr/local/bin/caddy
chmod 740 /usr/local/bin/caddy
chown -R caddy:service /var/log/caddy
chown -R caddy:service /usr/local/etc/caddy
chown -R caddy:service /var/db/caddy
Set up log rotation for Caddy so the logs don't fill up the disk.
echo '/var/log/caddy.log root:wheel 644 5 100 * Z /var/run/caddy.pid' >> /etc/newsyslog.conf.d/caddy.conf
newsyslog -vF
Add the caddy service to the system startup and make sure it runs as the caddy user.
sysrc -f /etc/rc.conf caddy_enable="YES"
sysrc -f /etc/rc.conf caddy_user="caddy"
sysrc -f /etc/rc.conf caddy_group="service"
Caddy reads the configuration file at '/usr/local/etc/caddy/Caddyfile'.
Inside the jail, '/up' returns '200 OK' when healthy.
Caddy actively polls the specified health‑check endpoint (configured with the health_uri directive inside reverse_proxy), routing traffic exclusively to backends that return a successful health check.
Important: We're only disabling automatic HTTPS because we're running behind a Cloudflare Tunnel. If that's not the case, you should enable automatic HTTPS by removing the 'auto_https off' line.
# /usr/local/etc/caddy/Caddyfile
{
auto_https off # Note: Disable automatic HTTPS since we're running behind a Cloudflare Tunnel.
}
:8080 {
# Matcher and reverse proxy for serviceA.null.live.
@serviceA host serviceA.null.live # Change the hostname to your actual hostname.
reverse_proxy @serviceA 172.16.0.2:8080 {
health_uri /up
health_interval 10s
health_timeout 5s
}
# Matcher and reverse proxy for serviceB.null.live.
@serviceB host serviceB.null.live # Change the hostname to your actual hostname.
reverse_proxy @serviceB 172.16.0.3:8080 {
health_uri /up
health_interval 10s
health_timeout 5s
}
}
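Before relying on it, you can optionally validate the Caddyfile and then start Caddy ('caddy validate' checks the configuration without serving traffic, auto-detecting the Caddyfile adapter from the filename):
caddy validate --config /usr/local/etc/caddy/Caddyfile
service caddy start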
Create a config file for the new jail at '/etc/jail.conf.d/$JAIL_NAME.conf'.
Make sure to set the ip4.addr value to the next available IP address, which you can find with:
ifconfig lo1 | grep 'inet ' | awk '{print $2}'
mygitrepo_gitSHA {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${name}.log";
# PERMISSIONS
allow.raw_sockets;
exec.clean;
mount.devfs;
# HOSTNAME/PATH
host.hostname = "${name}";
path = "/usr/local/jails/containers/${name}";
# NETWORK. We're using the lo1 loopback interface that we created for jails to use.
interface = lo1;
ip4.addr = 172.16.0.3; # Use the ip address we found in the previous step.
}
Create a new jail. We name our jails using the format 'mygitrepo_gitSHA', after the repo of the application being deployed and the git commit SHA. This makes it easy to track which version of the application is running in each jail. The last line polls the service's health endpoint to confirm the new jail is up and serving.
git ls-remote https://github.com/yourusername/mygitrepo.git | head
export FREEBSD_VERSION=$(freebsd-version | cut -d- -f1-2)
export JAIL_NAME=mygitrepo_gitSHA
export SERVICE_NAME=conradresearchcom # Note: '-' are not allowed in service names.
zfs clone zroot/jails/templates/$FREEBSD_VERSION@base zroot/jails/containers/$JAIL_NAME
# Copy the binary of the application to the jail. We'll use our 'main' demo app from previous steps, built under the service name this time.
go build -o $SERVICE_NAME main.go
mkdir -p /usr/local/jails/containers/$JAIL_NAME/usr/local/bin
cp $SERVICE_NAME /usr/local/jails/containers/$JAIL_NAME/usr/local/bin/$SERVICE_NAME
# Copy the rc.d script to the jail (the 'main' script from earlier, renamed and with its name/rcvar set to $SERVICE_NAME).
mkdir -p /usr/local/jails/containers/$JAIL_NAME/usr/local/etc/rc.d
cp /usr/local/etc/rc.d/$SERVICE_NAME /usr/local/jails/containers/$JAIL_NAME/usr/local/etc/rc.d/$SERVICE_NAME
# Start the jail.
service jail start $JAIL_NAME
jexec $JAIL_NAME chmod +x /usr/local/etc/rc.d/$SERVICE_NAME
jexec $JAIL_NAME sysrc ${SERVICE_NAME}_enable=YES
jexec $JAIL_NAME service $SERVICE_NAME start
while ! curl -sf -o /dev/null http://172.16.0.3:8080/up; do sleep 1; done # Wait until /up returns a success status.
Using your favorite text editor, update the Caddy configuration at '/usr/local/etc/caddy/Caddyfile' so the relevant reverse_proxy entry points to the new jail's IP address. Then run the following command to reload Caddy:
service caddy reload
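Rolling back is the same edit in reverse: point the Caddyfile back at the previous jail's IP address and reload Caddy again. Once you're confident in the new release, the previous jail can be retired (a sketch, with 'mygitrepo_oldSHA' standing in for the previous jail's name):
service jail stop mygitrepo_oldSHA
rm /etc/jail.conf.d/mygitrepo_oldSHA.conf
zfs destroy zroot/jails/containers/mygitrepo_oldSHA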
By combining ZFS snapshots, FreeBSD jails, and a Caddy reverse‑proxy, you get instant roll‑backs, zero‑downtime upgrades, and a clean, reproducible environment for every release.
Give it a try, tweak the scripts for your own stack, and enjoy the peace of mind that comes with immutable infrastructure.
Cheers 🥂