https://neosmart.net/blog/zfs-on-linux-quickstart-cheat-shee...
I've just rebuilt my little home server (mostly for samba, plus a little bit of docker for kids to play with). It has a hardware raid1 enclosure, with 2TB formatted as ext4, and the really important stuff is sent to the cloud every night. Should I honestly bother learning zfs...? I see it popping up more and more but I just can't see the benefits for occasional use.
It's a bit sad that this Lenovo ThinkCentre ain't using ECC. I use and know ZFS is good but I'd prefer to run it on a machine supporting ECC.
I never tried FreeBSD, but I'm reading more and more about it, and it looks like although FreeBSD has always had its regular users, there are now quite a few people curious about trying it out, for a variety of reasons. The possibility of having ZFS by default and a hypervisor without systemd is a big one for me (I run Proxmox so I'm halfway there, but bhyve looks like it'd allow me to be completely systemd-free).
I'm running systemd-free VMs and systemd-free containers (long live non-systemd PID 1s), so bhyve looks like it could be the final piece of the puzzle to be free of Microsoft/Poettering's systemd.
* https://klarasystems.com/articles/managing-boot-environments...
* https://wiki.freebsd.org/BootEnvironments
* https://man.freebsd.org/cgi/man.cgi?query=bectl
* https://dan.langille.org/category/open-source/freebsd/bectl/
* https://vermaden.wordpress.com/2022/03/14/zfs-boot-environme...
It lets you patch/upgrade an isolated environment without touching the running bits, reboot into that environment, and if things aren't working well boot back into the last known-good one.
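If you haven't used them, the whole cycle is only a handful of commands. A rough sketch (the BE name, mountpoint, and the pkg -r upgrade step are just illustrative; the exact upgrade command depends on how you manage the system):

# bectl create 14.3-p2           # clone the current root into a new boot environment
# bectl mount 14.3-p2 /mnt       # mount it so it can be patched in isolation
# pkg -r /mnt upgrade            # illustrative: apply updates inside the mounted BE
# bectl unmount 14.3-p2
# bectl activate 14.3-p2         # boot into it on the next reboot
# bectl list                     # N/R flags show the BE active now / on next reboot

If the new environment misbehaves, bectl activate the previous one and reboot.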
I've lost work and personal data to bit rot in NAS filesystems before. Archived VM images wouldn't boot anymore after months in storage. Multiple vacation photos became colorful static part way through on disk due to a bit flip in the middle of the JPEG stream. I've had zero issues since switching to ZFS (even without ECC.)
Another huge benefit of ZFS is its copy-on-write (CoW) snapshots, which saved me many times as an IT administrator. It was effortless to restore files when users accidentally deleted them, and recovering from a cryptolocker-type attack is also instant. Without CoW, snapshots are possible, but they're expensive and slow. I saw a 20-user office try to use snapshots on their 30TB Windows Server NAS, hoping to avoid having to revert to tape backups to recover the occasional accidentally deleted file. While hourly snapshots would have been ideal, the NAS only had room for two snapshots, and it would crawl to a halt while creating them. But ZFS's performance won't suffer if you snapshot every minute.
When it's time to back up, ZFS' send/recv capability means you only ever move the differences, and they're pre-computed, so you don't have to re-index an entire volume to determine that you only need to move 124KB; small transfers are lightning fast. Once the backup completes, you have verified that the snapshot on both sides is bit-for-bit identical. While this is the essential property of a backup, most filesystems cannot guarantee it.
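In practice that's one command per run; a rough sketch (pool, dataset, and host names here are made up):

# zfs snapshot tank/data@sun
# zfs send tank/data@sun | ssh backupbox zfs recv backup/data          # initial full copy
# zfs snapshot tank/data@mon
# zfs send -i @sun tank/data@mon | ssh backupbox zfs recv backup/data  # only the delta moves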
ZFS has become a hard requirement for any storage system I build/buy.
I'd argue that it's better for minimizing sysadmin work than the alternatives. Running a scrub, replacing a disk, taking a snapshot, restoring a snapshot, sending a snapshot somewhere (read: trivial incremental backups), etc. are all one command, and it's easy to work with.
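For reference, the operations listed above really are single commands each; the names below are hypothetical:

# zpool scrub tank                        # verify every block against its checksum
# zpool replace tank ada1p1 ada3p1        # swap a failed disk for a new one
# zfs snapshot tank/home@before-change    # take a snapshot
# zfs rollback tank/home@before-change    # restore it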
> I've just rebuilt my little home server (mostly for samba, plus a little bit of docker for kids to play with). It has a hardware raid1 enclosure, with 2TB formatted as ext4, and the really important stuff is sent to the cloud every night. Should I honestly bother learning zfs...? I see it popping up more and more but I just can't see the benefits for occasional use.
The reason I personally would prefer it in that situation is that I don't really trust the layers under the filesystem to protect data from corruption or even to notice when it's corrupted. If you're sufficiently confident that your hardware RAID1 will always store data correctly and never mess it up, then it's close enough. (I wouldn't trust it, but that's me.) At that point, the only benefit I see to ZFS would be snapshots; an incremental `zfs send` is more efficient than however else you're syncing to the cloud.
Snapshots on ZFS are extremely cheap, since it works on the block level, so snapshots every hour or even 15 minutes are now doable if you so wish. Combine with weekly or monthly snapshots that can be replicated off-site, and you have a pretty robust storage system.
This is all home sysadmin stuff to be sure, but even if you just use it as a plain filesystem, the checksum integrity guarantees are worth the price of admission IMO.
FWIW, software RAID like ZFS mirrors or mdadm is often superior to hardware RAID, especially for home use. If your RAID controller goes blooey, which does happen, then unless you have the exact same controller to replace it, you run the risk of not being able to mount your drives. Even very basic computers are fast enough to saturate the drives in software these days.
Backups using zfs snapshots are pretty nice; you can pretty easily do incremental updates. zfs scrub is great to have. FreeBSD UFS also has snapshots, but doesn't have a mechanism to check data integrity: fsck checks for well-formed metadata only. I don't think ext4 has snapshots or data integrity checking, but I haven't looked at it much.
There are articles and people claiming you need ECC to run zfs or that you need an unreasonable amount of memory. ECC is nice to have, but running ZFS without ECC isn't worse than running any other filesystem without ECC; and you only really need a large amount of ram if you run with deduplication enabled, but very few use cases benefit from deduplication, so the better advice is to ensure you don't enable dedup. I wouldn't necessarily run zfs on something with actually small memory like a router, but then those usually have a specialized flash filesystem and limited writes anyway.
So: "I copied the data and didn't really look at it much." and it ended up being corrupt,
is different from: "I promise I proved this is solid with math and logic." and it ended up being corrupt, complete with valid checksum that "proves" it's not corrupt.
A zfs scrub will actually destroy good data thanks to untrustworthy ram.
https://tadeubento.com/2024/aarons-zfs-guide-appendix-why-yo... "So roughly, from what Google was seeing in their datacenters, 5 bit errors in 8 GB of RAM per hour in 8% of their installed RAM."
It's not true to say that "Well all filesystem code has to rely on ram so it's all the same."
A lot of people parrot this, but you can always just check for yourself. The in-memory size of the dedup tables scales with total writes to datasets with deduplication enabled, so for lots of use cases it makes sense to enable it only for smaller datasets where you know it'll be of use. I use it to deduplicate fediverse media storage for several instances (and have for years) and it doesn't come at a noticeable RAM cost.
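For anyone who wants to check on their own pool: dedup can be set per dataset, and the dedup table statistics are visible at the pool level (the dataset name below is made up):

# zfs set dedup=on tank/fedi-media    # enable dedup only where it pays off
# zpool status -D tank                # the DDT summary shows entry counts and in-core size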
Solaris 11 made boot environments a mandatory part of the OS, which was an obvious choice with the transition from UFS to ZFS for the root fs. This came into Solaris development a bit before Solaris 11, so it was present in OpenSolaris and lives on in many forms of illumos.
* https://man.freebsd.org/cgi/man.cgi?query=bectl#end
> beadm(1M) originally appeared in Solaris.
* https://man.freebsd.org/cgi/man.cgi?query=beadm#end
Solaris Live Upgrade BEs worked with (mirrored) UFS root:
* https://docs.oracle.com/cd/E18752_01/html/821-1910/chapter-5...
* https://www.filibeto.org/sun/lib/solaris8-docs/_solaris8_2_0...
It also allows migration from a UFS root to a ZFS root:
* https://docs.oracle.com/cd/E23823_01/html/E23801/ggavn.html
It's definitely worth the hassle. But if everything works fine for you now, don't bother; ZFS is not going away and you can learn it later.
It happens by default with freebsd-update (I hope the new pkg replacement still does it too)
Yes. Also: what hassle? It's in many ways simpler than the alternatives.
I'll take ZFS without ECC over hardware RAID with ECC any day.
"RAID" site:news.ycombinator.com
This has its own problems: https://vermaden.wordpress.com/2025/11/25/zfs-boot-environme...
This works regardless of whether you have RAM errors or not.
I will say that the reported error rate of 5 bit errors per 8 GB per hour in 8% of installed RAM seems incredibly high compared to my experience running a fleet of about one to three thousand machines with 64-768 GB of ECC RAM each. Based on that rate, assuming a thousand machines with 64 GB of RAM each, we should have been seeing about 3,000 bit errors per hour; but ECC reports were rare. Most machines went through their 3-5 year life without reporting any correctable errors. Of the small handful of machines that did have errors, most went from no errors to a concerning number in a short time and were shut down to have their RAM replaced; a few threw uncorrectable errors, and most of those threw a second uncorrectable shortly thereafter and had their RAM replaced; there were one or two that would log about one correctable error per day, and we let those run. There were one, maybe two, that were having so many correctable errors that the machine check exceptions caused operational problems, which didn't make sense until the hourly ECC report came up with a huge number.
The real tricky one without ECC is that one bit error a day case... that's likely to corrupt data silently, without any other symptoms. If you have a lot of bit errors, chances are the computer will operate poorly; you'll probably end up with some corrupt data, but you'll also have a lot of crashing and hopefully run a memtest and figure it out.
Nice use case. What kind of overhead and what kind of benefits do you see?
NixOS and Guix use a concept called 'system generations' to do the same without the support of the filesystem. LibOSTree can do the same and is called 'atomic rollback'.
Talking about NixOS, does anybody know of a similar concept in the BSD world (preferably FreeBSD)?
Well, there's https://github.com/nixos-bsd/nixbsd :)
I have an idea to set up a home NAS on FreeBSD.
For this purpose, I bought a Lenovo ThinkCentre M720s SFF – it’s quiet, compact, and offers the possibility to install 2 SATA III SSDs plus a separate M.2 slot for an NVMe SSD.
What is planned:
While waiting for the drives to arrive, let’s test how it all works on a virtual machine.
We will be installing FreeBSD 14.3, even though version 15 is already out – it has some interesting changes that I'll play with separately.
Of course, I could have gone with TrueNAS, which is based on FreeBSD – but I want “vanilla” FreeBSD to do everything manually.
We will perform the installation over SSH using bsdinstall – boot the system in LiveCD mode, enable SSH, and then proceed with the installation from a workstation laptop.
The virtual machine has three disks – mirroring the future ThinkCentre setup:
Select Live System:
Log in as root:
Bring up the network:
# ifconfig em0 up
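If the Live environment doesn't pick up an address on its own, DHCP can be requested manually (an assumption – the original setup may have used a static address instead):

# dhclient em0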
For SSH, we need to set a root password and make changes to /etc/ssh/sshd_config, but currently, this doesn’t work because the system is mounted as read-only:
Check the current partitions:
And apply a “dirty hack”:
- mount a tmpfs file system in RAM at /mnt
- copy /etc from the LiveCD there
- mount tmpfs over /etc (overlaying the read-only directory from the ISO)
- copy the contents of /mnt back into the new /etc

Execute:
# mount -t tmpfs tmpfs /mnt
The mount syntax for tmpfs is mount -t <fstype> <source> <mountpoint>. Since the source value is required, we specify tmpfs again.
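The remaining steps from the list above would then look roughly like this (a sketch; the exact commands may differ from the original post):

# cp -a /etc/ /mnt/             # trailing slash: copy the contents of the read-only /etc into the tmpfs
# mount -t tmpfs tmpfs /etc     # overlay a writable tmpfs on top of /etc
# cp -a /mnt/ /etc/             # copy the saved files back into the new, writable /etc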
Now, set the password with passwd and start sshd using onestart:
# passwd
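Since sshd is not enabled in rc.conf in the Live environment, onestart is what forces it to run – presumably something like:

# service sshd onestart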
However, SSH will still deny access because root login is disabled by default:
$ ssh [email protected]
([email protected]) Password for root@:
([email protected]) Password for root@:
([email protected]) Password for root@:
Set PermitRootLogin yes in /etc/ssh/sshd_config and restart sshd:
# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Now we can log in:
$ ssh [email protected]
([email protected]) Password for root@:
Last login: Sun Dec  7 12:19:25 2025
FreeBSD 14.3-RELEASE (GENERIC) releng/14.3-n271432-8c9ce319fef7
Welcome to FreeBSD! ...
root@:~ #
bsdinstall

Run bsdinstall:
# bsdinstall
Select the components to add to the system – ports is necessary, src is optional but definitely worth it for a real NAS:
We’ll do a minimal disk partition, so select Manual:
We will install the system on ada0, select it, and click Create:
Next, choose a partition scheme. It’s standard for 2025 – GPT:
Confirm the changes, and now we have a new partition table on the system drive ada0:
freebsd-boot Partition

Now we need to create the partitions themselves.
Select ada0 again, click Create, and create a partition for freebsd-boot.
This is just for the virtual machine; on the actual ThinkCentre, we would use type efi with a size of about 200-500 MB.
For now, set the type to freebsd-boot with a size of 512K.

Confirm and proceed to the next partition.
freebsd-swap Partition

Click Create again to add Swap.
Given that on the ThinkCentre we will have:
2 gigabytes will be enough.
Set the type to freebsd-swap with a size of 2 GB.

freebsd-ufs Partition

The main system will be on UFS because it is very stable, doesn't require much RAM, mounts quickly, is easy to recover, and lacks complex caching mechanisms. (UPD: however, after getting to know ZFS and its capabilities better, I decided to use it for the system disk as well.)
Set the type to freebsd-ufs and give it the remaining space (14 GB here).

We'll configure the rest of the disks later; for now, select Finish and Commit:
Wait for the copying to complete:
Configure the network:
Select Timezone:
In System Configuration – select sshd, no mouse, enable ntpd and powerd:
System Hardening – even though this will just be a home NAS, I might open up external access (even if behind a firewall), so it makes sense to tune the security a bit:
read_msgbuf: allow dmesg access for root only
proc_debug: allow ptrace for root only
random_pid: randomize PID numbers
clear_tmp: clear /tmp on reboot
secure_console: require root password for login from the physical console

Add a user:
Everything is ready – reboot the machine:
Log in as the regular user:
$ ssh [email protected]
...
FreeBSD 14.3-RELEASE (GENERIC) releng/14.3-n271432-8c9ce319fef7

Welcome to FreeBSD!
...
setevoy@test-nas-1:~ $
Install vim 🙂
# pkg install vim
Check our disks.
We use geom disk list for physical device info and gpart show to see the partitions on the disks.
Check disks – there are three:
root@test-nas-1:/home/setevoy # geom disk list
Geom name: ada0
Providers:

Geom name: ada1
Providers:

Geom name: ada2
Providers:
And with gpart – current ada0 where the system was installed:
root@test-nas-1:/home/setevoy # gpart show
=>       40  33554352  ada0  GPT  (16G)
         40      1024     1  freebsd-boot  (512K)
       1064   4194304     2  freebsd-swap  (2.0G)
    4195368  29359024     3  freebsd-ufs  (14G)
Disks ada1 and ada2 will be used for ZFS and its mirror (RAID1).
If there was anything on them – wipe it:
root@test-nas-1:/home/setevoy # gpart destroy -F ada1
gpart: arg0 'ada1': Invalid argument
root@test-nas-1:/home/setevoy # gpart destroy -F ada2
gpart: arg0 'ada2': Invalid argument
Since this is a VM and the disks are empty, “Invalid argument” is expected and fine.
Create GPT partition tables on ada1 and ada2:
root@test-nas-1:/home/setevoy # gpart create -s gpt ada1
ada1 created
root@test-nas-1:/home/setevoy # gpart create -s gpt ada2
ada2 created
Check:
root@test-nas-1:/home/setevoy # gpart show ada1
=>      40  33554352  ada1  GPT  (16G)
        40  33554352        - free -  (16G)
Create partitions for ZFS:
root@test-nas-1:/home/setevoy # gpart add -t freebsd-zfs ada1
ada1p1 added
root@test-nas-1:/home/setevoy # gpart add -t freebsd-zfs ada2
ada2p1 added
Check again:
root@test-nas-1:/home/setevoy # gpart show ada1
=>      40  33554352  ada1  GPT  (16G)
        40  33554352     1  freebsd-zfs  (16G)
zpool

The “magic” of ZFS is that everything works “out of the box” – you don’t need a separate LVM and its groups, and you don’t need mdadm for RAID.
For managing disks in ZFS, the main utility is zpool, and for managing data (datasets, file systems, snapshots), it’s zfs.
To combine one or more disks into a single logical storage, ZFS uses a pool – the equivalent of a volume group in Linux LVM.
Create the pool:
root@test-nas-1:/home/setevoy # zpool create tank mirror ada1p1 ada2p1
Here, tank is the pool name, mirror specifies that it will be RAID1, and we provide the list of partitions included in this pool.
Check:
root@test-nas-1:/home/setevoy # zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p1  ONLINE       0     0     0
            ada2p1  ONLINE       0     0     0

errors: No known data errors
ZFS immediately mounts this pool at /tank:
root@test-nas-1:/home/setevoy # mount
/dev/ada0p3 on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
tank on /tank (zfs, local, nfsv4acls)
Check partitions now:
root@test-nas-1:/home/setevoy # gpart show
=>       40  33554352  ada0  GPT  (16G)
         40      1024     1  freebsd-boot  (512K)
       1064   4194304     2  freebsd-swap  (2.0G)
    4195368  29359024     3  freebsd-ufs  (14G)

=>       40  33554352  ada1  GPT  (16G)
         40  33554352     1  freebsd-zfs  (16G)

=>       40  33554352  ada2  GPT  (16G)
         40  33554352     1  freebsd-zfs  (16G)
If we want to change the mountpoint – execute zfs set mountpoint:
root@test-nas-1:/home/setevoy # zfs set mountpoint=/data tank
And it immediately mounts to the new directory:
root@test-nas-1:/home/setevoy # mount
/dev/ada0p3 on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
tank on /data (zfs, local, nfsv4acls)
Enable data compression – useful for a NAS, see Compression and Compressing ZFS File Systems.
lz4 is the current default option; let's enable it explicitly:
root@test-nas-1:/home/setevoy # zfs set compression=lz4 tank
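Later, the actual savings can be checked via the compressratio property:

# zfs get compression,compressratio tank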
Since we installed the system on UFS, we need to add a few parameters to autostart for ZFS to work.
Configure the boot loader in /boot/loader.conf to load kernel modules:
zfs_load="YES"
Or, to avoid manual editing, use sysrc with the -f flag:
root@test-nas-1:/home/setevoy # sysrc -f /boot/loader.conf zfs_load="YES"
And add to /etc/rc.conf to start the zfsd daemon and mount the file systems:
root@test-nas-1:/home/setevoy # sysrc zfs_enable="YES" zfs_enable: NO -> YES
Reboot and check:
root@test-nas-1:/home/setevoy # zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p1  ONLINE       0     0     0
            ada2p1  ONLINE       0     0     0
Everything is in place.
Now you can proceed with further tuning – configuring separate datasets, snapshots, etc.
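As a minimal sketch of that (the dataset names are made up):

# zfs create -o compression=zstd tank/backups    # datasets can override pool-level properties
# zfs create tank/media
# zfs snapshot tank/media@initial                # instant, nearly free snapshot
# zfs list -t snapshot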
For a Web UI, you could try Seafile or FileBrowser.
