rEFInd is _so_ much simpler: one efi entry, one text config file in the efi partition, nothing that needs to change when the kernel updates, and no massive pile of templating and moving parts to mysteriously break dumping you at an impenetrable grub “rescue” shell.
… you don't have to update the UEFI entries every time the kernel updates. (I guess you might if you use a kernel with CONFIG_EFI_STUB and place the new kernel under a different filename than what the UEFI boot entry points to … but I was under the impression that that's a fairly unusual setup, and I thought most of us booting with EFI were doing so through GRUB.)
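For reference, registering an EFI-stub kernel directly as a UEFI boot entry is a one-liner with efibootmgr. This is a sketch: the disk, partition, kernel filename, and kernel command line here are all assumptions you'd adjust for your own system.

```shell
# Sketch: register an EFI-stub kernel as a UEFI boot entry.
# Assumes /dev/nvme0n1p1 is the ESP and the kernel/initrd live in its root;
# adjust --disk/--part, the loader path, and root= for your machine.
efibootmgr --create \
  --disk /dev/nvme0n1 --part 1 \
  --label "Linux (EFI stub)" \
  --loader '\vmlinuz.efi' \
  --unicode 'root=/dev/nvme0n1p2 rw initrd=\initrd.img'
```

With a fixed filename like this, a kernel update only needs to overwrite the file on the ESP; the boot entry itself never changes.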
I have recently upgraded my house to 10Gbps Ethernet, with only one room still stuck at gigabit, and unfortunately, it's my main office. I'm working on getting the drop there now (literally, just taking a break here).
Even once I'm done, accessing an iSCSI drive over 10GbE will be 4-8 times slower than a local NVMe drive, but it will sure be a lot better than it was!
Ideally, I could run VMs on the NAS and have great performance, but that's another hardware upgrade...
Does anyone have an opinion on iSCSI vs NBD?
Looks like ZFS is only used to store the image on the server, though. I was expecting this to be more interesting because of that.
SFP28 might be cheap enough now too, I'm not sure...
to make this actually work well, consider modifying your switches' QoS settings to carve out a priority VLAN for iSCSI traffic
Hmmh? I haven't done so in years, but configuring multi-boot used to be considerably easier than disk-less operation.
I have been waiting for such a feature for like 15 years now. Without it, ZFS is just a fad and a useless filesystem (all that complexity for NOTHING).
ext2 for the win! still
NVMe-oF is the protocol with the least overhead for network drives; with a proper setup you lose only 10-20% latency compared to a local disk, even with Intel Optane. Throughput should be nearly identical.
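For context, attaching an NVMe-oF target over TCP with the standard nvme-cli tooling looks roughly like this. The address and NQN are placeholders, not anything from this thread.

```shell
# Discover what the target exports (placeholder address, default NVMe/TCP port).
nvme discover -t tcp -a 192.168.50.10 -s 4420

# Connect to a specific subsystem by its NQN (placeholder name).
nvme connect -t tcp -a 192.168.50.10 -s 4420 \
     -n nqn.2024-01.example.com:nvme-target

# The remote namespace then shows up as a local /dev/nvmeXnY block device.
nvme list
```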
https://forums.gentoo.org/viewtopic.php?p=4895771&sid=f9b7ac...
https://github.com/NetworkBlockDevice/nbd/issues/93
Whether that’s the case with the latest version, I don’t know, but it’s something you might test if you choose to try it.
Wouldn't that need a local disk?
You can install a prettier-looking boot selection menu like rEFInd, but the default works just as well, and I think the mainstream distros all set up secure boot too. On my PC it was very easy; on my (8-year-old) laptop I had to add some secure boot keys, and the BIOS was very confusing, using terms that didn't seem to match what they should have been.
My setup has worked almost entirely flawlessly and survived updates from both OSes. Only issue being “larger” windows feature updates putting windows back as the first OS in the list, but that happens maybe once or twice a year? And it’s a quick bios change to fix the order.
--
0: https://klarasystems.com/articles/troubleshooting-zfs-common-issues-how-to-fix-them/

The Linux NTFS resizing code also has a tendency to trigger data corruption. Not really Linux's fault, but it's a good reason to do partitioning from inside Windows, which can be a pain already.
Another issue I've run into is Windows creating a very small (~300MiB) EFI partition that barely fits the Windows bootloader, let alone a Linux bootloader and kernel. You can resize and recreate the partition of course, but reconfiguring Windows to use a different boot partition is a special kind of hell I try to avoid.
Then anaconda or whatever OS installer picks up and installs the OS in a PXE install sequence when there is a local disk.
... And it's very, very fun.
nbdinfo nbd://server
nbdcopy nbd://server:2001/ nbd+unix:///?socket=/tmp/localsock
https://github.com/NetworkBlockDevice/nbd/blob/master/doc/ur...

The caveat was, you needed a read-only root, so that meant freezing the OS; anything that needed changing was either stored in a RAM disk (that you need to set up) or a per-host NFS area (kind of like overlayfs, but not).
If Linux corrupts someone's files, it is 100% Linux's fault and absolutely unacceptable.
There are some exceptions (some hardware from Microsoft doesn't trust the third party certificate used, for instance, and Red Hat Enterprise has their own root of trust if you opt into that), but they're very rarely ever an issue.
How well does it work in environments with noticeable network latency?
If you needed to update the root dir, you chrooted into it and did the (yum) update.

Qwen3.6 and Gemma4 on my gaming PC. llama.cpp on Windows is tedious to compile, and I have littered my Windows installation with too many toolchains already: Python venvs, MinGW, CUDA, UCRT64 & WSL, to name a few. Windows still does not feel developer-friendly to me. I think I'm OK with it being a frontend for Steam's Big Picture mode.

Installing Debian on a network drive will indeed be noticeably slower than a native install. Since I'm going to use some portion of my local NVMe drive to store & load the models, I didn't really care about OS performance, as I have enough RAM to run everything smoothly once the OS has booted up. I won't be using this for browsing stuff with Firefox.
A single Debian 13 based server is used for Netboot.xyz, tftpd, iSCSI Target & ZFS ZVol. My Proxmox install works perfectly fine for this. I used my Asus Router with the Merlin firmware for DNSMasq.
The post is broken down into the following sections:
I’m using my Proxmox host to export my iSCSI targets. Install the required packages.
apt install apache2 git ansible tftpd-hpa targetcli-fb
Clone & compile netboot. One can use netboot directly without compiling, but then it downloads all the assets at runtime, which, although handy, is not something that I would recommend.
cd /opt
git clone https://github.com/netbootxyz/netboot.xyz.git
cd netboot.xyz
We edit a few config files to tailor our netboot install. Edit /opt/netboot.xyz/user_overrides.yml with the below contents:
generate_menus: true
generate_disks: true
generate_checksums: true
generate_local_vars: false
make_num_jobs: 1
site_name: 192.168.50.167
boot_domain: 192.168.50.167
Ensure site_name & boot_domain point to the netboot host. It is the same as the Proxmox host in my case.
Now we fix up some netboot templates so we can boot our installer & iSCSI.
Edit /opt/netboot.xyz/roles/netbootxyz/templates/menu/boot.cfg.j2 — find the :end section and change it to:
:end
chain local-vars.ipxe ||
exit
Edit /opt/netboot.xyz/roles/netbootxyz/templates/local-vars.ipxe.j2 and change it to:
#!ipxe
set custom_url http://192.168.50.167
Use Ansible to install netbootxyz to /var/www/html. This can take a while…
ansible-playbook -i inventory site.yml
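As a quick sanity check that the playbook actually published everything, you can poke at the web root. The exact layout here is an assumption based on the paths used later in this post.

```shell
# The generated iPXE menu should now be served by apache2 ...
curl -sI http://192.168.50.167/menu.ipxe | head -n1

# ... and the compiled iPXE binaries should sit under /var/www/html/ipxe.
ls /var/www/html/ipxe/ | grep netboot.xyz
```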
Now we need to add a custom menu to boot from our disks. If the disk does not have an OS, it will start the Debian installer. If you want to install the OS on multiple machines, feel free to create different ipxe files for the installer & the boot disks. Create /var/www/html/debian13-iscsi.ipxe with the contents below, making sure the IP addresses & IQNs are correct.
#!ipxe
set iscsi-server 192.168.50.167
set iscsi-target iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k
set initiator-iqn iqn.2026-05.xyz.716697.pve-vt:12700k
set username myuser
set password mypassword
set reverse-username targetuser
set reverse-password targetpassword
sanboot iscsi:${iscsi-server}::::${iscsi-target} || goto installer
:installer
imgfree
kernel http://${iscsi-server}/assets/debian13/linux
initrd http://${iscsi-server}/assets/debian13/initrd.gz
imgargs linux root=/dev/ram0 initrd=initrd.gz vga=normal
boot
Create the custom netboot.xyz entry. Create a new file /var/www/html/custom.ipxe
#!ipxe
menu Local Custom Menu
item --gap Local iSCSI Installs:
item debian13-iscsi Debian 13 iSCSI Boot (192.168.50.167)
item --gap --
item back Back to main menu
choose menu || goto back
goto ${menu}
:debian13-iscsi
chain http://192.168.50.167/debian13-iscsi.ipxe ||
goto back
:back
chain http://192.168.50.167/menu.ipxe
Download the Debian initrd installer.
mkdir -p /var/www/html/assets/debian13
cd /var/www/html/assets/debian13
wget http://ftp.debian.org/debian/dists/trixie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
wget http://ftp.debian.org/debian/dists/trixie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
In case you want the fancy GTK/GUI installer, use the images below instead. This might be an issue if you are using an exotic GPU.
mkdir -p /var/www/html/assets/debian13-gtk
cd /var/www/html/assets/debian13-gtk
wget http://ftp.debian.org/debian/dists/trixie/main/installer-amd64/current/images/netboot/gtk/debian-installer/amd64/initrd.gz
wget http://ftp.debian.org/debian/dists/trixie/main/installer-amd64/current/images/netboot/gtk/debian-installer/amd64/linux
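It's worth verifying the downloaded images; Debian publishes a SHA256SUMS file alongside the installer images. The exact path of that file is my assumption here, so check your mirror if the URL 404s.

```shell
cd /var/www/html/assets/debian13

# SHA256SUMS lists paths like ./netboot/debian-installer/amd64/linux,
# so pull out the two entries for the files we fetched ...
wget -qO- http://ftp.debian.org/debian/dists/trixie/main/installer-amd64/current/images/SHA256SUMS \
  | grep 'netboot/debian-installer/amd64/\(linux\|initrd\.gz\)$'

# ... and compare against the local copies by eye.
sha256sum linux initrd.gz
```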
Configure tftpd-hpa in /etc/default/tftpd-hpa:
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
Copy the netboot.xyz binaries we compiled into /srv/tftp/ipxe.
mkdir -p /srv/tftp/ipxe
cp /var/www/html/ipxe/netboot.xyz-undionly.kpxe /srv/tftp/ipxe/
cp /var/www/html/ipxe/netboot.xyz-snp.efi /srv/tftp/ipxe/
cp /var/www/html/ipxe/netboot.xyz.efi /srv/tftp/ipxe/
chown -R tftp:tftp /srv/tftp/ipxe
service tftpd-hpa restart
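Before touching DHCP, you can confirm the TFTP server actually hands out the binaries; curl speaks TFTP, so from any other host on the LAN:

```shell
# Fetch one of the iPXE binaries over TFTP and check it arrived non-empty.
curl -s -o /tmp/netboot-test.efi tftp://192.168.50.167/ipxe/netboot.xyz.efi
ls -l /tmp/netboot-test.efi
```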
Configure DNSMasq on your default router / DHCP server to redirect to the TFTP Server. I have an Asus router with the Merlin firmware that uses dnsmasq. Custom config goes in: /jffs/configs/dnsmasq.conf.add. Make sure IP is the same as the TFTPD host.
The different sections below are necessary to support both PXE & iPXE. I realized I needed them because my VM supported iPXE but my 12700k did not.
aniket@RT-AX86U-D290:/tmp/home/root# cat /jffs/configs/dnsmasq.conf.add
# BIOS Clients
dhcp-boot=tag:!ipxe,ipxe/netboot.xyz-undionly.kpxe,,192.168.50.167
# UEFI x86-64 clients
dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-boot=tag:efi-x86_64,ipxe/netboot.xyz-snp.efi,,192.168.50.167
# Tag iPXE clients (option 175 present)
dhcp-match=set:ipxe,175
# All other iPXE clients get the netboot.xyz menu
dhcp-boot=tag:ipxe,http://192.168.50.167/menu.ipxe
Restart dnsmasq:
service restart_dnsmasq
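On a stock dnsmasq you can syntax-check a config fragment before restarting; I haven't verified how the Merlin firmware wires its config together, so treat this as a sketch:

```shell
# Dry-run the config fragment; dnsmasq prints "syntax check OK" on success
# and the offending line number otherwise.
dnsmasq --test --conf-file=/jffs/configs/dnsmasq.conf.add
```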
I will be very brief about ZFS. I won't go into any specifics, apart from the fact that ZFS is cool. There is a ton of literature available elsewhere on how you can create ZFS Pools & ZVols. Instead of ZFS, iSCSI can just as well export any other attached disk.
zpool create tank /dev/disk/by-id/${DISK_ID}
zfs create -V 32G tank/debian-disk-12700k
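A couple of optional knobs here (my additions, not from the steps above): -s makes the zvol sparse so the 32G isn't reserved up front, and volblocksize can be tuned for the guest filesystem.

```shell
# Sparse zvol variant: space is allocated on write instead of reserved.
zfs create -s -V 32G -o volblocksize=16k tank/debian-disk-12700k

# Confirm the volume and its device node exist for targetcli to consume.
zfs list -t volume
ls -l /dev/zvol/tank/debian-disk-12700k
```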
This is the trickiest part. We export the ZVOL (or any other disk) as an iSCSI target. The below block does the following:
Create iSCSI Backstore with the ZVOL as the block device. Use any other disk if you want to skip ZFS.
Create iSCSI Target for Debian Boot Disk.
Set demo_mode_write_protect=1. This enables write protect for non-authenticated clients.
Set generate_node_acls=0.
Create initiator (client) and corresponding mutual auth.
Create LUN mapping between iSCSI target & ZVOL backstore.
Verify Portal (iSCSI server) exists.
Finally, list the iSCSI config for target.
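The same steps can also be scripted, since targetcli accepts a path and command per invocation instead of the interactive shell. This is a sketch of the session that follows, using the same names, but I haven't run it verbatim.

```shell
#!/bin/sh
# Non-interactive sketch of the targetcli session below (same IQNs & names).
set -e

targetcli /backstores/block create debian-disk-12700k /dev/zvol/tank/debian-disk-12700k
targetcli /iscsi create iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k

TPG=/iscsi/iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k/tpg1
targetcli $TPG set attribute demo_mode_write_protect=1
targetcli $TPG set attribute generate_node_acls=0

targetcli $TPG/acls create iqn.2026-05.xyz.716697.pve-vt:12700k
ACL=$TPG/acls/iqn.2026-05.xyz.716697.pve-vt:12700k
targetcli $ACL set attribute authentication=1
targetcli $ACL set auth userid=myuser password=mypassword
targetcli $ACL set auth mutual_userid=targetuser mutual_password=targetpassword

targetcli $TPG/luns create /backstores/block/debian-disk-12700k
targetcli saveconfig
```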
root@pve-vt:~# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd /backstores/block
/backstores/block> create debian-disk-12700k /dev/zvol/tank/debian-disk-12700k
Created block storage object debian-disk-12700k using /dev/zvol/tank/debian-disk-12700k.

/backstores/block> cd /iscsi
/iscsi> create iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k
Created target iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

/iscsi> cd iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k/
/iscsi/iqn.20...n-disk-12700k> cd tpg1/
/iscsi/iqn.20...k-12700k/tpg1> set attribute demo_mode_write_protect=1
Parameter demo_mode_write_protect is now '1'.

/iscsi/iqn.20...k-12700k/tpg1> set attribute generate_node_acls=0
Parameter generate_node_acls is now '0'.

/iscsi/iqn.20...k-12700k/tpg1> cd acls
/iscsi/iqn.20...00k/tpg1/acls> ls
o- acls ....................................................................... [ACLs: 0]

/iscsi/iqn.20...00k/tpg1/acls> create iqn.2026-05.xyz.716697.pve-vt:12700k
Created Node ACL for iqn.2026-05.xyz.716697.pve-vt:12700k

/iscsi/iqn.20...k-12700k/tpg1> cd acls/iqn.2026-05.xyz.716697.pve-vt:12700k/
/iscsi/iqn.20...pve-vt:12700k> set attribute authentication=1
Parameter authentication is now '1'.

/iscsi/iqn.20...pve-vt:12700k> set auth userid=myuser
Parameter userid is now 'myuser'.
/iscsi/iqn.20...pve-vt:12700k> set auth password=mypassword
Parameter password is now 'mypassword'.
/iscsi/iqn.20...pve-vt:12700k> set auth mutual_userid=targetuser
Parameter mutual_userid is now 'targetuser'.
/iscsi/iqn.20...pve-vt:12700k> set auth mutual_password=targetpassword
Parameter mutual_password is now 'targetpassword'.
/iscsi/iqn.20...pve-vt:12700k> cd ../../luns/
/iscsi/iqn.20...00k/tpg1/luns> create /backstores/block/debian-disk-12700k
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2026-05.xyz.716697.pve-vt:12700k

/iscsi/iqn.20...00k/tpg1/luns> cd ../portals/
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ................................................................. [Portals: 1]
  o- 0.0.0.0:3260 .............................................................. [OK]

/iscsi/iqn.20.../tpg1/portals> cd /
/> saveconfig
Configuration saved to /etc/rtslib-fb-target/saveconfig.json

/> cd iscsi/iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k/
/iscsi/iqn.20...n-disk-12700k> ls
o- iqn.2026-05.xyz.716697.pve-vt:tank-debian-disk-12700k ...................... [TPGs: 1]
  o- tpg1 ................................................... [no-gen-acls, auth per-acl]
    o- acls ................................................................. [ACLs: 1]
    | o- iqn.2026-05.xyz.716697.pve-vt:12700k ............. [mutual auth, Mapped LUNs: 1]
    |   o- mapped_lun0 ............................. [lun0 block/debian-disk-12700k (rw)]
    o- luns ................................................................. [LUNs: 1]
    | o- lun0 [block/debian-disk-12700k (/dev/zvol/tank/debian-disk-12700k) (default_tg_pt_gp)]
    o- portals ........................................................... [Portals: 1]
      o- 0.0.0.0:3260 ............................................................ [OK]

/iscsi/iqn.20...n-disk-12700k> cd /
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/rtslib-fb-target/backup/.
Configuration saved to /etc/rtslib-fb-target/saveconfig.json
root@pve-vt:~#
The screenshots are taken from a Debian install on a VM. This method works just as well on my PC with the 12700k; documenting screenshots would have been a pain with the installer running on my PC.
The netboot menu appears along with our custom menu item. Select Custom URL Menu, then debian13-iscsi.ipxe.

Continue with no disk drive. You may not see this screen if the installer finds some disks on the system; we don't have any.

Configure iSCSI volumes.

Edit /etc/iscsi/initiatorname.iscsi with the InitiatorName that we configured for the iSCSI ACLs using targetcli. For me it is InitiatorName=iqn.2026-05.xyz.716697.pve-vt:12700k

Restart iscsid so it picks up the new InitiatorName; it will be started as a background daemon process by default. Confirm with ps | grep iscsi that the iscsid processes are running.

ps | grep iscsi
kill -9 ISCSI_PIDs
iscsid

Enter the Initiator username & password we configured in targetcli.

If the iSCSI login fails, check /var/log/syslog. In this example, it failed due to bad auth. Recheck the iSCSI configuration & reconfigure the targets. The installer will take you back to Step 17.

The iSCSI volume then shows up as a LIO-ORG disk; partition & install onto it as usual.
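Once the installed system has booted, you can confirm it's really running from the iSCSI LUN with the standard open-iscsi and util-linux tooling:

```shell
# Show the active iSCSI session and which SCSI disk it backs.
iscsiadm -m session -P 3 | grep -E 'Target|Attached scsi disk'

# TRAN should read "iscsi" for the boot disk, and / should be mounted on it.
lsblk -o NAME,TRAN,SIZE,MOUNTPOINT
findmnt /
```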