TrueNAS SCALE Performance

I watched the performance comparisons with great interest. Unfortunately, I can't repeat the test myself, because every version of FreeBSD (including TrueNAS Core) reliably kernel panics on boot on this HP MicroServer Gen10. Linux driver support should be much better on non-iXsystems hardware (iXsystems selects its hardware to work well with FreeBSD, a somewhat painful process I'm sure), but your test with the Supermicro shows that can't be the whole reason.

Neither the NFS daemon nor the SMB daemon runs in-kernel on SCALE.

Veritas recommends raising the vm.min_free_kbytes sysctl for the user-space Ganesha NFS server (see their "Recommended tuning for NFS-Ganesha" notes for versions 3 and 4). I wonder whether that makes any difference here.
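For reference, a minimal sketch of applying that kind of tuning on a generic Linux box (the 1 GiB target is purely illustrative, not Veritas' recommended number; size it to your RAM):

```shell
# Read the current value (in kB of memory the kernel tries to keep free)
sysctl vm.min_free_kbytes

# Raise it at runtime (needs root); 1048576 kB = 1 GiB, an illustrative value
sysctl -w vm.min_free_kbytes=1048576

# Persist across reboots via the usual sysctl drop-in directory
echo 'vm.min_free_kbytes = 1048576' > /etc/sysctl.d/90-nfs-ganesha.conf
```

On SCALE specifically, sysctls are normally set through the web UI's advanced settings rather than by editing files, since the OS image is managed.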

Another thing is that the default ZFS zfs_arc_max on Linux is only half of system memory. Is that the same on Core? I've increased it so that only a few GB remain free (unused) on my system.
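On Linux that cap is the zfs_arc_max module parameter; here's a sketch of inspecting and raising it at runtime (the 24 GiB figure is just an example for a 32 GiB machine, not a recommendation):

```shell
# 0 means "use the built-in default", which on Linux is half of RAM
cat /sys/module/zfs/parameters/zfs_arc_max

# Raise the cap at runtime (needs root); 25769803776 bytes = 24 GiB
echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max

# Check how big the ARC actually is right now
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
```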

Last, the k3s-server process is a CPU hog on my system. I wonder whether that's a universal issue.
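An easy way to check on any box is to sort all processes by CPU and see whether k3s floats to the top (the process name may render slightly differently in your ps output):

```shell
# Top five CPU consumers right now; look for k3s / k3s-server near the top
ps -eo pcpu,comm --sort=-pcpu | head -n 5
```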

I'm also wondering whether running the benchmark from within a VM has any impact (probably not).

By the way, iXsystems really does look at bug reports and fix them. I'm up to 21 reports filed, and I hope it's making a difference.

I'm beginning to suspect it's the OpenZFS version. iperf3 shows almost perfect numbers, but running fio locally on an SSD pool gives pretty poor results.
Using one of the Ars-recommended tests, I get much lower numbers than when I had Core virtualized under ESXi (same hardware).

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1

(Ars uses posixaio in its examples; Phoronix does not.)