TrueNAS and XCP-ng Optimization Video

@LTS_Tom,

I have a homelab with a three-server XCP-ng pool and TrueNAS Core providing the VM storage as well as general movie/document storage. Your videos have been great for getting this set up, along with the VLANs on my network.

One thing that I feel is missing is a dedicated TrueNAS optimization video. With a base install of TrueNAS, I get mediocre performance at best; my 6-bay Synology with spinners feels like it gives similar performance. I haven’t seen any guides or videos that discuss specific system bottlenecks or things to look for that might be limiting performance. I’ve asked questions in the TrueNAS forums and gotten some helpful advice, but I feel that getting the most out of what you have is in line with the kind of videos you put together.

TrueNAS System:
TrueNAS 12.0-U2.1
CPU: AMD EPYC 7282
Motherboard: ASRock Rack ROMED8-2T
Memory: 128GB
NIC: Mellanox ConnectX-3 40G SFP
NVME-Pool (VMs)

  • 4x 6.4TB PCIe3 NVMe U.2 Intel P4610 (2x vdev mirror stripe)
  • 1x Intel Optane M.2 380GB SLOG

SSD-Pool

  • 8x 1.92TB SATA3 Intel D3-S4610 (4x vdev mirror stripe)
  • 1x Intel Optane M.2 380GB SLOG

Storage-Pool

  • 6x 16TB SATA3 Seagate EXOS (6-wide RAIDZ2)
  • 1x Intel Optane M.2 380GB SLOG
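
(If it helps to sanity-check the layout above, the vdev topology can be confirmed from the TrueNAS shell; pool names are as written here, adjust if yours differ.)

zpool list -v
zpool status NVME-Pool
zpool status SSD-Pool
zpool status Storage-Pool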

FIO Tests Used:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=read
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=write
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=write --ramp_time=4
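
For anyone reproducing this, a rough wrapper along these lines runs the mixed test against each pool in turn (the mount points are placeholders for wherever each pool's test dataset lives):

#!/bin/sh
# Sketch only: replace the mount points with your own test datasets.
for target in /mnt/NVME-Pool/test /mnt/SSD-Pool/test /mnt/Storage-Pool/test; do
    cd "$target" || continue
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=test --bs=4k --iodepth=64 --size=4G \
        --readwrite=randrw --rwmixread=75
    rm -f test    # drop the 4G scratch file before the next pool
done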


Looks quite elaborate indeed! We will find a resolution :grin:

NVME-Pool results
I get similar results whether sync is disabled or set to always.
XCP-ng SR: NFS
Guest: Ubuntu VM
https://openbenchmarking.org/result/2104208-HA-NFS20GBEB07

I get better performance from a USB drive.
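
For reference, sync was flipped per dataset between runs; from the shell that's something like the following (the dataset name is a placeholder for whatever backs the NFS SR), though the GUI does the same thing:

zfs get sync NVME-Pool/xcp-sr           # check the current setting
zfs set sync=disabled NVME-Pool/xcp-sr  # run with sync off
zfs set sync=always NVME-Pool/xcp-sr    # run with sync forced
zfs set sync=standard NVME-Pool/xcp-sr  # back to the default afterwards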

Edit: running a new test with sync disabled and the SLOG removed, and it’s looking better. I figured the SLOG would have a better effect because it’s an Optane drive. Obviously not a silver bullet, but the difference is crazy. I partitioned the Optane drive and ran an FIO test against it, and it had great results; it’s a factory new-in-box drive.
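
For anyone wanting to repeat that check, the idea is a low-queue-depth sync-write run aimed straight at the Optane, roughly like this (the device/partition name is a placeholder, and writing to a raw device destroys whatever is on it):

fio --name=slog-check --filename=/dev/nvd1p2 --ioengine=posixaio \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 --sync=1 \
    --runtime=60 --time_based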

Edit 2: In XCP-ng, I migrated the VM over to the SSD-Pool (with the Optane removed) and tested with sync=always and sync=disabled, and got the expected results. With sync always, it’s around 50MB/s. With sync disabled, it was 980MB/s, which saturated my bonded 10GbE connection since it was only one stream. I then added the same Optane that was in the NVME-Pool, and with sync on I’m still getting around 800MB/s writes. Because this is a homelab, the NVMe U.2 drives were purchased on eBay with low usage, but it looks like their firmware needs to be updated. After looking into it, Oracle charges for everything and I don’t have a support contract to get the firmware, so I think this is as far as I can go with testing.
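
For what it’s worth, the firmware revision the drives are currently running can at least be read from the TrueNAS shell with something like:

nvmecontrol devlist          # list NVMe controllers/namespaces
nvmecontrol identify nvme0   # identify data includes the firmware revision
smartctl -i /dev/nvme0       # alternative via smartmontools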

This was an interesting experience: making sure the hypervisor was performing as expected with iperf from VM to VM, doing the same with the network by running lots of iperf tests from all of the devices and making sure performance isn’t CPU-bound, and then testing to and from the TrueNAS box. Inside TrueNAS there are the basics like sync, dedup, and atime, but what about network and ZFS tunables, record size, and other tweaks? This is where a video would come in really handy.
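
As a starting point, the basics mentioned above can at least be checked in one pass from the shell; the hostname and dataset names below are placeholders:

# Network sanity check (iperf3 on both ends):
iperf3 -s                          # on the TrueNAS side
iperf3 -c truenas.lan -P 4 -t 30   # from a VM; -P 4 runs 4 parallel streams

# ZFS properties that commonly matter for a VM store:
zfs get sync,recordsize,atime,dedup,compression NVME-Pool/xcp-sr
zfs set atime=off NVME-Pool/xcp-sr
zfs set recordsize=64K NVME-Pool/xcp-sr   # example value; match it to the workload's I/O size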


Agreed. I’m not sure where to go from here, as I’m having a similar problem on my liquid-cooled rig at home, i.e. my video card has a serious driver issue. :sunglasses: :rofl:

I wish you luck, and one of our team members will be able to help you!

As always,
That Boi Lawrence