Testing raw ZFS pool performance?

For the past year I’ve had a single Z2 vdev with 4 drives in my TrueNAS server. I’m now using it for VM storage, and after reading Tom’s great post on pool design I realised that the reason my VM performance was poor was because RAIDZ vdevs have the IOPS of a single drive - even though the streaming speeds look fine when tested (they were also mismatched drives, which doesn’t help).

I bought 10 identical 2TB SAS drives (used) and some 10GbE cards to play around with different pool layouts and find the best one for me. I was also planning to write up my findings, since there doesn’t seem to be a ton of data on what to expect from an el cheapo setup like mine.

In the write-ups Tom linked, it’s pretty evident that pool performance scales fairly predictably as you add more drives/vdevs in a given configuration, but after a whole day of testing I am left scratching my head.

The first thing I did was throw all 10 drives into a stripe to see how fast it would go. It would also show whether my 10G cards were working, but read speeds leveled out at about 500-600 MB/s. This was using various disk benchmark tools in a Windows VM on XCP-ng, with the virtual disk on the array over iSCSI.
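(For clarity, by “stripe” I just mean all ten disks as their own top-level single-disk vdevs, something like the sketch below - the device names are placeholders, not my actual disks.)

```
# 10-wide stripe: every disk is its own top-level vdev, no redundancy at all.
# da0..da9 are placeholders for the real SAS device names.
zpool create -f testpool da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
```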

Well, ok… I was expecting to saturate my 10G cards, but that’s still plenty fast enough for me. I thought it might be hitting the bandwidth limit of my DAS, even though that should have been 1200 MB/s total (but 600 is suspiciously exactly half, so ??).

I then ran iozone on the pool from the TrueNAS CLI and found that it also capped out at around 500-600 MB/s.
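(In case it helps anyone reproduce this: an iozone run along these lines exercises sequential write and read directly on the pool. The pool path is a placeholder, and I gather the -I direct-I/O flag may or may not actually bypass the ARC depending on the OpenZFS version.)

```
# Sequential write (-i 0) and read (-i 1) with a 16G file and 1M records.
# -e includes fsync in the timing, -I requests O_DIRECT where supported.
# /mnt/tank is a placeholder for the pool's mountpoint.
iozone -i 0 -i 1 -s 16G -r 1M -e -I -f /mnt/tank/iozone.tmp
```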

Then I tested my old Z2 pool and found that it had suspiciously similar performance numbers. Speeds that pool should not have been capable of. What is going on?

I tried NFS/iSCSI, compression on and off, different pool configs, but they all gave VERY similar results for every single test. Finally I tried a single-disk pool. That still gave insane read speeds of 600 MB/s. It took me longer than I’d like to admit to realise that my tests were not actually reading/writing to the disks, and the results were not just similar - they were identical.

I suspected that this was a weird memory cache thing, but if it was, shouldn’t it be a lot faster than 600 MB/s? I was also using 16 GB test files so they wouldn’t be cached in memory, and even confirmed this: the empty memory cache did not fill during the tests. To throw me off even more, these insane speeds on the one-drive pool showed up as total disk I/O in netdata?? Where tf is it writing to? Bear in mind that my DAS is only SAS 3Gb/s, so a single drive should not exceed ~300 MB/s.
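I’m planning to re-run things while watching the pool and the ARC directly from a second shell, something like the below (pool name is a placeholder, and the ARC summary tool may be named slightly differently depending on your TrueNAS version):

```
# Per-vdev throughput and IOPS, refreshed every second, while a test runs:
zpool iostat -v tank 1

# ARC size and hit/miss counters - a high hit rate during a "disk" benchmark
# means the reads are really coming from RAM:
arc_summary | head -n 40
```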

Basically, how can I test raw pool performance without it being cached anywhere?
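From what I’ve read, one way to mostly take the ARC out of the equation is a throwaway dataset with caching limited to metadata and compression off, then hitting it with fio using direct I/O. A rough sketch (pool/dataset names and paths are placeholders, and as I understand it O_DIRECT is only genuinely honoured on newer OpenZFS, hence the primarycache setting as a backstop):

```
# Throwaway dataset for benchmarking - cache only metadata, no compression.
# "tank/bench" is a placeholder for your pool/dataset.
zfs create tank/bench
zfs set primarycache=metadata tank/bench
zfs set secondarycache=none tank/bench
zfs set compression=off tank/bench

# Sequential read against the pool itself: 16G working set, 1M blocks,
# --direct=1 asks for O_DIRECT where the OpenZFS version supports it.
fio --name=poolread --directory=/mnt/tank/bench --size=16G \
    --rw=read --bs=1M --direct=1 --ioengine=posixaio \
    --iodepth=16 --runtime=60 --time_based --group_reporting
```

I’d obviously destroy the dataset afterwards, and I realise primarycache=metadata isn’t representative of real-world performance - the idea is purely to see what the vdevs themselves can do. Is that a sane way to do it?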

I assume it’s my disk benchmark programs producing data that’s too predictable for ZFS, but I’d be interested to know where this data is actually being written to and read from, if anyone knows.