I’m trying to tune the NFS shares that I’m using with XCP-NG to get a little more speed out of them. About the only option I can see is the record size, which is currently 128K. I’ve been reading that this is a maximum, not a fixed block size, at least when compression is turned on (which I have, set to LZ4).
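For context, the properties I’m looking at are just these on the dataset behind the share (pool/dataset names here are placeholders for my actual layout):

# show what the dataset is currently using
zfs get recordsize,compression tank/vm-storage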
Is there any downside to changing this to 1M? I’m reading several recommendations to increase it for VM use, with no real downside mentioned.
Any other tuning parameters that I should be looking at?
My biggest complaint is migrating VMs from one TrueNAS to another. Average speed per VM is about 70 MiB/s, and even running three migrations at once I can only peak at around 200 MiB/s for short periods. Everything is on 10 Gbps networking with spinning drives; the main TrueNAS is 8 HGST drives in RAIDZ.
If the suggestions are extreme enough that I need to burn these TrueNAS systems to the ground and start over, I can migrate my VMs off to the one I’m not working on, destroy the server, and start fresh. I’m running the most current version of CORE in production, but I’m seeing the same behavior with the most current version of SCALE on my lab system.
You’re correct about the configured record size being the maximum.
I can’t give a good answer on what the impact of switching to a 1M record size would be. My understanding is that larger records can cause a penalty on partial record updates, since the entire record has to be read, modified, and written back.
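As a rough back-of-the-envelope illustration (ignoring caching and compression): a 4K guest write landing in a file stored with 1M records means ZFS reads the whole 1M record, changes 4K of it, and writes the full 1M back out, so roughly 256x the data that actually changed. With 128K records the same write touches about 32x the changed data. If the record is already in ARC the read side goes away, but the write-out is still the full record.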
Why don’t you create and configure another NFS share backed by a dataset with a 1M record size, then storage-migrate a suitable VM between the two, with performance testing/monitoring on each target?
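Something along these lines, assuming a pool called tank; create the dataset from the shell (or the UI), then point a new NFS share at it in the TrueNAS UI as usual:

# test dataset with 1M records, keeping LZ4
zfs create -o recordsize=1M -o compression=lz4 tank/vmstore-1m

# confirm the properties took
zfs get recordsize,compression tank/vmstore-1m

A quick small-block random-write run from wherever the share is mounted (paths and names are just examples) should show whether the difference matters for your I/O pattern before you commit to a full migration:

fio --name=rmwtest --directory=/mnt/nfstest --rw=randwrite --bs=16k --size=2G --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting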
And in regards to the penalty I mentioned earlier, you can try testing with smaller records as well, like 32K or 64K (I wouldn’t go smaller than that if you’re using compression).
Going to 1M is not likely to improve performance of a VM on NFS, because VM traffic is mostly smaller writes. While you can change the record size on a dataset with existing data, it only applies to newly written blocks, so move the VM off and back onto that SR so its data gets rewritten at the new size.
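For reference, changing it is just a property set on the dataset (name is an example); existing blocks keep their old size until they are rewritten, which is why the move off/on the SR matters:

# only affects blocks written after this point
zfs set recordsize=64K tank/vmstore

# verify
zfs get recordsize tank/vmstore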
Also, there are some minor performance and efficiency gains to be had from lining up the record size with how blocks get distributed across the vdevs.
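Rough numbers for an 8-wide RAIDZ1 with 4K sectors (ashift=12), ignoring compression: a 128K record is 32 data sectors, which needs about ceil(32/7) = 5 parity sectors plus padding to an even allocation, so roughly 38 sectors total, about 84% space efficiency. A 1M record is 256 data sectors with about 37 parity sectors plus padding, roughly 294 sectors, about 87% efficiency. So the gain from bigger records on that kind of layout is real but small, which is why I’d call it minor.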
Yeah, I didn’t really see much change from the 1M record size. I decided to burn my storage down and start from scratch, partly to document the process and partly to see what I could get from a fresh install. This storage originally went back to CORE 12 and then to SCALE as an upgrade. I’m going to toss Electric Eel on it this time and see if any of the improvements in XCP-NG 8.3 and Eel come together for faster performance. If it seems happy, I’ll do the same to my production system in preparation for the XCP-NG 8.3 release or the next long-term release (whatever that might be).
Following up: I took this over to the XCP-NG forum because it started to feel like an XCP issue. After lots of testing I’ve been able to improve the NFS numbers a bit in benchmarks, but I’m not sure that will have any real-world impact. The VM disk migration is very slightly shorter, which is the part I really wanted to change, and by slight I mean about 22 minutes now vs. 24-25 minutes before.
If you are moving data in big “blocks”, tuning the NFS transfer size is definitely worth doing; throughput for the larger “blocks” no longer drops off as fast as it did before tuning.
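For anyone following along, the transfer size I mean is the rsize/wsize the NFS client negotiates at mount time. You can see what a mount actually ended up with, and try a manual test mount with bigger values, roughly like this (server name and paths are examples; getting the options onto the actual XCP-ng SR is a separate step):

# on the NFS client (the XCP-ng host), show negotiated mount options
nfsstat -m

# manual test mount with 1M transfer sizes
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 truenas:/mnt/tank/vmstore /mnt/nfstest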
Something to consider for certain use cases. I don’t think setting those values down to 16KB or 32KB will have the kind of impact we want, but I may try it anyway, just to see in the overall scheme of things whether migration times change.