NFS / iSCSI - XCP Benchmarking

Hi Guys,

I’m looking for some general advice on whether I should move some of my VMs off local XCP storage to a shared NFS/iSCSI storage model on Synology / TrueNAS Scale. This question is more about the hardware side of things.

I have a site with around 80 users on 3 servers configured in an XCP pool. This hosts around 7 VMs, most of them Server 2019. The hardware is a mixture of SSD and spinning rust. The network interfaces are all 1Gb Ethernet connections between server/switch/NAS. I have a number of small Synology NASes consisting of a 218+, 220+, 218j and 215j on RAID1 WD Red drives. I have one TrueNAS Scale server, again with a 1Gb connection and spinning rust.

Probably pretty standard inventory for a smaller SME. I don’t have the budget for really high-end servers or SFP/10Gb connections and switches. However, I would still like to take advantage of the shared storage model and the admin/HA benefits it gives you in XCP.

My concern is that a 1Gb topology with lower-end Synology units isn’t going to be quick enough from an end-user perspective. Our busiest VM is a SQL Server that probably services around 35 users concurrently at any given time. It sits on RAID1 SSD local XCP storage and currently works fine with no speed issues or problems.

I appreciate that without benchmarks and exact throughput figures it’s difficult to determine, but I’m looking for more of a gut feeling, or perhaps someone with prior experience of setting up shared storage on pretty basic topology. I’m aware of the performance differences between iSCSI and NFS, and that’s something I would decide later. This is really just a high-level overview of whether I should commit some time to this task, as there will be a fair bit of work involved in prepping everything, and I’m concerned it may be unrealistic to expect the current setup to cope.
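For what it’s worth, here’s a minimal sketch of the kind of quick sanity check I could run before committing, assuming the NFS/iSCSI target is already mounted on a test host. The mount point, file size and chunk size below are just placeholders, and this is only a crude sequential test, not a substitute for a proper fio run:

```python
#!/usr/bin/env python3
"""Rough sequential-throughput check against a mounted NFS/iSCSI share."""
import os
import time

TEST_DIR = "/mnt/nfs-test"          # hypothetical mount point - change to yours
TEST_FILE = os.path.join(TEST_DIR, "throughput-test.bin")
CHUNK = b"\0" * (1024 * 1024)       # 1 MiB per write
TOTAL_MIB = 1024                    # write 1 GiB in total

def sequential_write():
    start = time.monotonic()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MIB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hits the NAS
    return TOTAL_MIB / (time.monotonic() - start)

def sequential_read():
    # Without dropping the page cache this can report cached speeds,
    # so treat the read figure as optimistic.
    start = time.monotonic()
    with open(TEST_FILE, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return TOTAL_MIB / (time.monotonic() - start)

if __name__ == "__main__":
    print(f"sequential write: {sequential_write():.0f} MiB/s")
    print(f"sequential read:  {sequential_read():.0f} MiB/s")
    os.remove(TEST_FILE)
```

On a healthy 1Gb link both figures should sit somewhere near the wire limit; anything far below that would point at the NAS rather than the network.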

So I’m just looking for a general consensus on whether it’s worth giving it a go, or whether I’m flogging a dead horse.

Comments welcome

IM

I would leave the VMs on local XCP storage; it’s always going to be quicker than going over the network.

Use the NFS storage on the Synology or TrueNAS as backup targets for your VMs.

Thanks for the reply Paul. Yeah, I currently back up some stuff on-site, but the majority goes off-site to S3 for DR purposes.

Appreciate the response.

With only 1Gbps, and many VMs and many users, I’d probably keep them on local storage. I know it’s a pain when it comes time to reboot an XCP-ng server, and if a host fails none of the other hosts can bring the VMs back up. I wish XOSTOR were available open source; that would at least give you some recovery options when a host fails.

I did run my lab on 1Gbps and TrueNAS for a while, but that was only me using it. During server updates things would drag a bit since they were all trying to hit the same storage connection.

If you have spare 1Gbps ports on each host, you could build a dedicated storage network and let the user traffic go out on your current network. That might make a move to NAS-backed storage workable, but it really depends on what the users are doing with those Windows servers and how much data actually moves over the network; some rough numbers are sketched below.
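For a rough sense of scale (these are assumed ballpark figures, not measurements): a 1Gb link carries roughly 110 MB/s of real payload once protocol overhead is taken off, while a local SATA SSD can do several times that.

```python
# Back-of-envelope only - assumed ballpark figures, not measurements.
GBE_USABLE_MB_S = 110        # ~1Gb/s after Ethernet/NFS/iSCSI overhead
LOCAL_SSD_MB_S = 450         # typical SATA SSD sequential throughput (assumed)
VMS_ON_SHARED_LINK = 7

per_vm = GBE_USABLE_MB_S / VMS_ON_SHARED_LINK
print(f"If all {VMS_ON_SHARED_LINK} VMs hit shared storage at once: "
      f"~{per_vm:.0f} MB/s each, vs ~{LOCAL_SSD_MB_S} MB/s on local SSD")
```

In practice the VMs rarely peak at the same time, and random IOPS on the RAID1 Reds will hurt long before sequential bandwidth does, but it shows why a dedicated storage NIC per host helps.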

Thanks Greg for the feedback. I’ll keep things as they are until I have the budget for better kit/connections.

Appreciate it