Most of my clients are small/medium sized and run Hyper-V with local storage and Veeam. I’m considering switching to XCP-ng for my largest client, and their size really does justify separating the storage from the hosts. They have some large storage volumes, so I was thinking of something like TrueNAS serving the bulk of the data, with iSCSI or NFS for virtual machine storage. There would be a DR site with ZFS replication at least for the main share, and live replication failover for XCP-ng. Stability/robustness is paramount; what would you say is the ideal/recommended storage setup for XCP-ng? Do you have a standard you aim for with your clients?
Edit: Just to be clear I’m not specifically asking about TrueNAS. I’m open to anything. Thanks
Also, you say lots of large files: are these large files part of a server, or are they user data that can be held on a mapped drive?
The largest files on any of my servers are the WIM images on my WDS server; almost everything else is user data on mapped drives. NFS will be just fine for my VMs when I get them moved to XCP-ng.
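For what it’s worth, attaching an NFS export as a shared storage repository on an XCP-ng pool is a single `xe sr-create` call. A sketch, with a made-up NAS hostname, export path, and label (the dry-run guard just prints the command instead of running it):

```shell
# Sketch only: attach an NFS export as a shared SR on an XCP-ng pool.
# Hostname, export path, and label are placeholders for your environment.
DRY_RUN=${DRY_RUN:-1}

cmd=(xe sr-create
  name-label="TrueNAS NFS VM storage"
  shared=true content-type=user type=nfs
  device-config:server=truenas01.example.lan
  device-config:serverpath=/mnt/tank/vm-storage
  device-config:nfsversion=4)

if [ "$DRY_RUN" = 1 ]; then
  # Print what would run instead of executing it.
  printf '%s ' "${cmd[@]}"; echo
else
  "${cmd[@]}"
fi
```

XCP-ng then creates one VHD per virtual disk on that export, which plays nicely with ZFS snapshots on the NAS side.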
If I had a large web application and the data to make it work was expansive, then I’d have to think a bit harder.
If money is available, the XOSAN hyperconverged storage might be an option for the VMs; then you’d just have user data and maybe application data on a NAS. I don’t think XOSAN v2 is out yet, though, so it may not be a real choice yet.
As far as TrueNAS CORE goes, it is well tested and worthwhile; I’m not sure about TrueNAS SCALE yet, as I haven’t worked with it (yet). A TrueNAS plus AWS for backup would be a good choice, and a second TrueNAS as a “near live” backup would also be good.

For companies with multiple physical locations, a few TrueNAS boxes that all replicate to each other seem like a good, cheap way of keeping data backed up both on premises and “off” premises; all you need is a connection between the buildings. Think of a college with multiple buildings on campus: they are all connected, but it would be a really huge event if all the buildings were destroyed at the same time. A TrueNAS in the basement of each, and a sync task keeping them all the same, would be a lot cheaper than restoring a large amount of data from AWS, and most likely at 10+ Gbps rather than the 1 or 2 Gbps our ISP connections can provide.
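The building-to-building sync tasks above are normally set up in the TrueNAS UI, but under the hood they are plain ZFS send/receive. A minimal hand-rolled sketch of that pair (dataset, snapshot, and host names are all placeholders):

```shell
# Sketch of the send/receive pair behind a TrueNAS replication task.
# Dataset, snapshot, and host names below are placeholders.
SRC=tank/shares/main
DST_HOST=dr-nas.example.lan
DST=tank/shares/main

# Build the sending half: a full stream on the first run, an incremental
# stream (-i, relative to the previous snapshot) on every later run.
send_cmd() {  # $1 = new snapshot, $2 = previous snapshot (may be empty)
  if [ -n "$2" ]; then
    echo "zfs send -i $2 $1"
  else
    echo "zfs send $1"
  fi
}

snap="$SRC@daily-2024-01-01"   # placeholder snapshot names
prev="$SRC@daily-2023-12-31"

# An actual run would look like:
#   zfs snapshot "$snap"
#   $(send_cmd "$snap" "$prev") | ssh "$DST_HOST" zfs receive -F "$DST"
send_cmd "$snap" ""       # first run: full stream
send_cmd "$snap" "$prev"  # later runs: incremental stream
```

The incremental streams are why this stays cheap: after the first full copy, each run only moves the blocks that changed between snapshots.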
We get a lot of pushback when it comes time to restore files (pay by the gigabyte and all). Even considering the petabytes of storage that Microsoft apparently gives us as part of our contract, we still have stuff backed up to AWS, and they always freak out if more than a few megabytes’ worth of files need to be restored.
We have 6 buildings on the main part of campus, one more at an old edge of it, and one more down the road and across the street. Physical separation would be good for anything short of a massive earthquake, meteor strike, or giant bomb, and all three of those are events that matter more than data integrity at the moment they are happening. A giant gas-pipe fire could not take out all three positions, given the distances. And we would have 25 Gbps (or more) between all these buildings, so large datasets could be moved with reasonable speed compared to internet at 1 Gbps.
Thanks for the replies, and sorry for my slow response; I’ve been out of the office. I think I will get a pair of TrueNAS systems for the bulk of the storage, which can just be shared out directly. Then a pair of servers with ordinary HW RAID controllers for the hosts; the VMs can live on that, as I don’t think any of them will be very large once separated from the shared data, and one host will replicate to the other offsite for failover. They have 40-ish TB of mostly JPEG and other image data.