I am new to this forum and really appreciate all of the guidance. I was a very active Xen user 6 years ago, but have been using VMware now with my current clients. I have been watching the development of XCP-ng from the sidelines and would really like to get a lab set up and think about long term plans for migrating some of our clients. However, I cannot understand how to get around the 2TB limit on disk volumes. We have several clients with Windows VMs that have Multi TB disks (7, 10, 40 are not unusual). We even have some with clusters of a few hundred TB. How would we solve these in the Xen world? I am not sure what an NFS or iSCSI pass through would look like, or how that would work with some of the monitoring or auditing tools used for those VMs. Any suggestions?
The VM cannot have a disk larger than 2TB, so the general design is to use iSCSI from your storage server and present it directly to your Windows system. If you need higher bandwidth, you would also add multiple network interfaces to the Windows VM: one user-facing to serve the shares, and the other on the storage network.
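To illustrate that in-guest iSCSI flow, here is a rough sketch using the open-iscsi tools on a Linux guest (on Windows you would use the built-in iSCSI Initiator instead); the portal address and target IQN below are made-up placeholders:

```shell
# Discover the targets exposed by the storage server
# (portal address is an example)
iscsiadm -m discovery -t sendtargets -p 10.0.10.5:3260

# Log in to the discovered target (IQN is a placeholder)
iscsiadm -m node \
  -T iqn.2023-01.lab.example:storage.bigvolume \
  -p 10.0.10.5:3260 --login

# The LUN now appears as a regular block device (e.g. /dev/sdb),
# bypassing the hypervisor's VHD layer and its 2TiB limit
lsblk
```

The second NIC mentioned above would simply carry this iSCSI traffic on a dedicated storage VLAN/subnet, keeping it off the user-facing interface.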
Hi Bill. I would argue that you or the client are incurring huge technical debt by using abnormally large guest disks. If you choose to use guest disks for file storage instead of keeping the data on a SAN/NAS, then I would say the best way to reach such sizes is to create several 250GB or 500GB guest disks and pool them together inside the guest OS, using LVM if the guest is Linux or dynamically expanding volumes if you're on Windows. This way you can migrate storage with ease, or keep parts of the storage on different media. Say you have to migrate to a different SAN for some reason: you can move chunks at a time, putting 2TB on this storage pool, 5 there, 7 elsewhere, with zero problems. If you have a single 40TB guest disk, how in the world are you going to move that without procuring 40TB of free storage on some other unit? You want to make backup, restore, and migration easier; the whole point of hypervisors is to abstract away the limitations of being tied to hardware. Using dynamic volumes is game-changing.
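The LVM approach above can be sketched like this, assuming the guest sees three extra 500GB virtual disks at /dev/xvdb through /dev/xvdd (device names, volume names, and mount point are examples):

```shell
# Mark each guest disk as an LVM physical volume
pvcreate /dev/xvdb /dev/xvdc /dev/xvdd

# Pool them into a single volume group
vgcreate data_vg /dev/xvdb /dev/xvdc /dev/xvdd

# Carve one large logical volume spanning all three disks
lvcreate -n data_lv -l 100%FREE data_vg

# Format and mount it as a single filesystem
mkfs.xfs /dev/data_vg/data_lv
mount /dev/data_vg/data_lv /srv/data
```

Growing the pool later is just a matter of adding another sub-2TB guest disk, then `vgextend`, `lvextend -l +100%FREE`, and `xfs_growfs` — no single huge disk to move around.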
I honestly have to admit that the 2TiB limit isn't a great thing. It's inherent to the VHD format, but we are working on it (see Understanding the storage stack in XCP-ng).
However, beyond that point, I concur with @David's point of view: having very large VM disks removes the whole flexibility a VM is meant to give you. Even backing up one of those 10TiB+ monsters will take ages.
I’ve learned that even VMware users were advised to use smaller disks and, for very large volumes, to use a network share inside the VM. This way, you keep very flexible VMs.
Anyway, it’s not a complete blocker for XCP-ng. You can use raw volumes for those VMs and go far beyond the 2TiB limit (there’s no real limit, in fact). However, you won’t be able to snapshot those VMs, nor to back up those disks.
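For reference, creating such a raw VDI with the `xe` CLI looks roughly like this (the SR and VM UUIDs and the label are placeholders; `sm-config:type=raw` is what tells the storage manager to skip the VHD format, which is exactly why snapshots and delta backups stop working):

```shell
# Create a 5TiB raw VDI — no VHD header, so no 2TiB cap,
# but also no snapshot or delta-backup support
xe vdi-create \
  sr-uuid=<your-sr-uuid> \
  name-label="big-raw-disk" \
  virtual-size=5TiB \
  sm-config:type=raw

# Attach it to the VM as usual
xe vbd-create vm-uuid=<your-vm-uuid> vdi-uuid=<new-vdi-uuid> device=1
```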
Thank you all for these suggestions. I can see how I would use these methods to address the challenge. I guess many of us just became “lazy” when we let the VMware guest disks grow with the data. I admit, SAN to SAN migrations are things we dread because they take so long.
These responses are great, and I am so glad to be in this forum.