Tell me if this is stupid or not… Suppose I have three XCP-ng hosts, each with three empty drive bays. What I’d like is for all three chassis to hold the same data on those drives, for HA.
It seems that TrueNAS SCALE can be set up in a VM, and I certainly have plenty of space on the XCP-ng system drive to hold the TrueNAS VM.
Is this a workable solution?
My concern is that you cannot bring up a VM if the storage VM isn’t running. So to get TrueNAS running, those VMs would need to live on the “extra” space of each host’s boot drive. The TrueNAS VMs would come up as soon as the host is ready, then the storage comes online, and then the rest of the VMs can start (on a timer???).
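A rough sketch of that start-up ordering with the `xe` CLI (the UUIDs are placeholders; `order` and `start-delay` are the standard XCP-ng VM boot parameters, and `other-config:auto_poweron` is the usual way to auto-start VMs with the host):

```shell
# Placeholder UUIDs -- look up your own with `xe vm-list`
TRUENAS_UUID=<truenas-vm-uuid>
APP_UUID=<app-vm-uuid>

# Storage VM starts first (lower order = earlier in the boot sequence)
xe vm-param-set uuid=$TRUENAS_UUID order=0 start-delay=0

# Hold the other VMs back long enough for the NFS export to appear
xe vm-param-set uuid=$APP_UUID order=1 start-delay=120

# Auto-power-on must be enabled on both the pool and the VM
xe pool-param-set uuid=$(xe pool-list --minimal) other-config:auto_poweron=true
xe vm-param-set uuid=$TRUENAS_UUID other-config:auto_poweron=true
```

The 120-second delay is a guess; tune it to however long your TrueNAS VM actually takes to export its shares.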
#1 Workable solution?
#2 Really STUPID idea and just pay for XOSan?
Yes, they have XOSan, but it’s out of my price range. I’m far too small to justify that kind of expense each year, let alone get that much into my budget (yes, I priced it through their website).
I’m pricing out a meager little storage system from iXsystems and thought I’d check on the above first. I’m also open to other ideas that can get me hyperconverged storage with identical data on the three sets of drives, so that HA can bring VMs up on any host that is still working, and the hosts can fight over which one runs which VMs once the pool is ready for load balancing. For now I’m not going to run HA and will just use an NFS share, but I want to eliminate the extra server if possible.
I don’t think it’s workable, and even if it is, it would be a headache. Is the goal to have an HA storage server? If so, you could use a TrueNAS M30 with dual controllers, or get two TrueNAS servers and use XO continuous replication:
I read a two-part article on ServeTheHome where they did a similar thing with Proxmox. They had three Proxmox hosts with internal storage and used a combination of ZFS via the Proxmox GUI and GlusterFS to create a storage cluster. Proxmox also has Ceph, but I haven’t looked into how that works.
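For reference, the GlusterFS side of that setup boils down to a few commands. A minimal sketch, assuming three hosts (`node1`…`node3` are placeholder names) each with a ZFS dataset mounted at `/tank/brick`:

```shell
# From node1: form the trusted pool
gluster peer probe node2
gluster peer probe node3

# Replica-3 volume: every host keeps a full copy of the data,
# which is exactly what HA needs to survive a dead host
gluster volume create gv0 replica 3 \
  node1:/tank/brick node2:/tank/brick node3:/tank/brick
gluster volume start gv0

# Mount the volume; the hypervisor's shared SR then points at this mount
mount -t glusterfs node1:/gv0 /mnt/gv0
```

With `replica 3` the cluster keeps quorum when one node dies, at the cost of storing everything three times.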
Thanks to both of you. Yes I think it will be a headache, especially since it doesn’t sound like people have done things this way before (maybe for good reason). I may fool with it if I ever get time.
As far as Gluster and Ceph go, I thought about that, but I have far too much to learn. In theory you could make the distributed file system run at the OS level on XCP-ng; I’ve heard people say that is essentially what XOSan is doing. HA-Lizard was also doing this when it was active.
In Proxmox you can set up a Ceph cluster with a few clicks in the web UI. I did it once using three Proxmox VMs running inside Proxmox, and it worked, including live migration, failover, etc. I can’t say anything about how well it works in a real-world scenario, and I’m by no means an expert on it, but I’m pretty sure there is no easier way to get a working Ceph cluster, literally within minutes…
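The same few-clicks setup is also scriptable with Proxmox’s `pveceph` tool. A sketch (the device name and network are assumptions; adjust to your hardware):

```shell
# On the first node: install Ceph packages and initialise the cluster network
pveceph install
pveceph init --network 10.0.0.0/24

# On each of the three nodes: a monitor, plus one OSD per empty drive
pveceph install          # on the remaining nodes
pveceph mon create
pveceph osd create /dev/sdb

# Finally, a replicated pool to back the VM disks
pveceph pool create vmpool
```

Ceph wants at least three monitors for quorum, which is why a three-host pool is the practical minimum here.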
OK, courtesy of XCP-NG updates that I installed today I found the biggest reason not to do what I was trying to do…
Rolling Pool Updates will fail if you have a VM on local storage, because it cannot be migrated. I shut down the single TrueNAS VM on local storage and then the pool update went as smoothly as ever (these rolling updates are such a nice feature).
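If anyone wants to spot these before starting a Rolling Pool Update, here’s a sketch that walks every disk VBD and prints the VMs whose disks sit on a non-shared (local) SR. The loop only uses standard `xe` parameter names; run it on the pool master:

```shell
# List VMs that have a disk on a local (non-shared) SR
for vbd in $(xe vbd-list type=Disk --minimal | tr ',' ' '); do
  vdi=$(xe vbd-param-get uuid=$vbd param-name=vdi-uuid)
  sr=$(xe vdi-param-get uuid=$vdi param-name=sr-uuid)
  shared=$(xe sr-param-get uuid=$sr param-name=shared)
  if [ "$shared" = "false" ]; then
    xe vbd-param-get uuid=$vbd param-name=vm-name-label
  fi
done | sort -u
```

Anything this prints has to be shut down (or moved to shared storage) before the RPU will go through.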
I think what I may do from here is set up local storage on each host and just send VM backups there. That way, if the NAS crashes, I can still bring the backed-up VMs back to life without too much drama. It will be cheaper than a second NAS, though that might change in the future; it would be nice to just have things pop up live from a replicated storage repository.