Adding multipathing (MPIO) to an existing production XCP-ng pool

Currently we have an “in-house” built HA ZFS setup that uses HDDs, and we’re looking to migrate everything to a few new all-flash 3PAR arrays.

The current setup uses dual 10G links bonded with LACP, but that isn’t an option with the 3PAR 8450 units. Instead, each of the four separate 10G links gets its own IP, so MPIO is the only redundancy option.

Our XCP-ng pool is on v8.1 and we’d like to add this new SR with multipathing. The reading we’ve done so far seems to indicate that a host has to be put into maintenance mode before turning this feature on, but that was a Citrix doc, not XCP-ng, and the XCP-ng documentation doesn’t really cover the topic.
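For anyone following along, on recent XCP-ng/XenServer releases multipathing appears to be a per-host setting you can at least inspect from the CLI. A hedged sketch (the `<host-uuid>` is a placeholder, and the field name should be verified on 8.1 before relying on it):

```shell
# Sketch only: inspecting multipath state on one XCP-ng host.
# <host-uuid> is a placeholder; get real UUIDs with `xe host-list`.

# Check whether multipathing is currently enabled on the host
# (assumes the host record exposes a "multipathing" field on this release)
xe host-param-get uuid=<host-uuid> param-name=multipathing

# Once the iSCSI SR is plugged, confirm all four paths to the
# array are visible and active at the device-mapper level
multipath -ll
```

`multipath -ll` is the standard device-mapper-multipath status command, so it's a useful sanity check regardless of how the xe-level toggle ends up working.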

Thanks in advance!

Would you be able to swing the VMs over to another host and test?

I definitely have enough room to vacate an entire host. But according to the XCP-ng docs, I thought I had to enable it on all hosts at once?

Hmmm… I’m thinking this is a question that needs to be asked on the XCP-ng forums. But it seems like you should be able to move everything off the master, make the changes, then move a bunch of stuff off the next host and repeat. Much like a rolling update, but done manually.
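The rolling approach above might look something like this per host, master first. This is a guess sketched from the standard xe commands, not a tested procedure; `<host-uuid>` is a placeholder, and whether `multipathing` is a settable host field on 8.1 should be confirmed with support first:

```shell
# Hypothetical rolling procedure, one host at a time, master first.
# <host-uuid> is a placeholder for the host being worked on.

# 1. Take the host out of service and live-migrate its VMs
#    to the other pool members (maintenance mode, effectively)
xe host-disable uuid=<host-uuid>
xe host-evacuate uuid=<host-uuid>

# 2. Enable multipathing while nothing is running on the host
#    (assumed field name; older releases used other-config keys instead)
xe host-param-set uuid=<host-uuid> multipathing=true

# 3. Bring it back into service, then repeat on the next host
xe host-enable uuid=<host-uuid>
```

`xe host-evacuate` refuses to run if the remaining hosts can’t absorb the VMs, which is a nice safety net for doing this in production.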

Keep in mind, the above is a guess; if my storage were different, I might test it in my lab. If I could get you remote access to my lab, I’d let you take a look and see what we could do, but that’s not really possible right now. I’m told no more opening of ports!

Each host is going to have its own connection to the storage, so I don’t see why you can’t do them one at a time. I’ve done this plenty of times in VMware.

Thanks Greg. Fortunately we do pay for XCP-ng Enterprise licensing and support, but sometimes things have to go up the chain to someone more senior to get answered, and that takes time.

Once I know the answer to this question, I’ll come back and post it here too, in case the topic ever comes up again.