XCP-ng in Pool Config - Question about NICs

Hi Guys,

"Just when I thought I was getting to grips with XCP :frowning: "

I have 3 Dell servers, all the same, with 4 NICs onboard. Historically these servers were separate XCP hosts, but since the hardware is identical I felt it might be better to join them into a pool and take advantage of some of the benefits you get with pools. I did this a while back and, so far, no issues.

Since then I hadn’t needed to create a new VM, and when I finally did, I discovered that the “Interface” section just lists the pool master’s 4 NICs (well, I think it uses the master’s NIC labels as a default). But the “Disk” section shows all the hosts and their storage. My query is that I expected to see all the hosts’ NICs in the “Interface” section. So, how do I select NIC3 on the host the VM lives on (i.e. the host whose storage I’m using)? I assumed you select NIC3, and because the storage is on a particular host, that would automatically use NIC3 on that host. :confused: Hope that makes sense.

Is my assumption correct?

Appreciate it !!


As long as all the servers have THE EXACT SAME network setup and all of them are plugged into the same switch, then it should not matter which host they are on, as all the network options will be the same for every host in a resource pool.
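If anyone wants to verify this from the CLI, a sketch like the following shows which physical NIC on each host backs a given pool-wide network (the `<network-uuid>` is a placeholder you'd copy from the first command's output):

```shell
# List the pool-wide networks and their UUIDs
xe network-list params=uuid,name-label

# For one network, show the PIF (physical NIC) that backs it on each host.
# Replace <network-uuid> with a UUID from the previous command.
xe pif-list network-uuid=<network-uuid> params=host-name-label,device,currently-attached
```

If every host shows the same `device` (e.g. eth2) for that network, the pool-wide network maps to the same physical NIC everywhere, which is what Tom is describing.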

Thanks Tom !! I appreciate the response.

I’ll go a little deeper… Are you using local disks or network shares to hold the VMs? It sounds like local disks.

The following is my experience with VMs on a NAS:

If local disks, then you probably need to set the “host affinity” for the VM you are creating so that it only tries to start on the host that holds its storage. I don’t think (could be wrong) a VM can start on HOST A when its disks are physically stored on HOST B. It might be able to migrate automatically, but that will take additional time. (Please correct me if I’m wrong.)
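For what it’s worth, host affinity can also be set from the xe CLI; the VM name and the UUIDs below are placeholders:

```shell
# Find the VM and host UUIDs ("my-vm" is an example name)
xe vm-list name-label="my-vm" params=uuid
xe host-list params=uuid,name-label

# Set the affinity so the VM prefers to start on that host
xe vm-param-set uuid=<vm-uuid> affinity=<host-uuid>
```

Note that affinity is a preference for where the VM starts, not a hard pin; with local storage the VM can only boot where its disks live anyway.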

Additionally, I’m not sure what happens to a locally stored VM when you try to do a pool update/upgrade. I think when I had one VM on local storage, the update failed because it had to shut the VM down and the hypervisor didn’t want to do that. This is all stuff I need to try in my lab the next time I power it up (longer story), especially because I know I have an update that will run when I turn it back on.
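Before kicking off a pool update, one quick sanity check (assuming the xe CLI on any pool member) is to list which SRs are local, i.e. not shared, and which host each one lives on:

```shell
# List non-shared, user-content SRs and the host each one belongs to
xe sr-list shared=false content-type=user params=name-label,host
```

Any VM whose disks sit on one of these SRs is tied to that host and can’t be live-migrated away during a rolling update.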

If I’m wrong on any of this, someone please explain; it would save me some testing and give me knowledge I might need going forward. In my opinion, once you move to a pool, you need a NAS to hold the VMs, or you must configure unified storage like XOstor or the older HA-Lizard. There is probably an open source DIY way to do this, but I’m not knowledgeable enough. If someone wants to make a guide on how to turn local storage into a unified central store, I’d really appreciate that and would roll it out on my system. Gluster or Ceph seem to be needed, and I haven’t had time to look into them (and probably never will). It’s probably a big project.

Thanks for the comprehensive answer.