This might be completely wrong, and I know that “by right” a management interface should be on its own physical interface on its own VLAN, but what about “servers” with only one physical LAN interface?
I have a NUC box with 1 x 2.5GbE LAN.
It’s running XCP-ng 8.3
I have it hooked up to Port 18 on my UniFi switch with native VLAN 101 (Test 1 network)
Port 18 is a trunk port (allow all)
My goal is to get the XCP-ng host to use VLAN 240 (Servers network) for its management interface but use the native (or a tagged) VLAN for any of the guest VMs.
This way I can interact with the host over VLAN 240 and with any VMs over VLAN 101, 102, 103, etc.
When I try to set “VLAN (Optional)” to 240 as per the picture, it doesn’t get an IP address at all. If I switch Port 18 to native VLAN 240, it of course gets an IP as expected.
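For reference, this is roughly what I see from the host console; a sketch, assuming the single NIC shows up as eth0 (yours may differ):

```
# List the PIFs so I can see the physical NIC and any VLAN PIFs on it
xe pif-list params=uuid,device,VLAN,IP,currently-attached

# After setting "VLAN (Optional)" to 240, a tagged PIF shows up with VLAN=240,
# but its IP field stays empty (no DHCP lease)
xe pif-list VLAN=240 params=uuid,device,IP,currently-attached
```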
I’m open to suggestions, including on how bad this all sounds security-wise, e.g. a guest VM “claiming” to be on another VLAN and accessing something it shouldn’t.
And if I really have to, I can use the built-in WiFi NIC for the management interface, but I’d like a solution nonetheless.
Ya, I guess I can do that. I’m new to XCP-ng so I wanted to ensure any VMs only get to the Test VLAN by default.
Maybe I’m looking at this the wrong way. Is there a secure way to have the physical NIC on native VLAN 240 while hard-setting the VLAN(s) a VM can attach to, so it can’t connect to any random VLAN? For example, if I spin up a VM on the IoT VLAN and it gets compromised, I want it to stay there and not jump onto other VLANs.
I think you’re overthinking it. Even if you put the 240 network on a different physical NIC, it would still be available as an option for any VM. I’m not sure what you mean by “hard setting” a VLAN; choosing which network (VLAN) a VM runs on is exactly what you’re supposed to do when standing up a new VM.
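If it helps, this is roughly what that looks like from the host CLI; a minimal sketch, with the UUIDs as placeholders you’d fill in from `xe pif-list` / `xe vm-list`:

```
# Create a pool-wide network for VLAN 101 (Test 1), tagged on the physical NIC
xe network-create name-label="VLAN101-Test1"
xe vlan-create pif-uuid=<eth0 PIF uuid> vlan=101 network-uuid=<network uuid from above>

# Attach the VM to that network only; the VIF delivers VLAN 101 traffic
# untagged inside the guest, so the guest never sees (or sets) a tag
xe vif-create vm-uuid=<"My Test VM" uuid> network-uuid=<VLAN101 network uuid> device=0
```

That last comment is also the answer to the hopping worry: the tagging/untagging happens in the host’s virtual switch, not in the guest.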
Ya, I think I’m overthinking it too. I’m used to vCloud Director on IaaS deployments, with multiple single-purpose physical servers hooked up to switch ports with a native VLAN only. I’m not used to setting up full hypervisors like XCP-ng on bare metal, or to how the host and guest VMs connect to each VLAN. I have no issue with the VLANs, trunks, access ports, etc. on the switch/firewall side of things; it’s just the hypervisor level.
By “hard setting” I meant restricting, but it sounds like this is up to XCP-ng, since I’ll be setting it on the VM through XCP-ng anyway. I just want to make sure that when I set “My Test VM” to VLAN 101 it can’t get to VLAN 240 via some trick I’m not aware of.
No, I don’t mean inter-VLAN communication, I mean VLAN hopping / exploits. My VLANs are already configured into secure and insecure groups, with one-way access and return traffic allowed. I have all UniFi gear here.
Even if I restrict that physical switch port to only the VLANs I explicitly want, say VLANs 101, 102, and 103, the guest VMs still potentially have access to my Server VLAN 240, since that VLAN also has to be allowed on the port for the management interface.
Again, I think you are overthinking this. If you are taking the usual mitigation steps to prevent VLAN hopping, then this isn’t an issue (i.e. not using VLAN 1).
Well, thinking on it, why is “VLAN (Optional)” even there? I set it to 240 and it doesn’t connect to VLAN 240, whether the port is set to native 240 or trunked.
If you set the management VLAN to “empty” on XCP-ng then you need to connect the XCP-ng interface to a switch port where the management VLAN 240 is “native”.
In the other case, you have the management VLAN (e.g. 240) as one of the VLANs assigned to the switch port (with native turned off) connected to the XCP-ng interface. In that case you need to set the management VLAN to 240 on XCP-ng.
In short:
(1) management VLAN traffic is native, i.e. untagged
switch port: native=240, VLANs=all VLANs you use for the VMs
xcp-ng interface management VLAN: empty
(2) management VLAN traffic is tagged
switch port: native=off, VLANs=240 PLUS all VLANs you use for the VMs
xcp-ng interface management VLAN: 240
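For case (2), the setup from the host CLI would look something like this; a sketch with placeholder UUIDs, best run from the physical console since the management connection drops while it switches over:

```
# Create a network object for the tagged management VLAN
xe network-create name-label="VLAN240-Servers"
xe vlan-create pif-uuid=<eth0 PIF uuid> vlan=240 network-uuid=<network uuid from above>

# Give the new VLAN 240 PIF an address via DHCP, then move management onto it
xe pif-reconfigure-ip uuid=<VLAN 240 PIF uuid> mode=dhcp
xe host-management-reconfigure pif-uuid=<VLAN 240 PIF uuid>
```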
Ya, (1) is how I’ve had it running ever since, and it works fine that way. When I tried (2) it wouldn’t get an IP at all with VLAN 240 tagged, but I didn’t have native=off; I had native set to another VLAN at the time. Maybe it would have worked with native off. I’ll try it some other time.
I don’t think that would make a difference for DHCP. Did you make sure a DHCP server is actually running on the VLAN 240 network and handing out IPs?
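If you want to see whether the DHCP traffic even reaches the host tagged, something like this from the XCP-ng console should show it (assuming the NIC is eth0):

```
# Watch for 802.1Q-tagged DHCP packets on VLAN 240 at the physical NIC
tcpdump -i eth0 -e -nn 'vlan 240 and (port 67 or port 68)'
```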