Creating LACP/LAG on XCP-ng (and Aruba switch)

I have been trying to learn about this but I’m afraid I’m going to FUBAR things to the point where I no longer have access, and I need someone to either give me a nudge or restrain me from making a catastrophic mistake. This is for a hobby, NOT a career; I’m actually a nurse by profession.

I am running XCP-ng on a Dell R710. It has four gigabit NICs. I also have an Aruba S3500-24P network switch; actually three of these switches, interconnected at 10Gbps. I was thinking that I could potentially allow more bandwidth to/from this machine by combining the four NICs. If I select my server in XCP-ng Center and go to the NICs tab, I can create a bond. I’m presented with four options.

  • Active-Active
  • Active-Passive
  • LACP with load balancing based on IP and port of source and destination
  • LACP with load balancing based on source MAC address

The first two options do not require any modification of the switch, while the last two require LACP to be set up on the switch ports. Is there any advantage to going with LACP over Active-active or Active-passive? I don’t believe any single connection will need more than 1 gigabit, but across the different VMs the combined traffic could exceed 1 gigabit. My worry with setting up LACP on the switch is whether I will somehow lock myself out of the system, and whether there is a particular order of steps I need to follow to prevent such a lockout.

Using LACP allows the system (the system being the combination of the server and switch) to detect and handle one of the links getting disconnected. Also, with active-active and active-passive, bandwidth will only be improved for traffic leaving the server, not anything entering it, because the switch will only choose one port at a time to send its traffic to.
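To make the hashing concrete: the “IP and port” LACP mode pins each flow to one link by hashing the flow’s addresses and ports, so any single connection tops out at one link’s speed while different flows can spread across all four. A rough sketch of the idea only; real switches and the Linux bonding driver use their own hash functions, and `cksum` here just stands in for “some deterministic hash”:

```shell
#!/bin/sh
# Illustrative only: pin a flow to one of N links by hashing its
# source/destination IPs and ports. Real hardware uses its own hash.
pick_link() {
    # $1 = "src_ip:src_port-dst_ip:dst_port", $2 = number of links
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $((h % $2))
}

# The same flow always lands on the same link...
pick_link "192.168.1.10:51514-192.168.1.20:443" 4
pick_link "192.168.1.10:51514-192.168.1.20:443" 4
# ...while a different flow (here, a new source port) may hash elsewhere.
pick_link "192.168.1.10:51515-192.168.1.20:443" 4
```

This is why four bonded 1G links behave like “four lanes,” not one 4G lane: one iSCSI session or one file copy stays in a single lane.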

Which LACP hashing mode you choose depends on what the switch’s LACP implementation can do. The two sides don’t have to match, but it’s best practice that they do.

Just to add a little to what @brwainer said, active-active will allow the virtual host (XCP-ng) to load-share outbound traffic from the VMs across the connected links. Basically, it pins a VM to one link; should that link fail, it swings the VM to another operational link. The same rule applies: any one VM will still only get 1G. Now, with VMs on the same network/VLAN talking to each other, you will be able to get faster speeds, since that traffic never hits the wire; it is switched locally on the box.

I just updated a different thread about how LACP works so if you want to know more, check out this thread: Routing multiple network interfaces

In your case I would configure the switch ports as trunks and use active-active on the virtual host. Make sure you tag your VMs with the correct VLANs as well.
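For reference, the same bond can also be created from the host CLI with `xe`. A sketch only, with placeholder UUIDs (look up the real ones with `xe pif-list`); `mode=balance-slb` is the active-active mode, and `mode=lacp` would be used instead if the switch ports were configured for LACP:

```shell
# List the physical interfaces and note their UUIDs (placeholders below)
xe pif-list params=uuid,device

# Create a network for the bond to attach to
xe network-create name-label=bond0

# Create an active-active (balance-slb) bond over the four NICs;
# substitute mode=lacp if the switch ports run LACP
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<eth0-uuid>,<eth1-uuid>,<eth2-uuid>,<eth3-uuid> \
    mode=balance-slb

# A per-VLAN network on top of the bond, e.g. VLAN 10
xe vlan-create network-uuid=<vlan10-network-uuid> \
    pif-uuid=<bond-pif-uuid> vlan=10
```

VMs then attach to the per-VLAN networks rather than to the bond network directly, which is how the tagging the post mentions gets applied.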

As far as getting locked out goes, I would make sure you have console access to the switch. Then you’ll never have to depend on IP connectivity working properly to reach it.
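As a concrete example: if the switch has a serial console port, connecting from a laptop through a USB serial adapter keeps you independent of anything you break on the network side. The device path and baud rate below are assumptions; check the switch’s manual for its actual console settings:

```shell
# Assumed device path and baud rate; check your adapter and switch docs.
screen /dev/ttyUSB0 9600
```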