iSCSI, 10GbE Bond and Synology

(Cross-posting this from the Proxmox forum - no answers… :frowning_face:)

Hi Folks - I will start by saying that this is more of a “want” than a “need” as this is my home lab… but it has become somewhat of a quest…

I’ve recently switched from ESXi 7 to Proxmox 7.2, partly because VMware is being purchased by Broadcom and I have doubts about the “free” version of ESXi continuing. Anyway, with ESXi I had my HP DL380 Gen9 (another reason for the switch: hardware support for it “disappearing” from ESXi patches over time) directly connected to my Synology RS1221+ using iSCSI with redundant connections. The DL380 has the HP FlexFabric 10Gb 2-port 554FLR-SFP (Emulex Corporation OneConnect 10Gb NIC) and the RS1221+ has the Synology E10G21-F2 10GbE dual SFP+ port adaptor. Dual DAC cables directly connect them (no switch). The MTU was set at 9000 on both ends (including both physical NICs and the bond in Proxmox).
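For reference, the Proxmox side of the bond was along these lines in /etc/network/interfaces (interface names and the host address are placeholders in this sketch; the balance-rr mode is the one I mention below):

Code:

# Physical 10GbE ports (names are placeholders), jumbo frames enabled
auto eno1
iface eno1 inet manual
        mtu 9000

auto eno2
iface eno2 inet manual
        mtu 9000

# Bond carrying the iSCSI storage traffic
auto bond0
iface bond0 inet static
        address 172.16.1.2/24
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-rr
        mtu 9000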

Using the 10GbE ports, I set up the Synology using Adaptive Load Balancing and Proxmox using balance-rr. I had no issues setting up the iSCSI connection and creating the LVM volume. I then created a test VM (Ubuntu 20.04.4 LTS) with no issues. As part of my testing (education?) I then backed up that VM, destroyed it and started a restore. During the restore I started getting storage timeouts. Pings between Proxmox and the NAS also seemed to be skipping sequence numbers; e.g.:

Code:

64 bytes from 172.16.1.1: icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 172.16.1.1: icmp_seq=3 ttl=64 time=0.131 ms
64 bytes from 172.16.1.1: icmp_seq=5 ttl=64 time=0.140 ms

I went back to using a single 10GbE NIC on Proxmox and the NAS and everything is fine.

I’m sure I’m missing something simple here, but Google University and YouTube haven’t been any help.

Suggestions anyone?

Thanks!

I wonder if this might be a driver issue - whether your network controller can actually handle the features you’ve enabled in ifconfig. Another thing to note is that LAGG in round robin is not a good idea for an iSCSI network and might be part of, or the source of, your entire issue. If you have the option to do load balancing, then that is the way to go.

I know Proxmox is Debian-based, but this man page should be the same:

https://manpages.ubuntu.com/manpages/bionic/man4/if_lagg.4freebsd.html
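As a starting point for the driver question, you can check on the Proxmox side what the Emulex ports are actually running and which features they support (interface name is a placeholder):

Code:

# Driver, driver version and firmware in use on the 10GbE port
ethtool -i eno1

# Offload and other features the controller supports / has enabled
ethtool -k eno1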

@xMAXIMUSx I think you may be right in a way. After reviewing the Proxmox iSCSI Multipath page it seems that I need to use multipath, not bonding.

My first thought is that I will need to take each port (on both ends), give each one its own IP (e.g., 172.16.100.1 and 172.16.100.2 on the Synology and 172.16.100.11 and 172.16.100.12 on the Proxmox side) and then set up multipath (rough sketch below).

Does this seem like a start?
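Something like this is what I’m picturing on the Proxmox side, assuming the IPs above and that the Synology exposes its iSCSI target on both interfaces (not tested yet):

Code:

# Discover the target through each Synology portal
iscsiadm -m discovery -t sendtargets -p 172.16.100.1
iscsiadm -m discovery -t sendtargets -p 172.16.100.2

# Log in to both portals
iscsiadm -m node -p 172.16.100.1 --login
iscsiadm -m node -p 172.16.100.2 --login

# Install multipath-tools and confirm both paths show up for the LUN
apt install multipath-tools
multipath -ll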