10 Gb NIC recommendations for SAN for VMs

I’m currently running two Proxmox nodes and also have a physical FreeNAS server. I’m looking into using a SAN for the VMs’ storage so that I can live migrate them when I have to do maintenance. I think gigabit networking would be okay for the Linux servers, but the few heavier instances I run would need something faster.

Since my servers are close together, I’m thinking 10 GbE over Cat6a should do the trick. I’m just wondering if anyone has recommendations for decent cards that don’t break the bank. I could get away with a single port on each card and a direct connection between the two, or I could run dual connections on separate VLANs for redundancy.
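
For reference, the direct-connect option I’m picturing is just a point-to-point link with static addresses on both ends. A rough sketch of the Proxmox/Debian side (the interface name and addresses are placeholders, not from my actual setup):

```
# /etc/network/interfaces snippet (Debian/Proxmox side) -- example only
# enp3s0 and the 10.10.10.0/30 subnet are placeholders
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.1/30
    mtu 9000    # optional jumbo frames; must match the FreeNAS end
```

The FreeNAS end would just get the other address (10.10.10.2) on its 10 Gb interface via the GUI.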

Thanks for any suggestions,
kkoh

They’re not the cheapest, but they won’t break the bank either: Intel X550-T2 cards have been good for me. No issues whatsoever in ESXi or Windows Server, though I can’t say what they’re like on other operating systems. That said, over the years I’ve always found Intel cards to be solid, reliable, and well supported across different OSes.


In case it matters, we’d be talking about Debian Linux on one side and FreeBSD on the other; I know that, generally speaking, Intel chipsets are well supported in *nix. If a $250+ card is my only option, then I’ll start saving up.

I’ve never tried them myself, but I’ve heard Mellanox ConnectX-2 is the usual pick for Linux and Chelsio for FreeNAS.

This is very similar to another thread that @xMAXIMUSx and I posted in a couple of days ago.

For a home lab, 1 Gb is going to be fine. Depending on how many NICs you have on your SAN and hosts, you could LACP some ports together on both ends to increase bandwidth.
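
On the Proxmox side, a bond would look roughly like this in /etc/network/interfaces (a sketch only; interface names and addresses are examples, and 802.3ad mode needs a switch that supports LACP):

```
# Example LACP bond on a Proxmox host -- names and addresses are placeholders
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

One caveat: LACP hashes each flow onto a single link, so a single iSCSI session won’t go faster than one port; it mainly buys you redundancy and aggregate throughput across multiple flows.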

Depending on exactly how you do it, you could probably get away with one dual-port and two single-port cards, but that leaves you with no redundancy. Four dual-port cards would give you a bit more failover, but you’re probably looking at $1000 (£800), so it kind of depends on your budget!

This is actually for a small office. The main reason I want the extra bandwidth is so I can live migrate a running test environment when needed. Maybe two or more 1 GbE NICs directly connected between the iSCSI NAS and the hypervisor, using MPIO, would be an option? I’ve tested single-link 1 GbE iSCSI and I’m looking for better performance in the handoff.
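
Something like this is what I had in mind on the hypervisor side, assuming two direct links on separate subnets (the portal IPs and the IQN below are placeholders, not my real config):

```
# Discover the target via both portals (IPs are examples)
iscsiadm -m discovery -t sendtargets -p 10.0.1.2
iscsiadm -m discovery -t sendtargets -p 10.0.2.2

# Log in to the same target over both paths
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.1.2 --login
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 10.0.2.2 --login

# With multipath-tools installed, confirm both paths show up
multipath -ll
```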

I would definitely give LACP/MPIO a try (it gives you redundancy if nothing else), and if that isn’t quick enough, then four dual-port 10 Gb cards are going to be your way forward.

It depends on your budget: if you have $1000 available to spend, get the 10 Gb kit. If not, or if it means taking away from another budget, then try LACP/MPIO now and budget the $1000 for the future (by which point it will be less expensive).

@kkoh, how do you plan on building the SAN?
Does this mean you will have highly available iSCSI storage?
Are you using XCP-ng or Proxmox for virtualization?
Can you please explain your setup in more detail? Thank you.

Proxmox is the hypervisor, with iSCSI served from FreeNAS (or TrueNAS once I convert).
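
For anyone curious, the Proxmox side of that ends up as an entry in /etc/pve/storage.cfg, roughly like this sketch (the portal address and target IQN are placeholders):

```
# /etc/pve/storage.cfg -- example entry; portal and target are placeholders
iscsi: freenas-san
        portal 10.0.1.2
        target iqn.2005-10.org.freenas.ctl:vmstore
        content images
```

Shared storage like this is what lets both nodes see the same disks, which is the prerequisite for live migration.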
