FreeNAS replication bandwidth requirements

All,
I want to set up two FreeNAS systems, one as primary and one as backup, with replication between the two to keep them in sync. They are sitting in two different racks in the same room, and I'm wondering what the network requirements are to do this. The FreeNAS systems provide iSCSI storage for virtual machines, so I figure it comes down to what the VMs are doing and how active they are. Would a 1GbE line between them be sufficient, or should I go with 10GbE just to be sure?
Eric

I would use 1G, and if it doesn’t keep up, then spend the money on 10G.


You also want to consider future-proofing. If you plan to scale, then 10Gb would be my choice.

What are your hypervisors connecting to the primary NAS with, 1G or 10G? If they are using 1G and you are not having any issues, then 1G will be plenty. If they are using 10G, then it really depends on the data change rate, because once you get past the initial seed you should ideally only be replicating the changed blocks.
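
To put some rough numbers on the change-rate point, here is a back-of-envelope sketch in Python. The 200 GiB/day change rate and the replication windows are made-up example figures, not measurements; plug in what your VMs actually dirty per day.

```python
# Back-of-envelope check: can a 1 GbE link keep up with the daily change rate?
# The change rate and replication windows below are illustrative assumptions.

def required_mbps(changed_gib: float, window_hours: float) -> float:
    """Average throughput needed to ship `changed_gib` of changed blocks
    within `window_hours`."""
    bits = changed_gib * 1024**3 * 8           # GiB -> bits
    return bits / (window_hours * 3600) / 1e6  # bits/s -> Mbit/s

# Example: 200 GiB of changed blocks per day, replicated continuously (24 h window)
print(f"{required_mbps(200, 24):.0f} Mbit/s average")  # ~20 Mbit/s
# Same 200 GiB squeezed into a 1-hour nightly window
print(f"{required_mbps(200, 1):.0f} Mbit/s average")   # ~477 Mbit/s
```

Even the aggressive case stays under 1 Gbit/s in this example, which is why measuring your own change rate first is worth the effort.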

Also depends on how quickly you want the replication to happen.

If it starts to hit the buffers on a 1Gb connection at 09:00, when everyone starts their computers and syncs email, OneDrive etc., but then it catches up by 10:30 when everyone has a coffee break, is that OK?

It might lag all day but catch up by 20:00 when everyone has gone home.
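
As a rough illustration of that catch-up behaviour, the sketch below models a morning burst that outruns a 1 GbE link and then drains once the burst ends. Every figure in it (usable link share, burst rate, burst length) is an assumption for the sake of the example.

```python
# Rough model of the "falls behind at 09:00, catches up at coffee break" scenario.
# All figures here are illustrative assumptions, not measurements.

LINK_MBPS = 1000 * 0.90        # assume ~90% of 1 GbE is usable for replication
link_mb_per_s = LINK_MBPS / 8  # -> MB/s

peak_change_mb_per_s = 200     # assumed burst change rate while everyone logs in
peak_minutes = 60              # assumed length of the morning burst

# Backlog builds up at the difference between change rate and link rate,
# then drains at the full link rate once the burst is over.
backlog_mb = (peak_change_mb_per_s - link_mb_per_s) * peak_minutes * 60
catch_up_minutes = backlog_mb / link_mb_per_s / 60

print(f"Backlog after the burst: {backlog_mb / 1000:.0f} GB")
print(f"Time to catch up once the burst ends: {catch_up_minutes:.0f} minutes")
```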

Also worth remembering that LACP is a thing and might help: if you have multiple NIC interfaces in the boxes, you can bond them together.

LACP likely won’t help here, because the replication traffic is a single IP/MAC pair on either side.

I’ve never been totally clear on this. I know one of the Linux native bonding modes does an XOR on MAC address or IP + port (or some combination of those), but I thought LACP was supposed to operate lower than that; it’s an 802 standard (802.1AX, originally 802.3ad) IIRC.

I’d be interested to read through some details on how LACP picks which leg to send frames down.

LACP can hash traffic either by MAC or by IP, depending on the implementation. Once a MAC or IP pair selects a LACP leg, it will not change unless all of its sessions reset. This is why LACP only really helps with bandwidth when there are enough systems requesting access, which can then be spread over multiple legs. MPIO was created specifically to get around this limitation: with MPIO, each session generated by a single machine can use a different leg. However, because of the hardware costs associated with MPIO, it is usually only seen between a front end and back end with high network demand, such as between hypervisors and SAN storage via iSCSI. NFS started supporting multipathing in NFS v4.1.
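
The exact hash policy depends on the switch and NIC driver (layer 2, layer 2+3, layer 3+4, and so on), but the pinning behaviour is easy to illustrate. The toy Python sketch below uses a generic XOR-style hash and made-up addresses; it is not any vendor's actual algorithm.

```python
# Illustrative sketch of why LACP doesn't speed up a single replication stream.
# Real switches/NICs use their own configured hash policies; this XOR-style
# hash is just a stand-in to show the pinning behaviour.

def pick_leg(src: str, dst: str, legs: int) -> int:
    """Map a (source, destination) pair to one member link of the LAG."""
    return (hash(src) ^ hash(dst)) % legs

LEGS = 2  # two bonded 1 GbE ports

# One replication flow: the same src/dst pair always hashes to the same leg,
# so it can never use more than one leg's worth of bandwidth.
print(pick_leg("10.0.0.10", "10.0.0.20", LEGS))
print(pick_leg("10.0.0.10", "10.0.0.20", LEGS))  # same leg every time in this run

# Many different clients talking to one server do spread across the legs.
for client in ("10.0.0.31", "10.0.0.32", "10.0.0.33", "10.0.0.34"):
    print(client, "->", pick_leg(client, "10.0.0.20", LEGS))
```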

Generally, when you are setting up MPIO for iSCSI, you are using two VLANs across two switches (similar to the way FC is done) that you can then pin to physical links, so no LACP would be required. As for NFS, most controllers support multiple VIPs for access to volumes, so you can balance out storage traffic without using LACP as well. Another interesting protocol is SMBv3: it allows you to set up multiple TCP sessions across multiple interfaces and truly load balance, so if you are using two 1G links you can actually get 2G worth of throughput.
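
To make the contrast concrete, here is a toy comparison of a single flow pinned to one LACP leg versus MPIO/SMBv3-multichannel-style striping across two paths for the same client. The link speeds and transfer size are assumptions for illustration only.

```python
# Toy comparison of LACP-style pinning versus MPIO / SMBv3-multichannel-style
# striping for a single client. Link speeds and transfer size are assumptions.

LINKS_GBPS = [1.0, 1.0]   # two 1 GbE paths
TRANSFER_GB = 100.0       # amount of data one host wants to move

# LACP: a single flow is pinned to one leg, so it only ever sees one link.
lacp_seconds = TRANSFER_GB * 8 / LINKS_GBPS[0]

# MPIO / SMBv3 multichannel: one session per path, I/O striped across both,
# so the paths' bandwidth adds up even for a single client.
striped_seconds = TRANSFER_GB * 8 / sum(LINKS_GBPS)

print(f"Single flow over LACP:  {lacp_seconds:.0f} s")
print(f"Striped over two paths: {striped_seconds:.0f} s")
```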

The use of multiple VLANs with MPIO as a best practice varies between storage vendors and hypervisors; some go as far as requiring separate VLANs, while others discourage it. SMBv3 multichannel can produce some very impressive results, with some nice features. While it has been around since about 2012, Samba didn’t even have experimental support for it until 2016, and most storage vendors are BSD or *nix based and tend to use a Samba port for their SMB support. That meant SMB multichannel wasn’t really widely available outside of a pure Microsoft solution until the last year or two.
