Thinking about a 10Gb network? This might help!

For a while now I’ve suffered through painfully slow transfer speeds to and from my FreeNAS server: as low as 100MB/s reads (I know that’s near Gigabit’s ceiling, but it’s still slow when moving 100+ GB of ISOs) and 30MB/s writes.

A few weeks ago, I decided to take the plunge into 10Gb networking, in hopes of making my frequent file transfers to and from my NAS more bearable. If you’re also thinking about a 10Gb connection, this post is for you. Now, I didn’t drop thousands of dollars completely renovating my entire home network; for my use case, that would have been unnecessary. All I wanted was a super fast connection between my FreeNAS server and my daily driver Linux desktop. I’m going to share the resources I used before and during my 10Gb installation in hopes of helping others do the same.

Prerequisites

Here are two excellent videos that helped me understand the requirements of a 10Gb network:

  1. A three part series that explains the core concepts very well
  2. Less conceptual, more practical walkthrough

My Hardware

Server:

FreeNAS-11.2-RELEASE-U1
Dell PowerEdge R710
Two Intel Xeon X5670s
16GB DDR3 Server RAM
Five 2TB WD Green 7200 RPM HDDs
One 240GB ADATA Ultimate SSD

Desktop:

Pop!_OS 18.10 / Windows 10 (Dual Boot)
Gigabyte Z170XP-SLI-CF
16GB DDR4 RAM
Intel i5-6600K
AMD Radeon RX580
One Samsung 960 Evo M.2 NVMe SSD
One Mushkin M.2 NVMe SSD
Two SanDisk SATA SSDs
One 2TB WD Green 7200 RPM HDD

Preparation

I scoured eBay looking for good deals on 10Gb NICs that fit my requirements:

  1. Firstly, I wanted a dual port NIC. Even though I only really need 10Gb speeds between my NAS and desktop at the moment, I’d eventually like to spring for a 10Gb-capable switch, plus additional NICs for my internet connection and Xen server. Having two ports on each NIC gives me that flexibility.

  2. Second, I wanted SFP+ ports. Although there are 10Gb network cards with RJ45 ports, and I could have easily used the CAT-6 cables I already have to connect them without the trouble of switching to an unfamiliar standard, they tend to be much more expensive than their SFP+ counterparts. SFP+ ports also offer flexibility: if I so desire, I can eventually get fiber or RJ45 transceivers that fit into the SFP+ slots. In the meantime, I could get a relatively cheap 3m SFP+ Direct Attach Cable (DAC) to bridge the gap between my desk and server rack with length to spare. IMPORTANT! Don’t confuse SFP+ with regular SFP ports. SFP is an older standard that only supports 1Gb speeds; to achieve the desired 10Gb network speed, you’ll need SFP+. Many eBay listings are unclear about this distinction, and even worse, some model names themselves (including the one I eventually purchased) say SFP, not SFP+. To ensure the card you’re looking at has the correct 10Gb port, I recommend a quick Google search for the OEM’s product datasheet to verify that it does indeed have SFP+ ports and supports 10Gb speeds.

  3. And finally, I wanted it to be as cheap as possible.

Network Hardware

Of course, the well-known and reliable Intel network interface cards were a bit too pricey for me, even on eBay. I’ve seen a number of videos on budget 10Gb networks use and recommend the Mellanox brand of NICs; at the time of my purchase, however, I couldn’t find any reasonably priced cards of that variety. I did, however, find a number of HP-brand NICs on the cheap. I did what I could to verify the compatibility of these HP interface cards with my hardware, but there was very little information about these cards even for enterprise gear, let alone for home use. I decided to give them a shot and found a good deal on a set of two HP NC523SFP 10Gb PCIe Dual Port Server Adapters for $28.99. I found plenty of 3m SFP+ cables for $10 or less, but for fear of incompatibility, I opted for a slightly more expensive HP 487655-B21 SFP+ 10Gb Direct Attach Cable 3m for $19.95 (which you’ll see is on the official HP hardware compatibility list in the links below). All in all, about 50 bucks for a 10Gb connection, with the option to easily expand in the future.

NIC: Two HP NC523SFP 10Gb PCIe Dual Port Server Adapters
Cable: 3m HP 487655-B21 SFP+ Direct Attach Cable (I’m not recommending you buy the cable at this link; it’s just to show you the specs, since I couldn’t find an official HP datasheet)

Drivers

When I received my hardware, I still held on to the hope - despite hearing otherwise - that the cards would be plug-and-play. GUESS WHAT? They weren’t… it took some tinkering to get them functional, but functional they now are!

  • FreeNAS Drivers:
    This particular HP card is in fact a rebrand of the QLogic cLOM8214 chipset. Luckily, FreeBSD (the base of FreeNAS) supports this chipset through the qlxgb driver. The only modifications I made were to add
    Variable: if_qlxgb_load
    Value: YES
    under System->Tunables, AND to manually set the MTU of both interfaces (since I had dual port NICs) to 9000. This can be done either by adding the option mtu 9000 to the network interface options through the GUI, or via the command line with ifconfig <interface> mtu 9000. If you don’t perform this step, you’ll see a barrage of hw_packet_error messages in the FreeNAS shell.

  • Linux Drivers:
    I didn’t have to make any system modifications on Linux ( Linux FTW! ), but I did need to manually set the interface MTU to 9000, either through the GUI or with the same
    ifconfig <interface> mtu 9000 command in the CLI.

  • Windows Drivers:
    Ironically, the biggest pain in this driver setup process came from Windows, the operating system that is usually the most plug-and-play of the bunch. I was able to find plenty of drivers through the official HP support website for Windows Server, but not for the home variety of Windows 10 ( yes, I tried installing them anyway; the software performs an OS check during install and refuses to proceed on Windows 10 despite any compatibility settings I tried ). However, as I stated in the FreeNAS drivers section, this card is a rebrand of a QLogic chipset, so I had success using the QLogic Windows Server drivers. You can’t simply run an executable; instead, you must open the driver update utility through Device Manager and select the extracted driver folder by browsing your local files. After the wizard installs the driver, you should see “HP NC523SFP Dual Port Adapter” in Device Manager instead of “Unknown Network Device” (NOTE: the dual port card that I have required installing the same driver separately for each of the two ports).
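The FreeNAS and Linux steps above can be sketched as shell commands. This is a minimal sketch under my assumptions: the interface names (qlx0, enp1s0) are examples and will differ on your system.

```shell
# FreeNAS / FreeBSD: load the QLogic driver at boot. This mirrors the
# System->Tunables entry; "loader"-type tunables end up in /boot/loader.conf.
echo 'if_qlxgb_load="YES"' >> /boot/loader.conf

# Enable jumbo frames on the 10Gb interface (qlx0 is an example name;
# check ifconfig for yours). Skipping this causes hw_packet_error spam.
ifconfig qlx0 mtu 9000

# Linux: same idea, via either the legacy or the modern tool
# (enp1s0 is an example interface name).
ifconfig enp1s0 mtu 9000          # legacy net-tools
ip link set dev enp1s0 mtu 9000   # iproute2 equivalent
```

Note that MTU changes made with ifconfig/ip alone don’t survive a reboot; use the FreeNAS interface options (mtu 9000) in the GUI, or your Linux distro’s network configuration, to make them persistent.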

Network Setup

Since my use case was a direct connection between my desktop and FreeNAS server - a setup that bypasses my Cisco Gigabit switch - I had to take the additional step of setting up a new subnet. If you’ve been able to follow the guide so far, this process is comparatively simple. On FreeNAS, add a new network interface, select one of the ports from the new adapter ( due to the driver, this HP card uses a qlX naming scheme ), and set a static IP address in a different subnet from the rest of your network.

  • FreeNAS:

IP: 172.17.12.1
Subnet Mask: 24 ( i.e. 255.255.255.0 )
Options: tso4 tso6 rxcsum txcsum rxcsum6 txcsum6 lro wol mtu 9000 ( The only necessary option is mtu 9000 )

  • Windows 10 and Linux:

IP: 172.17.12.10
Subnet Mask: 255.255.255.0
Gateway: 172.17.12.1

NOTE: There didn’t appear to be an option in the Windows 10 network adapter GUI to set the MTU, so I used the following commands via CMD:

  • netsh interface ipv4 show subinterface to get the name of the relevant interface
  • netsh interface ipv4 set subinterface "<interface name>" mtu=9000 store=persistent to set and save the value
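With both sides configured, it’s worth verifying the link before testing file transfers. Here’s a minimal sketch from the Linux side, using the addresses above (enp1s0 is an example interface name, not necessarily yours):

```shell
# Assign the static address in the new subnet and bring the link up
ip addr add 172.17.12.10/24 dev enp1s0
ip link set dev enp1s0 up

# Confirm jumbo frames work end-to-end: 8972 bytes of payload plus
# 28 bytes of IP/ICMP headers fills a 9000-byte MTU exactly.
# -M do forbids fragmentation, so this fails if either side's MTU is smaller.
ping -M do -s 8972 172.17.12.1
```

If the large ping fails while a plain ping succeeds, one side’s MTU setting wasn’t applied.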

Conclusion

And Bob’s your uncle! 10Gb network complete! Sort of…

My adapters are being detected across all systems and successfully communicating with each other - I’m now getting ~250MB/s reads and ~250MB/s writes, a massive improvement over my previous situation to be sure. But there’s still room for improvement. Ideal 10Gb transfer speeds are around 1GB/s - 4 times greater than the speeds I’m seeing. It should be noted that my quoted speeds are based on my normal conditions: no RAM disks or other simulated setups that some videos boast. My NAS consists of average consumer HDDs and one SSD set up as a ZFS SLOG. With that hardware, I know I won’t ever achieve the ideal 1GB/s speed I mentioned. And if I never break the 300MB/s barrier with my current hardware, that’s okay; it’s plenty fast for my use case. Nonetheless, I’ll close with some thoughts on squeezing every last drop of performance out of my hardware.

Thoughts for additional testing:

  • Force the latest SMB protocol version (FreeNAS is supposed to negotiate the most current version automatically, but this requires additional testing)
  • Test transfer using other protocols (NFS, FTP, etc)
  • Add an SSD cache to my FreeNAS pool
  • Optimize FreeNAS / ZFS tunables
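Before tuning SMB or ZFS, it can also help to measure raw network throughput in isolation, so disk bottlenecks don’t muddy the results. iperf3 isn’t covered above, but it’s available on both FreeBSD/FreeNAS and Linux; a quick sketch using the addresses from this guide:

```shell
# On the FreeNAS box: start a listener
iperf3 -s

# On the desktop: run a 30-second TCP test against the server's 10Gb address
iperf3 -c 172.17.12.1 -t 30
```

If iperf3 reports close to line rate (~9.9 Gbits/sec), the network itself is healthy and the remaining gap is storage or protocol overhead; if it’s much lower, look at MTU settings, PCIe slot width, or drivers first.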

That about wraps up my guide. I hope this was helpful to those interested in setting up their 10Gb network, or for those who have had trouble with this particular brand of Network Interface Card.

Happy networking!


Very nice write up! Drivers can be a pain. That is the reason I’m still using ESXi and not xcp-ng. I hope your experience with WD Green drives is better than mine. I don’t remember exact numbers at this point, but I’d say I had over a 50% failure rate with those. But they may have fixed the FW in the past decade.

Excellent Write Up! Thanks for sharing.
One thing, though: try HGST hard drives; their failure rate is excellent. Check the Backblaze testing - they regularly publish reports on their experience with thousands of drives.

Thanks @mouseskowitz @pedracho! Everyone hatin’ on my WD Greens huh? :stuck_out_tongue_closed_eyes:
I got them for free, and they’re actually a number of years old ( probably not helping my case against drive failure, am I? ). I keep an eye on their SMART tests, and for now at least, they’re still ticking along.

I wish I could get a full set of enterprise drives, but they’re quite expensive.

If you’re not opposed to refurbs, there are these, and I’ve had pretty good luck with them:

3tb https://www.amazon.com/HGST-Ultrastar-HUA723030ALA640-Enterprise-Refurbished/dp/B079SMCB17

4tb https://www.amazon.com/HGST-HUS724040ALE640-0F14683-Enterprise-Refurbished/dp/B079RZHLDL

6tb https://www.amazon.com/gp/product/B07H13Z1S8


Absolutely, I’m cool with refurbs. Those prices are much more reasonable, thanks for the links! I’ve also had pretty good luck in the past with used and refurbished hardware. In fact, almost my entire home lab setup (3 servers, a 1kW UPS, 2 Cisco switches, a Dell KVM, and the rack they’re in) I’ve gotten for free from other nerds that didn’t want it to go to waste :grin:


I also buy refurbished hardware for my labs. It works just fine; luckily, I’ve never had any problems with any of the refurbished equipment I bought.
Check if there are any college campuses around your area. Most of them sell off their IT hardware every five years when they replace it, and most of that hardware has barely been used. I got lucky a couple of times and took advantage of that opportunity.

Hello drowsy, I would like some help. I got the adapters you used and followed your tutorial; the Ethernet adapters appear in the system, but the links are down… Is some specific software or configuration needed in Windows 10? Did you connect the cables directly between the adapters, or through a switch?

Direct connect is as described. You must remember, however, that Windows is horribly stupid and needs to be told what to do. You need to set the interface with static IP addressing as described above, if that’s the tutorial you used. Just set IPv4 and disable IPv6.

Thanks “faust”, my mistake was using incompatible transceivers. I tested with DAC cables and it worked.
I will buy the transceivers that are on the compatibility table, because my FreeNAS box is 15 meters away.
Please forgive google for the terrible translation.

Please specify exactly what settings, and where, for the HP NIC tunables.
Is it loader, rc, or sysctl?
Reboot after?

Hello everyone. First, forgive Google Translate for the bad translation…

I have a new problem …

When I access the NAS through IP 192.168.10.240, I can “see” the contents of the folders.

When I try IP 172.17.12.10 (NC523SFP), I can get to the folders, but I can’t open them.

Loader
yes, reboot after

@XRBR. Thank you so much for the reply with photos. I will go home and try that tonight. Thanks again.

Hello.
I installed Windows Server 2012 on the same computer I use for FreeNAS, configured the NC523SFP adapter the same way as in Windows 10, and was able to transfer files. But the problem persists in FreeNAS.

Hello,
I got the same NICs, and I’m trying to use them in ESXi 6.7. They show up in the hardware section, but under networking they’re not there.
Would you know what I can do to get them working?
Cheers

A 10G question:
Has anyone had any luck with 30m+ runs of Cat6 using SFP+ -> RJ45 10GbE adapters?

I had an office setup working fine on 10G with a 10GbE RJ45 Cisco switch, but when that eventually died, I replaced it with an SFP+-only switch and SFP+ -> RJ45 adapters, and found that my links would no longer negotiate 10G, only 1G.

Could be a lot of things; I’m not 100% sure that the in-wall cabling is Cat6 (though that’s what was specified in the build).

Should 10GbE RJ45 SFP+ adapters perform worse than a switch with built-in 10G RJ45 ports?

What was the model of the SFP+ RJ45 modules? Is it possible you bought 1Gb ones instead of 10Gb?

I would not expect a difference in link distance ability just because you are using modules versus builtin ports.

Hi, I’m trying to install the adapter drivers on my PC (Win10) as you described, but no luck. There are tons of drivers, so maybe I’m using the wrong one. Can you be more specific about which driver from the site you linked you used?

Hi, I have been having a heck of a time finding an SFP+ to RJ45 module that works in my network. I found your website via recommendations from users in forums. Perhaps you can help me purchase the right gear?

Over the last couple of years I have watched Tom’s videos and then decided to get out the wallet for my home lab. I was running DAC just fine between the XCP-NG server on a Dell R710 and my Supermicro FreeNAS box. Both servers are next to each other, so that was great: direct attach, no switch needed. Then I tried an RJ45 module in the Chelsio card that I have in FreeNAS - no carrier. So I decided I would get a switch, in the hopes it would provide the power boost needed for SFP+ to RJ45… so far no dice. Migrating my network with Tom has not worked out as well as I had hoped. Any advice is much welcome; I tried to put as much info as I can in this post… Hopefully it is not too rambly.

The switch is a Mikrotik CRS305-1G-4S+IN
There are three NICs I need to communicate with
2x Chelsio 10GB 2-Port PCI-e OPT Adapter Card 110-1088-30 (2 SFP+ ports)
1x Aquantia AQC107 which is an onboard NIC for the ASRock x470 Taichi Ultimate x470 Motherboard

I also have 2m and 1m DAC cables: 10G SFP+ DAC Cable - 10GBASE-CU Passive Direct Attach Copper Twinax SFP Cable for Cisco SFP-H10GB-CU2M, Ubiquiti, D-Link, Supermicro, Netgear, Mikrotik, Open Switch devices.

These DAC cables work with the switch and can ping the Dell R710 host and the Supermicro FreeNAS host. I cannot ping a host connected to the same switch with an SFP+ to RJ45 adapter.

I have tried the following modules:
10gtek
SFP+ to RJ45 Copper Module - 10GBase-T Transceiver for Cisco SFP-10G-T-S, Ubiquiti UF-RJ45-10G, Netgear, D-Link, Supermicro, TP-Link, Broadcom, Linksys, up to 30m

ipolex
10G SFP+ RJ45 Copper Transceiver, 10GBase-T Module for Cisco SFP-10G-T-S, Ubiquiti, D- Link, Supermicro, Netgear, Mikrotik (Cat6a/7, 30-Meter)

and

fibay
10G SFP+ to RJ45 for Cisco SFP-10G-T Netgear AXM765 Mikrotik Finisar 10GBase-T SFP+ Transceiver Module Copper, Cat6a/7, 30M

I can get a DHCP lease on these but they will not ping out to anything else on the switch.

Ideally I would direct attach without the switch, but (inform me if I am incorrect) these old Chelsio NICs do not have compatible SFP+ to RJ45 modules because the card does not supply enough power.

To connect the RJ45 runs I have flexboot Cat6 cables from Monoprice; they are only 5-foot runs while I am testing.

I have Cat6, but the run jumps through a Cat6 patch panel and two wall jacks to get from NAS to workstation, which is too much for 10GbE, plus the run is approx 60 meters.

The run will be approx 55-60 meters, and I will upgrade it to Cat6a or Cat7 depending on your recommendation. I could also go fiber if that works better?

I can put a 10GbE card with SFP+ in my workstation. The Supermicro has all its PCIe slots taken, so my only option there is replacing the Chelsio card with something else like an Intel X540-T2.

The switch can be part of the mix or taken out, with just direct attach SFP+ or 10GBASE-T cables. All the runs are short; the problem comes in with the workstation being about 200 feet away.