Painfully slow FreeNAS Performance via 10GbE

I have 2x 10GbE FreeNAS machines which perform poorly, assuming the speeds others report are accurate
(and I assume they are).

To test transfer speeds on the unit that doesn’t yet have data,
I striped 7x 6TB 7200 rpm (~100 MB/s each) drives (System 1, below: MacPro3,1).

I have, however, transferred data between my Windows machine and my MacBook Pro (ATTO 10GbE-to-Thunderbolt ThunderLink) and was able to get over 400 MB/s transferring to and from NVMe drives.

I just can NOT figure out how to make FreeNAS generate much throughput.

I’m considering just booting the MacPro into OS X and setting the drives up in a striped array there, to test the performance via OS X and troubleshoot the hardware or software that way…

Any help would be appreciated. If you’d like me to provide info, can you please make it easy for me to understand which information you’d like…? Ideally, request a screenshot and define exactly where I’d go to provide said screenshot.

CONFIGURATION:

NETWORKING:
• SWITCH 1: D-Link DXS-1210-12SC 10GbE SFP+ switch
• SWITCH 2: Airport Extreme + AP Express extender

• WiFi via Airport + Extender
• SMB sharing
• 10GbE via SFP+ Fiber

  1. MacPro3,1 Dual Xeon CPU
    • 2x 2.8 GHz - Quad Xeon (X5482)
    • 16GB ECC RAM
    • 7x HGST Ultrastar 6TB SAS 7200 rpm
    • LSI 9211-8i SAS controller
    • 10GbE SFP+ Network card (shows connected in FreeNAS)
    • TrueNAS CORE 12.0 Beta
    LESS THAN 100 MB/s … even with 7x striped drives with no data.

  2. Dell PowerEdge T320 (single CPU)
    • FreeNAS Version: 11.3-U2.1
    • E5-2403 0 @ 1.80GHz
    • 48GB DDR3 ECC RAM
    • 10 GbE SFP+
    • SMB Sharing
    • LSI 9211-8i SAS controller
    • 8x 10TB HGST IBM Ultrastar SAS 7200 rpm
    • RAIDZ2
    • Fastest transfer speeds up to 180 MB/s
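For context, here’s the back-of-envelope arithmetic I’m working from (the ~100 MB/s per-drive figure is my assumption, not a benchmark):

```python
# Rough throughput ceilings -- the per-drive figure is an assumption,
# not a measured benchmark.
line_rate_10gbe = 10_000 / 8     # 10 Gb/s link ~= 1250 MB/s raw
per_drive = 100                  # assumed sequential MB/s per 7200 rpm drive

# System 1: 7-way stripe; sequential throughput scales roughly linearly
stripe_ceiling = 7 * per_drive   # ~700 MB/s
print(f"10GbE line rate: {line_rate_10gbe:.0f} MB/s")
print(f"7x stripe ceiling: ~{stripe_ceiling} MB/s")

# Observed: < 100 MB/s -- slower than a single drive, so neither the
# pool nor the 10GbE link should be the limiting factor.
```

So if those assumptions are anywhere near right, something other than the disks or the link itself is throttling the transfers.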

Does this sound correct…?

IF YOU NEED ANY INFO FROM ME, PLEASE STATE WHICH SCREENSHOTS TO ATTACH

THANK YOU!!! :slight_smile:

I can’t help with the problem, but on big files like a long video clip, I get around 112 MB/s to my main FreeNAS box. The server has aggregated 10GbE connections, and the workstation a 1GbE connection. That’s essentially a saturated gigabit link. Nothing special about my server either: just the onboard drive controller and eight HGST 10TB SATA drives in a RAIDZ configuration.

So you’re getting 112 MB/s over 10GbE…?
Via SMB… and of course 10GbE on both the FreeNAS box and the client, yes…?

No, that’s from the workstation, which only has a 1GbE connection to the switch. Lots of headroom for lots of other workstations in my system. I haven’t ever tested how many workstations I can have pushing this much data at one time. The most I ever saw was around 150 MB/s in a video-editing test of 10 clients, 5 video streams each, with an average of just over 120 MB/s and peaks up to 150. That’s our worst-case test; if the system passes that, then our normal 2 streams will be fine. I only have dual 10GbE interfaces for redundancy.
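To spell out the “saturated gigabit” arithmetic (the overhead factor is a ballpark, not a figure specific to my hardware):

```python
# Why ~112 MB/s means a 1GbE link is saturated.
raw = 1000 / 8             # 1 Gb/s ~= 125 MB/s raw line rate
overhead = 0.94            # ballpark: Ethernet + IP + TCP framing eats ~5-6%
practical = raw * overhead
print(f"raw: {raw:.0f} MB/s, practical ceiling: ~{practical:.0f} MB/s")
# 112 MB/s observed sits right at that practical ceiling.
```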

How many stripes or mirrors…? I’d assume your bottleneck would be the spinning drives, not the network… no?

The drives will definitely be my bottleneck. If I’m ever able to work in the office again, I need to try higher-bit-rate video files and see what my ultimate maximum will be. All 8 drives are in the same RAIDZ, which is about as good as it gets without sacrificing more capacity.
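For what it’s worth, the streaming ceiling I’d expect from the pool, assuming single-parity RAIDZ and ~150 MB/s sequential per disk (both assumptions, not measurements):

```python
# Streaming ceiling for an 8-drive RAIDZ pool. Assumptions (not measured):
# single-parity RAIDZ (so 7 data drives) and ~150 MB/s sequential per disk.
data_drives = 8 - 1
per_drive = 150
ceiling = data_drives * per_drive
print(f"~{ceiling} MB/s sequential ceiling")
# Well above one saturated 1GbE client, so several editing workstations
# can stream at once before the disks become the limit.
```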