Odd speed differences on a Unifi / pfSense setup

Hey everyone

I have a strange bandwidth issue on my multi-switch Unifi and pfSense setup, and it's starting to annoy me.

I have the following:

4x 48-port Gen2 Unifi switches connected via Ubiquiti 10Gb DACs to a Unifi aggregation switch. This is then connected to my Netgate 6100, currently via SFP on a 1Gb connection.

I have multiple VLANs set up for people, and everything is ticking along nicely. However…

Some people see speeds of 700/900, and some see 200/200 or less.

Tips on where to look and how to go about diagnosing this, please. I have already tried the following:

Changing cables all the way through
Changing port in the office
Testing the ports and links between floors

All the usual stuff

Btw.

There are no limiters or load balancing configured here

The only difference is that some have static IPs assigned through NAT and VIPs, while some just use the regular direct gateway.

What does iperf say?

@LTS_Tom … Cue learning iperf … knowledge loading

I just had a somewhat similar topic, but with different hardware.
So you are routing all your traffic through the pfSense firewall too?

First of all, the iperf suggestion is a good start for getting reliable measurements.
If you are on a Windows machine, you can download iperf from the following website:

Then you'll have to activate the iperf server on your firewall under:
Diagnostics > iperf > IPerf Server

I usually enter the default port number in the field there, which is 5201, just for good measure, and spin up the daemon.

On your Windows machine, you then have to open PowerShell and cd into the directory where iperf3.exe is located after downloading.

Then you type the following into the shell:
./iperf3.exe -c 111.111.111.111
and hit Enter… just replace the 1s with the IP of your firewall where the server is running.

You can also add the -P switch with a number, which runs multiple streams in parallel.
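For example, a few common invocations (the IP is the same placeholder as above; -P, -t, and -R are standard iperf3 flags):

```shell
# 10 parallel streams for 30 seconds against the firewall's iperf server
./iperf3.exe -c 111.111.111.111 -P 10 -t 30

# Same test with -R (reverse): the server sends, so you measure download
./iperf3.exe -c 111.111.111.111 -P 10 -t 30 -R
```

Running both directions matters here, since upload and download can bottleneck in different places.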

If you have any other questions, check out my post; maybe you'll find something interesting.

I will also mark your post here to get notified.


@Localhost Epic reply, thank you so much. I have just set up the pfSense box ahead of going in this week with iperf on my laptop to try this out.

Updates will follow

@LTS_Tom

I've been sick, so I never got into the office until now. However, I have run some iPerf3 testing with single and multi-threaded connections, and from what I understand of a 1Gbit connection, what I'm seeing is about right. Maybe you can confirm.

Old Port
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 106 MBytes 88.6 Mbits/sec sender
[ 4] 0.00-10.00 sec 106 MBytes 88.6 Mbits/sec receiver
[ 6] 0.00-10.00 sec 105 MBytes 88.2 Mbits/sec sender
[ 6] 0.00-10.00 sec 105 MBytes 88.2 Mbits/sec receiver
[ 8] 0.00-10.00 sec 106 MBytes 88.5 Mbits/sec sender
[ 8] 0.00-10.00 sec 106 MBytes 88.5 Mbits/sec receiver
[ 10] 0.00-10.00 sec 106 MBytes 88.5 Mbits/sec sender
[ 10] 0.00-10.00 sec 106 MBytes 88.5 Mbits/sec receiver
[ 12] 0.00-10.00 sec 105 MBytes 88.4 Mbits/sec sender
[ 12] 0.00-10.00 sec 105 MBytes 88.4 Mbits/sec receiver
[ 14] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec sender
[ 14] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec receiver
[ 16] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec sender
[ 16] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec receiver
[ 18] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec sender
[ 18] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec receiver
[ 20] 0.00-10.00 sec 105 MBytes 88.2 Mbits/sec sender
[ 20] 0.00-10.00 sec 105 MBytes 88.2 Mbits/sec receiver
[ 22] 0.00-10.00 sec 105 MBytes 88.1 Mbits/sec sender
[ 22] 0.00-10.00 sec 105 MBytes 88.1 Mbits/sec receiver
[SUM] 0.00-10.00 sec 1.03 GBytes 883 Mbits/sec sender
[SUM] 0.00-10.00 sec 1.03 GBytes 883 Mbits/sec receiver
CPU Utilization: local/sender 8.1% (2.0%u/6.1%s), remote/receiver 84.2% (8.2%u/76.0%s)

New Port
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 107 MBytes 89.7 Mbits/sec sender
[ 4] 0.00-10.00 sec 107 MBytes 89.6 Mbits/sec receiver
[ 6] 0.00-10.00 sec 104 MBytes 86.8 Mbits/sec sender
[ 6] 0.00-10.00 sec 103 MBytes 86.6 Mbits/sec receiver
[ 8] 0.00-10.00 sec 102 MBytes 85.7 Mbits/sec sender
[ 8] 0.00-10.00 sec 102 MBytes 85.6 Mbits/sec receiver
[ 10] 0.00-10.00 sec 103 MBytes 86.3 Mbits/sec sender
[ 10] 0.00-10.00 sec 103 MBytes 86.1 Mbits/sec receiver
[ 12] 0.00-10.00 sec 97.0 MBytes 81.3 Mbits/sec sender
[ 12] 0.00-10.00 sec 96.9 MBytes 81.2 Mbits/sec receiver
[ 14] 0.00-10.00 sec 105 MBytes 87.9 Mbits/sec sender
[ 14] 0.00-10.00 sec 105 MBytes 87.8 Mbits/sec receiver
[ 16] 0.00-10.00 sec 112 MBytes 94.1 Mbits/sec sender
[ 16] 0.00-10.00 sec 112 MBytes 94.0 Mbits/sec receiver
[ 18] 0.00-10.00 sec 104 MBytes 87.3 Mbits/sec sender
[ 18] 0.00-10.00 sec 104 MBytes 87.2 Mbits/sec receiver
[ 20] 0.00-10.00 sec 107 MBytes 89.8 Mbits/sec sender
[ 20] 0.00-10.00 sec 107 MBytes 89.7 Mbits/sec receiver
[ 22] 0.00-10.00 sec 106 MBytes 88.5 Mbits/sec sender
[ 22] 0.00-10.00 sec 105 MBytes 88.3 Mbits/sec receiver
[SUM] 0.00-10.00 sec 1.02 GBytes 878 Mbits/sec sender
[SUM] 0.00-10.00 sec 1.02 GBytes 876 Mbits/sec receiver
CPU Utilization: local/sender 8.3% (3.4%u/4.9%s), remote/receiver 76.4% (8.5%u/67.9%s)
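As a sanity check on the SUM lines: iperf3 reports transfer in GBytes (2^30 bytes) but bandwidth in Mbits/sec (10^6 bits), so the totals really do correspond to a saturated gigabit link. A quick arithmetic sketch (the small gap vs. the reported ~883 is just rounding of the 1.03 figure):

```shell
# Convert 1.03 GBytes over 10 seconds into Mbits/sec
# (2^30 bytes per GByte, 8 bits per byte, 10^6 bits per Mbit)
awk 'BEGIN { printf "%.0f\n", 1.03 * 1073741824 * 8 / 10 / 1000000 }'
# prints 885
```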

With this throughput I'm not sure what to suggest.

Their cameras are still freezing, and the Cloudflare speed test generally gives very erratic results.
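Since freezing cameras point more at packet loss or jitter than at raw throughput, my next step may be a UDP iperf3 run, which reports both. A sketch, assuming the firewall's iperf server is still up (placeholder IP, and 50M is just an arbitrary camera-like rate):

```shell
# UDP test at 50 Mbit/s for 30 s; the final report shows jitter and % datagram loss
iperf3 -c 111.111.111.111 -u -b 50M -t 30
```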

I had a very similar experience on very similar hardware: a Netgate (pfSense+) 6100 connected to a Unifi 24-port PoE Enterprise switch, with both the Netgate and the Unifi connected via 10G SFP+. TCP download performance from a LAN-attached PC to the Netgate was line speed from my ISP (1.2Gb). The same test run through the Unifi switch was always around 25% of the WAN download.

I ran many tests. I swapped SFP+ DAC cables from one purchased from Netgate to one purchased from Unifi; the tests were identical. I then experimented with moving the pfSense -> Unifi downlink from SFP+ (DAC) to 2.5Gb copper (RJ45) on another port. Same results.

The ONLY thing that restored my network to "normal" throughput everywhere was reducing the Netgate -> Unifi downlink to 1Gb. This obviously eliminates any potential 10Gb benefit at the Netgate/Unifi switch interface, but with all my PCs and other LAN devices at 1Gb or 100Mb, it is OK.
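For anyone wanting to try the same workaround on the pfSense side: the link speed can be forced in the GUI under Interfaces > (the assigned interface) > Speed and Duplex, or from the shell. A sketch only; the interface name ix0 is an assumption, so check yours with ifconfig first:

```shell
# Force a FreeBSD/pfSense interface down to 1G full duplex
# (ix0 is a hypothetical name; this applies to copper media types)
ifconfig ix0 media 1000baseT mediaopt full-duplex
```

Note this is not persistent across reboots; the GUI setting is the durable way to do it.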

I read a great deal about pfSense traffic shaping, switch flow control, TCP windowing, and other possible fixes, but none of them directly addressed this issue.

For the benefit of all, I am hopeful that someone with more experience may be able to shed light on the best way to configure our LANs when we have a slower WAN feeding a 10Gb (higher-speed) pfSense (router) and Unifi (switch), with 1Gb or slower LAN connections (which for me include VLANs, much like UK_TechDad described).

Tom or anyone else, can you suggest the optimal way to configure this high-speed pfSense/Unifi middle component with a lower-speed WAN and lower-speed hosts? The performance bump in the middle (pfSense/Unifi) causes more harm than good without some throttling/tuning.

Thanks in advance for anyone’s help. UK_TechDad: I fully feel your pain after almost 2 weeks of my own trial-and-error.
