I am looking for advice on what kind of server hardware is required to run a bare-metal pfSense installation that can route traffic on 10Gbps links without the pfSense box becoming the bottleneck.
In case the following information helps: I plan to use a 1U Dell PowerEdge, ideally 13th generation, since I like iDRAC for IPMI, don't want to go below iDRAC8, and gen 13 hardware has dropped enough in price to make the expenditure at least conceivable. I.e., a Dell PowerEdge model R[2-4]30.
But which internal specs?
The pfSense box must be able to route between two internal 10Gbps NICs today. My uplinks are currently only 2x1Gbps, but I expect that to change to 1x10Gbps soon, and the router should be able to handle that without any upgrade other than sticking in a second NIC.
Suggestions, pointers, and advice are much appreciated.
If you look at the specs of the Netgate pfSense appliances, you'll find that 10Gbps NICs start appearing with the Atom C3558 and top out at the Xeon D-1541. One thing to look at on that chart is the iPerf3 and IMIX performance figures for each model.
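To see why the IMIX numbers on that chart are so much lower than the iPerf3 numbers, it helps to think in packets per second rather than bits per second. Here's a quick back-of-envelope sketch; the "classic" 7:4:1 IMIX packet mix used below is an assumption on my part, and real traffic will vary:

```python
# Back-of-envelope: iperf3 bulk streams use near-MTU packets, while IMIX
# models a realistic mix of small and large packets. At a fixed bit rate,
# smaller average packets mean far more packets per second to forward.

LINK_BPS = 10e9                                  # 10 Gbps link
IPERF_PKT = 1500                                 # bytes, near-MTU bulk transfer
IMIX_PKT = (7 * 40 + 4 * 576 + 1 * 1500) / 12    # ~340-byte classic IMIX average

def pps(link_bps, pkt_bytes):
    """Packets per second the router must forward to fill the link."""
    return link_bps / (pkt_bytes * 8)

print(f"iperf3-style: {pps(LINK_BPS, IPERF_PKT):,.0f} pps")
print(f"IMIX-style:   {pps(LINK_BPS, IMIX_PKT):,.0f} pps")
```

Roughly a 4x difference in per-packet work, which is why per-core performance matters so much for IMIX numbers.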
My thought would be to pick a CPU that at least equals the D-1541 in both single-core and total performance. You're probably going to want to look at the E5-2630 and E5-2640 ranges in the v3 and v4 generations. In my mind, the lower-core-count, higher-clock models will give you the best performance; however, they are also the most expensive. The E5-2643 v4 is $525+ used, whereas the E5-2630 v3 is around $40.
4 GB of memory is antiquated, but I think anything over 8 GB would be overkill. If you find that you need more for some reason, you'll have plenty of slots to add another stick or two.
Hope that helps.
@mouseskowitz Thank you so much for your recommendations and especially for including your thinking behind them in your answer.
One follow-up question: looking at the Netgate specs, I notice that IPSec performance appears to be on the low side, topping out at ~2.8Gbps using AES-128-CBC.
How (if at all) would your recommendations change if a large percentage of the uplink traffic were AES-256-GCM IPSec tunnels? While likely infrequent, I can see total IPSec traffic bursting to about 4Gbps before long. Thanks!
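As a quick back-of-envelope on my own numbers (assuming that ~2.8Gbps figure is roughly a per-tunnel ceiling and that tunnels scale independently, which I haven't verified):

```python
# Sketch: if per-tunnel IPsec throughput is capped, how many parallel
# tunnels would I need to cover my anticipated 4 Gbps burst?
import math

PER_TUNNEL_GBPS = 2.8   # ceiling from the Netgate spec sheets (AES-128-CBC)
TARGET_GBPS = 4.0       # my anticipated burst

tunnels_needed = math.ceil(TARGET_GBPS / PER_TUNNEL_GBPS)
print(f"Tunnels needed to cover {TARGET_GBPS} Gbps: {tunnels_needed}")
```

So even two tunnels would cover the burst, if traffic can actually be split that way.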
One thing to keep in mind is this:
One of the reasons Netgate is working on the TNSR router is to get past some of the challenges of reaching 10Gb speeds in a FreeBSD-kernel-based system. There does come a point where, if you want the absolute fastest 10Gb+ routing speeds, pfSense becomes more challenging due to kernel design.
I believe the speed limit on encryption is due to each tunnel being limited to a single core for security reasons. So yes, frequency is key there. If your traffic is such that it can be split across multiple tunnels, you can increase your aggregate throughput that way. This white paper shows a six-core Westmere processor at 2.4 GHz saturating a 10-gig link with six tunnels. That's several generations older than what you're looking at. If I'm extrapolating correctly, you should be able to saturate your bandwidth with multiple tunnels using something as "low end" as an E5-2630 v3. The question is what tuning the system will need, and whether two CPUs will be required to handle both the encryption and firewall loads.
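The extrapolation can be sketched like this. Scaling purely by clock speed, and deliberately ignoring the several generations of IPC and AES-NI improvements since Westmere (so this should be a pessimistic floor, not a promise):

```python
# Data point from the white paper: a 6-core 2.4 GHz Westmere saturated
# 10 Gbps with six tunnels, i.e. roughly 1.67 Gbps of IPsec per core.
# The E5-2630 v3 has 8 cores at the same 2.4 GHz base clock.

WESTMERE_GBPS_PER_CORE = 10.0 / 6   # ~1.67 Gbps per 2.4 GHz core
WESTMERE_CLOCK_GHZ = 2.4
E5_2630V3_CLOCK_GHZ = 2.4           # base clock
E5_2630V3_CORES = 8

per_core = WESTMERE_GBPS_PER_CORE * (E5_2630V3_CLOCK_GHZ / WESTMERE_CLOCK_GHZ)
aggregate = per_core * E5_2630V3_CORES
print(f"Pessimistic aggregate IPsec estimate: {aggregate:.1f} Gbps")
```

Even before counting newer-generation per-clock gains, that lands above a 10-gig link, which is why I think the E5-2630 v3 is plausible if the tunnel count cooperates.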
As Tom noted, it looks like there are also some kernel limitations you might bump into if you push this system hard enough.
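On the tuning front, a few commonly cited FreeBSD tunables for pushing 10Gb forwarding look like this. The values below are illustrative starting points I've seen suggested, not pfSense defaults, so test before relying on them:

```
# /boot/loader.conf.local -- illustrative 10Gb tuning starting points
kern.ipc.nmbclusters="1000000"   # enlarge mbuf cluster pool for 10G NIC rings
net.isr.maxthreads="-1"          # one netisr thread per CPU core
net.isr.bindthreads="1"          # pin netisr threads to their cores
```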
Big thanks to both of you for your well thought-through and helpful responses. I will need to look into the TNSR feature set.
As coincidence would have it, I realized about an hour ago that I had misread the pfSense roadmap: MPTCP support isn't coming in pfSense 2.6, as I had previously thought.
Instead, MPTCP support is scheduled for some undefined future pfSense version. MPTCP was going to be one of the drivers of my anticipated increase in IPSec use.
Either way, I still need to put a physical router in front of a couple of personal machines for which I route /24s, so the hardware recommendations were super useful, and I'll start researching price ranges now that I understand the specs.
I will start a new thread (if what I now wish to know hasn’t already been answered and I find it in a search), since my follow-up set of questions belongs under a different topic than this thread. Thanks again!
If you want to route 10Gbps traffic with few rules and no inspection whatsoever, a typical general-purpose CPU is OK (e.g. the Xeons Netgate lists for their firewalls). If you want to do more than just routing on that same traffic, then a dedicated SoC is required at that speed to keep up.
Even though pfSense is my favorite firewall, I would look at dedicated hardware from CheckPoint or Netgate to do what you want. pfSense relies completely on software run by the CPU for its performance, and thus takes a hit for encryption/decryption/filtering/IPS/IDS/etc.
If you are ready to invest in a 1U Dell, you could just as easily put that money toward a dedicated hardware firewall instead, which will require a lot less care and maintenance and will provide superior throughput.
A 1U Dell PowerEdge for pfSense? Seems like overkill unless you're routing a ton of traffic. Also, if you can direct-attach the servers to each other, do that: it's cheap and it works. Tom did a video on it a few years back; search for "10 gig" on the channel.
Of course, if you do require 10-gig routing, then yeah, you may need to think about a new software kernel, not just new hardware.