10GbE PCIe 2.0 x8 NIC...?

I am looking at building a pfsense router using a small form factor PC and I’ve been investigating whether it would be cost-effective to buy a 10GbE NIC. I have found several that offer 2x SFP+ ports, but they are PCIe 2.0 x8 cards. According to Wikipedia, the maximum throughput for a PCIe 2.0 x8 slot is 4.0 GB/s. While the transceivers may be able to negotiate a 10 Gbps link, the actual throughput would be limited to the max theoretical throughput of the PCIe interface, right?

So, to clear some things up here and to address gigabit vs gigabyte: theoretically, 10 gigabits per second converts to 1.25 gigabytes per second of data transfer, so an x8 slot is fully capable of these speeds. Are the 10 gigabit NICs worth it? It depends on the use case. Maybe for internal use, if you have a 10 gigabit switch and high transfer rates on your network. As for the WAN side of things, most ISPs sell 1 gigabit circuits, so if you just want to go all out and buy these parts it will work, but it’s possibly overkill.
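For reference, here is a quick back-of-the-envelope check of that point as a minimal Python sketch. It uses raw line rates only and ignores PCIe packet (TLP) and Ethernet framing overhead, so the real-world numbers will be a bit lower:

```python
# Back-of-the-envelope check: can a PCIe 2.0 x8 slot feed two 10GbE ports?
# Raw line rates only -- PCIe packet (TLP) and Ethernet framing overhead ignored.

PCIE2_GT_PER_LANE = 5.0        # PCIe 2.0 signalling rate, GT/s per lane
PCIE2_ENCODING = 8 / 10        # 8b/10b encoding: 8 data bits per 10 bits on the wire
LANES = 8

pcie_gbps = PCIE2_GT_PER_LANE * PCIE2_ENCODING * LANES   # usable Gb/s per direction
pcie_gBps = pcie_gbps / 8                                # convert bits -> bytes

nic_gbps = 10 * 2              # both SFP+ ports at 10 Gb/s
nic_gBps = nic_gbps / 8

print(f"PCIe 2.0 x8 : {pcie_gbps:.0f} Gb/s = {pcie_gBps:.2f} GB/s per direction")
print(f"2x 10GbE    : {nic_gbps:.0f} Gb/s = {nic_gBps:.2f} GB/s")
print("slot is the bottleneck" if nic_gBps > pcie_gBps else "slot has headroom")
```

This prints 32 Gb/s (4.00 GB/s) for the slot versus 20 Gb/s (2.50 GB/s) for both ports combined, so the slot has headroom even with both SFP+ ports running flat out.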

Of course, I completely overlooked the difference between bits and bytes! I have a 1GbE max download internet connection, and I am building out a low-cost, low-power home virtualization lab with an SSD-backed NAS for centralized storage. I don’t want network throughput to be a limiting factor, and I’m shopping for an affordable 10GbE switch, but I’m aware that 10GbE may very well be overkill in a lot of situations. In any case, this is part of a larger project that is for fun and learning, so in that regard it’s all overkill, really! Haha

Thank you for clarifying that for me!

I suppose it depends on your budget and objectives, but 10G is still expensive, and those SFP+ modules are eye-wateringly expensive. Personally, I’d allocate more budget for RAM in a lab and for more ports on a new main switch. You’ll be surprised, once you fall down this rabbit hole, how your scope balloons!

Unless you are putting another 10gb card in another device on the network and the time to transfer between those two devices is critical…

or…

you know you are going to be running 10-odd machines all pulling 1Gb each, at the same time, AND transfer time on those devices is critical

then

just use gig and maybe do some port bonding / link aggregation / trunking.

This is for a home lab where I will have a centralized NAS hosting an iSCSI XCP-NG storage repository to store VM drives, etc. Because this is mostly a hobby and a self-paced educational effort in my spare time, I’m willing to spend a bit more money so I don’t have to sit and twiddle my thumbs if I need to transfer entire drives, etc. Please correct me if I’m wrong or my logic is oversimplified, but since it will mostly be me and some VMs communicating, LACP, for example, will only provide additional throughput when multiple VMs are communicating. If I am moving files to/from my NAS and my laptop, there won’t be much gain from LACP, since all traffic will be between the same MAC/IP pair and therefore only one link of the aggregation group will be utilized. But if I have one high-throughput link (and I have the storage speed to back it), I will be able to see some benefit.
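That matches how common LAG transmit hashing behaves. As a toy illustration, here is a minimal sketch assuming a layer-2 style hash (similar in spirit to Linux bonding’s `xmit_hash_policy=layer2`); the MAC addresses are made up for the example:

```python
# Toy illustration of why a single laptop<->NAS transfer stays on one member
# link of a LAG: a layer-2 style transmit hash maps each src/dst MAC pair to
# exactly one link, so one flow never spreads across links.
# All MAC addresses below are made up.

def l2_hash(src_mac: str, dst_mac: str, n_links: int) -> int:
    """XOR the two MAC addresses and pick a member link (simplified)."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links

laptop = "aa:bb:cc:00:00:01"   # hypothetical laptop MAC
nas    = "aa:bb:cc:00:00:02"   # hypothetical NAS MAC
vm1    = "aa:bb:cc:00:00:08"   # hypothetical VM MAC

links = 2                      # 2x 1GbE in the aggregation group

# Every frame of the laptop<->NAS transfer hashes to the same link...
print("laptop -> NAS uses link", l2_hash(laptop, nas, links))   # link 1
# ...while a different conversation (VM <-> NAS) may land on the other one.
print("vm1    -> NAS uses link", l2_hash(vm1, nas, links))      # link 0
```

So LACP helps aggregate many concurrent conversations, but a single laptop-to-NAS copy is still capped at one member link’s speed.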

So in this case, if it’s just for a lab, I wouldn’t route storage traffic through pfsense; I would just run a 10 gig link from box to box and bypass pfsense. Sure, it can be done if you want to, but I’m all about simplicity, and if it were me I wouldn’t want to create new VLANs and all that mess and put the extra overhead on your pfsense box. Typically, data centers set up 5k and 9k switches for storage traffic from the SAN to the hypervisor boxes. Just to give you an idea.

I agree with you: if this were a data center and I had racks of SANs and hypervisors, I would just do all the 10 gig SAN traffic at L2, both for the sake of simplicity and for the sake of security. I don’t need a firewall if all my devices that need to communicate are on the same L2/L3 segment.

I am aware that I will need to upgrade my storage throughput before I can even determine whether I will be able to saturate gigabit links for my personal data and device needs, which are outside the scope of my SAN traffic. That being said, I also know that even though this is for my home lab, I would be much better off if my personal NAS and home lab SAN were NOT in the same chassis :man_facepalming:

This discussion was intended simply as a sanity check of my understanding of the numbers that could actually be expected given a 10Gb link and a PCIe 2.0 x8 slot. I’m not made of money, despite what my hypothetical musings might suggest. Nonetheless, I have a soft spot for brainstorming contrived hypothetical ideas.

:crazy_face:

Thank you for talking this out with me!

Oh, my fault. Sometimes I get hung up on something and can get carried away and go off topic. I think what you are wanting to do will be fine. You’ll play and learn, which in my opinion is a really good thing. I also tinker with hardware and software so that I can improve my IT department.

So, you will for sure get better performance from 1 x 10Gb link rather than 10 x 1Gb links (albeit with reduced redundancy).

You will be able to max out both ports of the 10Gb NIC on an x8 slot, as @xMAXIMUSx suggested, but you will also need some fairly well set up disks to come anywhere close, I think.
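As a rough illustration of that last point, here is a quick calculation. It considers sequential throughput only, ignores iSCSI/TCP overhead, and the 550 MB/s per SATA SSD figure is an assumed ballpark rather than a measured number:

```python
import math

# Sequential throughput only; iSCSI/TCP overhead ignored.
# 550 MB/s is an assumed ballpark for one SATA SSD, not a measured figure.
GBE_1_MBps  = 1_000 / 8        # 1 GbE  line rate ~= 125 MB/s
GBE_10_MBps = 10_000 / 8       # 10 GbE line rate ~= 1250 MB/s
SATA_SSD_MBps = 550

print("SSDs to saturate 1 GbE :", math.ceil(GBE_1_MBps / SATA_SSD_MBps))    # 1
print("SSDs to saturate 10 GbE:", math.ceil(GBE_10_MBps / SATA_SSD_MBps))   # 3
```

In other words, a single SATA SSD already outruns a gigabit link, but you need roughly three of them striped (or faster storage) before a 10GbE link becomes the bottleneck.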

10Gb kit is still relatively expensive, but “expensive” is a personal and case-specific thing, so if you are looking at the 10Gb kit and thinking it’s NOT that expensive, then go for it.

My suggestion would be to get it up and running on 1Gb, see if you are maxing things out and whether the wait time is a problem, and if it is, then grab the 10Gb kit.

always good to get another set of eyes to look at things

@xMAXIMUSx, no need to apologize! I’m grateful for the feedback and additional perspective to help keep my ideas relatively sane since I like to imagine scenarios that are over the top.
10gbe in my lab is like using a Lamborghini as my daily driver, only more affordable: probably unnecessary, but it sure would be cool! :slight_smile:

@garethw, that is exactly my plan: my NAS already has dual gigabit NICs, so I’m looking to replace my old, salvaged HP storage array with some SATA SSDs, then run some tests and see whether I am able to saturate my gigabit links. If I find a bottleneck at the link(s), I’ll consider upgrading to 10GbE.
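For the link-saturation test itself, a tool such as iperf3 between the NAS and a client is the usual choice. Purely to illustrate the idea, here is a minimal single-stream sketch in Python; the port, chunk size, and duration are arbitrary choices for the example:

```python
# Minimal single-stream throughput test, purely to illustrate the idea; in
# practice a tool such as iperf3 is the usual choice. Run "server" on the NAS
# and "client <nas-ip>" on the laptop.

import socket
import sys
import time

PORT = 5201              # arbitrary test port
CHUNK = 1 << 20          # 1 MiB per send/recv
DURATION = 10            # seconds the client keeps sending

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        total, start = 0, time.time()
        while (data := conn.recv(CHUNK)):
            total += len(data)
        secs = time.time() - start
        print(f"received {total / 1e9:.2f} GB in {secs:.1f} s "
              f"= {total * 8 / secs / 1e9:.2f} Gb/s")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + DURATION
        while time.time() < deadline:
            conn.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If the reported rate sits near 0.94 Gb/s or so on a gigabit link, the network is the bottleneck; if it lands well below that, the disks (or CPU) are.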

Thank you both again for the feedback! My girlfriend’s eyes glaze over and roll back in her head when I start talking about this stuff with her… haha

Not sure if this is best asked in a new topic, but does anyone have any recommendations for reputable resellers? There are plenty of resellers on eBay, Amazon, Newegg, and other websites, but I have a hard time not being paranoid about being ripped off.

Any suggestions? Recommendations?

I’m in the UK, sorry, so I can’t offer much help there. I use “proper” hardware suppliers for important / customer stuff and Amazon (not Marketplace) for some bits.

Amazon Marketplace and eBay for stuff that I am willing to take a bit of a risk with. Check the description carefully on eBay, and if what turns up is not what they said or not in the condition they said, then send it back!

If I do use eBay for stuff that goes to customers, then I make sure I make them very, very aware that I’m getting it on eBay because [cost/availability/whatever] and that this might be an issue.

Calling SFP+ modules expensive these days? When looking at the originals, yes, but going for OEM, no: optics from fs.com are extremely cheap and highly recommended on the FreeNAS/TrueNAS forums.

OK, in my case I didn’t do my homework and ended up paying too much. I added 2x SFP modules at £20 each to a PoE switch costing £120. At the time I couldn’t find a cheap enough 16-port PoE switch, hence I went that route. For GB modules, I can’t see why they weren’t £10 each; oddly enough, I bought them from fs.com too.

Thanks for the link, @c77dk! I’ll check out fs.com