I’ve been migrating my home setup to a single server box and virtualizing everything. I’m also upgrading to 10Gb for my LAN backbone and workstations. I want to keep the UniFi setup as I like the common interface and USG monitoring, but my USG is not up to the job of routing between my VLANs.
I have created a pfSense VM which can route between VLANs. It has a single 10Gb interface.
I set up my USG purely to get the WAN onto a VLAN.
Are there any issues / disadvantages to this approach? Technically it seems to work.
Just wondering - if you’re only using the USG to put the WAN into a VLAN, why not just set the VLAN as native on the switch and plug the WAN directly in (and make sure CDP and the like is disabled on the port)?
I think he said he wanted to keep the USG in play for its monitoring capabilities and configuration interface. My only thought is that if the USG is getting old, there might be some throughput issues on top of whatever throttling pfSense is already doing to the WAN link. You might want to run some iperf-type speed tests with and without the USG, as well as before and after pfSense, to get a feel for how those devices impact traffic on your network.
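A rough sketch of how those iperf tests could look - the `iperf3` flags here (`-s`, `-c`, `-t`, `-R`) are standard, but the addresses are placeholders and the hop order depends on your topology:

```shell
# On a machine on the far side of the path you want to test:
iperf3 -s

# From the client, run one test per segment and compare:
iperf3 -c <server-ip> -t 30        # baseline: same VLAN, no routing
iperf3 -c <server-ip> -t 30 -R    # same test, reverse direction

# Repeat with the server on the other side of pfSense (crossing the
# VLAN boundary), and again with/without the USG inline on the WAN,
# to see what each device costs you.
```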
I have a question about your virtualization environment, since I don’t have any 10Gb at home and the only 10Gb networks I’ve seen were at work, where everything on the 10Gb network was physical. I understand that the physical NIC will be running at 10Gb, but for the internal virtual switches, will all of the VMs (like pfSense) connect to the virtual switch via 1Gb Ethernet? So that no matter what you put through pfSense, the theoretical maximum throughput would be no more than 1Gb? Just curious…
That’s pretty much how I would have it set up, I think.
There is some debate over running pfSense as a VM, and it might be worth having a dedicated NIC for your “WAN” VLAN (you could then connect the USG directly to the pfSense VM). Unless you have a very, very good connection, a gig port would suffice for that.
It’s also worth considering whether you need to have your servers on a separate VLAN from your devices. If it’s for lab reasons then that’s fine, but if all of your traffic is crossing the VLAN/subnet border from devices to servers, then that might not be the best way to set up your network. Good for security, maybe, but not so good for efficiency.
@garethw the reason for my question is that particular debate - I could not find a conclusive answer. Security was my main consideration when putting my servers on a VLAN: I can expose a limited number of ports and put admin interfaces etc. behind the firewall.
Most devices have low throughput anyway, and it’s actually more complex than the diagram …
I have a VLAN for CCTV and IoT, another for computers, laptops and phones, and one for guests.
I did think about putting my faster workstation/laptop on the same VLAN as the servers, as it would be faster and less of a resource hog, but it’s not that often I dump large volumes of files; it’s just that when I do, I want it to be a bit faster.
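As a rough sanity check on how much link speed matters for those occasional bulk copies, here’s a back-of-envelope calculation (the 100 GB size and ~94% protocol efficiency are illustrative assumptions, not measured figures):

```python
def transfer_time_s(size_gb: float, link_gbps: float, efficiency: float = 0.94) -> float:
    """Rough wire time for a bulk copy.

    size_gb    -- payload size in gigabytes
    link_gbps  -- link speed in gigabits per second
    efficiency -- assumed fraction of line rate usable after
                  TCP/Ethernet overhead (illustrative ~94%)
    """
    bits = size_gb * 8e9  # gigabytes -> bits
    return bits / (link_gbps * 1e9 * efficiency)

for link in (1, 10):
    t = transfer_time_s(100, link)
    print(f"100 GB over {link} Gb/s: ~{t / 60:.1f} min")
```

So a copy that ties up a 1Gb link for the better part of a quarter hour finishes in under two minutes at 10Gb, which is the whole case for putting the fast workstation nearer the servers.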
Obviously if I want to get 10Gb speeds I’ll need to upgrade my trunk, but while there are some performance limitations, as people have suggested, I’m not seeing a major issue.
I think the main issue is making sure I configure my USG so it isn’t doing any of its own routing.
I didn’t mention that my trunk connection is to XCP-NG. I’ve set up my VLANs in XCP-NG and attached them to the NIC, then added all the network connections to pfSense.
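For anyone wanting to do the same, the XCP-NG side can be done from the `xe` CLI roughly like this - the interface name, network label and VLAN tag below are placeholders for whatever your setup uses:

```shell
# Find the PIF (physical interface) UUID of the 10Gb NIC:
xe pif-list device=eth0 params=uuid

# Create a network for the VLAN, then tag it onto that PIF:
xe network-create name-label="WAN-VLAN"
xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=100

# Each VLAN network created this way can then be attached to the
# pfSense VM as another virtual interface.
```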
One of the VLANs is my WAN VLAN. pfSense itself thinks it has a WAN interface and 4 LAN ports.
I think everything is working as expected. I got most of this from watching LawrenceSystems videos. I think they hive off their lab/demos in this way behind pfSense so I figured it should work for my WAN interface.
@daninmanchester I don’t think you will find a conclusive answer because both options are fine depending on how you work.
I like the flexibility of having the pfSense box as a VM and being able to move it from one server to another without too much effort; it makes it redundant at the flick of a switch. Arguably, buying two Netgate appliances and setting them up in HA is a better solution, but it’s also more expensive.
I would still tend to run servers on the VLAN where most of the clients that use them reside, unless you are splitting a larger organisation (say, a school) into buildings or zones to reduce broadcast traffic, in which case having a dedicated server network makes more sense. However, that’s my opinion, and if your way works for you and doesn’t cause bottlenecks then crack on.
Not had the need or chance to play with 10Gb yet, so I can’t really offer any advice on that.