Advice on NAS VLAN

I know the rule of thumb is “don’t route storage,” so I need some advice on how to set this up.
To keep it simple, I have Default, Server, and IoT VLANs.
I have two new XCP-ng servers that I’m setting up, and a separate TrueNAS server.
Is it best to set the switch ports that all three of these servers connect to to the Server VLAN and call it a day? Or is it better practice to leave them on the default VLAN and then assign my VMs to the Server VLAN from within XCP-ng?
All three of these servers are connected over SFP+ 10Gb, and my router is pfSense over 1Gb Ethernet. I would like to use TrueNAS NFS for shared storage.
Any recommendations to best practices in a homelab for this type of use case?
Please let me know if I left out any helpful details.
Thanks,

Joe

I am not sure if it is a good practice or not, but I have my TrueNAS as well as my Synology connected to multiple VLANs. In the case of the Synology I have two physical connections; in the case of TrueNAS it is virtualized, and Proxmox provides multiple virtual NICs. I then restrict my shares to specific subnets for each VLAN in question, or even to specific IP addresses. I don’t use the default VLAN (VLAN 1) for anything, as that is bad security practice. If your TrueNAS box doesn’t have multiple physical NICs, I believe there is a way to set up virtual NICs in TrueNAS, but I have no experience doing it.
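
For reference, TrueNAS handles that restriction per share in its GUI; on a plain Linux NFS server the equivalent lives in /etc/exports and looks roughly like this sketch (the paths and subnets are invented for illustration):

    # /etc/exports -- paths and subnets are examples only
    /mnt/tank/media        192.168.20.0/24(ro,sync,no_subtree_check)
    /mnt/tank/vm-backups   192.168.30.10(rw,sync,no_subtree_check,no_root_squash)

    # re-export after editing
    exportfs -ra

Each share only answers the listed subnet or host, so a client on any other VLAN cannot mount it even if routing would let it reach the NAS.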

As far as the switch ports, I think you have to make them all trunk (tagged) ports so that they will pass all of the VLANs to XCP-ng, and then you can assign VLANs to specific VMs. That’s the way I do it in Proxmox, but I have no experience with XCP-ng. I assume it is similar.
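
For what it’s worth, going by the XenServer/XCP-ng xe CLI the flow on a trunked port is roughly the sketch below; the VLAN tag and all UUIDs are placeholders:

    # create a pool-wide network for the Server VLAN (tag 30 is only an example)
    xe network-create name-label="Server VLAN 30"

    # bind that network to VLAN tag 30 on the trunked physical interface (PIF)
    xe vlan-create network-uuid=<network-uuid> pif-uuid=<trunk-pif-uuid> vlan=30

    # give a VM a virtual NIC on that network
    xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=0

The switch port keeps carrying the tagged VLANs; the hypervisor strips the tag, so the VM itself needs no VLAN configuration.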

I definitely appreciate the guidance!
My TrueNAS has two physical connections. I was thinking about setting up a LAGG, though at the current time I don’t think I need that much bandwidth. I could do two VLANs instead.
So do you have one connection on the same VLAN as your Proxmox, and the other on the same VLAN as your workstation (as an example)?
Currently I do have my workstation on the “Server” VLAN so my storage wouldn’t route. You do bring up a good point about not using the default VLAN; I really need to work on moving stuff off of it.

I found for my workloads a LAGG wasn’t worth the overhead and latency that it added. My connections seem faster without it.
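
For anyone weighing up the LAGG idea: on TrueNAS CORE the GUI builds a FreeBSD lagg interface underneath, roughly like this sketch (interface names and the address are examples, and the switch ports on the other end must be configured as an LACP group too). Note that LACP balances per flow, so a single NFS client still tops out at one link’s speed, which is part of why it is often not worth the trouble.

    # FreeBSD-style LACP aggregate of the two 10Gb ports (ix0/ix1 are examples)
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
    ifconfig lagg0 inet 192.168.30.5/24 up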

I have six VLANs total: trusted, untrusted, guest, IoT, television, and management. Guest is for my kids and any actual guests. IoT is for the Ring cameras and Ring alarm system. Television is all my streaming devices. Untrusted is anything that is public-internet facing, like my Cloudflare Tunnel connector, my WordPress websites, Nextcloud, etc. Trusted is my computer and my wife’s computer. Management is my Proxmox web UI, the UI for my switch, my WAP, etc. Trusted can traverse any other VLAN. All the other VLANs are completely segregated, some going out through VPN (television), some with IP filtering and ad blocking, some without.

I basically have two sets of data on my NAS machines: data that is accessed from the trusted VLAN (such as pictures, financial docs, etc.) and data accessed from the untrusted VLAN (NFS shares for Proxmox, Docker, etc.). I don’t want to have firewall rules to allow access to the NAS machines from the untrusted VLAN, as that slows things down. By having the NAS on both VLANs, the traffic only goes to my L2/L3 switch, and pfSense doesn’t have to do any inter-VLAN routing. It’s much faster that way.


One way to keep the NAS away from everything but the hypervisors is to have a separate switch connecting them for storage traffic, with each machine having 2+ interfaces: dedicate one interface per machine to storage traffic and use the other interfaces for the other VLANs.

This way you have security and speed for shared hypervisor storage.

If you need data shares that you mount into other machines or VMs, identify which VLANs need shares and then create virtual file servers as VMs placed in those VLANs, exporting NFS from there; again, no routing is needed. This has a bit of overhead, but neither storage traffic nor data shares will need routing. Note that in this setup no data shares are mounted from the NAS by any machines other than the hypervisors.
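
To make the “no routing” part concrete: a VM simply mounts its share from the file-server VM that sits in the same VLAN, for example with an fstab entry along these lines (the server address and paths are made up):

    # /etc/fstab on a client VM -- 192.168.40.10 is a file-server VM in the same VLAN
    192.168.40.10:/srv/exports/shared   /mnt/shared   nfs   rw,hard,vers=4.2,_netdev   0 0

Because client and server share a subnet, the traffic never touches the router.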

The selection of VLANs and their permissions is highly dependent on your use cases, your user base, and your risk assessment / risk appetite. Since none of that was described, proposing any design is a shot in the dark.

Things to consider are your use cases, the devices you have, and the intensive data streams you expect between them and to the Internet. Also, according to your policy, you may want to blacklist Internet destinations for some VLANs and whitelist Internet destinations for other VLANs. As a limited example, consider this:

  • IoT VLAN policy - no Internet access by default; whitelist external update servers (as those are few and can easily be determined)

  • entertainment VLAN policy - full Internet access; blacklist selected external destinations (devices like Fire OS and PlayStation access so many changing destinations that whitelisting them would be a whack-a-mole game)
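
pfSense is configured through its GUI rather than a rules file, but expressed in pf-style syntax the two example policies read roughly like this (interfaces, tables, and addresses are placeholders, not real rules):

    # placeholders -- interface names and addresses are examples only
    iot_if = "igb1.30"
    ent_if = "igb1.40"
    table <iot_update_servers> { 203.0.113.0/24 }    # the few known update hosts
    table <ent_blocklist>      { 198.51.100.0/24 }   # destinations chosen to block

    # IoT VLAN: whitelist only, default deny
    pass in quick on $iot_if proto { tcp, udp } from any to <iot_update_servers> port { 80, 443 }
    block in quick on $iot_if from any to any

    # entertainment VLAN: default allow, blacklist selected destinations
    block in quick on $ent_if from any to <ent_blocklist>
    pass in quick on $ent_if from any to any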

You may have needs for VLANs for secure infrastructure admin consoles, exposed DMZ servers, machines with outbound routing via VPN, a guest device VLAN, servers accessible from several VLANs, servers reachable only from the DMZ, etc. Depending on your risk modeling this can become quite fine-grained and detailed, so do not expect any advice here to fit your needs, especially if you have not yet sat down to work out what those needs are.

Thank you very much, I appreciate you taking the time to help me out.
I really think my current NAS is not spec’d high enough to be a proper storage server for my VMs. I need to do some research on what those specs need to be; building one out to serve purely as shared storage, attached to the same switch as my hypervisors, may be a great option.
There is definitely a lot to think about regarding my overall VLAN policy; I will work on that as well.
Thanks again!

I forgot to mention WHY I have a separate switch exclusively for storage traffic: if you want to update the main switch’s firmware or need to power it down for some reason, you would also need to shut down all VMs that use virtual disks on the NAS, otherwise they may get shredded. With a small separate switch for storage traffic, you only need to shut down the VMs when you want to power down or update that storage switch. It won’t cost you much, but at some point you will be very grateful to have spent the extra bucks.


That is WAY too much trouble for my taste. I don’t want to keep my NAS away from everything in the first place. I just want to keep the traffic on my L2/L3 switch and not have to have it routed by pfSense. I only have a couple of VMs using NFS shares from the NAS. All VMs themselves are stored locally on my Proxmox servers. Most of my workloads are dockerized and access the NAS using the Docker NFS driver. The only thing my hypervisors use the NAS for is backing up my VM images.
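
For anyone curious, that pattern is Docker’s local volume driver with NFS options; a sketch with made-up addresses and paths:

    # named volume backed by an NFS export on the NAS (address and path are examples)
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.40.5,nfsvers=4,rw \
      --opt device=:/mnt/tank/appdata \
      nas-appdata

    # any container can then use it like a normal volume
    docker run -d --name demo -v nas-appdata:/data alpine sleep infinity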

I’m in the middle of rebuilding my network following the “don’t route storage” principle.

Data VLAN

This will have three servers: NAS 1, NAS 1 backup, and an application server (XCP-ng). All three are built with Supermicro motherboards with three Ethernet ports: one for IPMI, and two to do what you want with. One of those Ethernet ports on each server gets assigned to the web configuration interface.

There is a direct connection from my primary NAS to the backup NAS; that will be 10Gb. The backup NAS will pull snapshots from the primary NAS. It will not have any other data connections.
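
TrueNAS would set that up as a replication task in the GUI, but underneath it is ZFS send/receive over SSH; run from the backup NAS it is roughly this sketch (dataset names, snapshot names, and the link address are examples):

    # on the backup NAS: pull the delta between two snapshots over the direct link
    ssh root@10.10.10.1 "zfs send -i tank/data@auto-2024-01-01 tank/data@auto-2024-01-02" \
      | zfs receive -F backup/data

The -i flag sends only the changes between the two snapshots, and -F rolls the target dataset back to the last common snapshot before applying them.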

There’s no internet access from the Data VLAN, and no DHCP server. All IP addresses are hardcoded on each of the three servers.

The primary NAS will also have a direct 10Gb connection to the application server. If I had extra money burning a hole in my pocket, I might buy a 10Gb switch rather than doing the direct connections.
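
On the XCP-ng side of such a direct link, the storage NIC just needs a hardcoded address, and the NFS storage repository can then point at the NAS’s address on the same link; a sketch with example names, addresses, and paths:

    # find the PIF for the NIC that goes straight to the NAS (eth2 is an example name)
    xe pif-list device=eth2 params=uuid

    # give it a static address on the point-to-point storage subnet
    xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.10.20.2 netmask=255.255.255.252

    # create the shared NFS SR against the NAS's address on that link
    xe sr-create name-label="NAS NFS" type=nfs content-type=user shared=true \
      device-config:server=10.10.20.1 device-config:serverpath=/mnt/tank/vmstore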

All three servers have their IPMI and web interface ports on a Mgmt VLAN. There’s no internet access from that VLAN, and no DHCP server. All IP addresses are hardcoded.

The primary NAS server will also have an Ethernet connection on the Computer VLAN. It’s what most people would consider their trusted network. It will offer file sharing and nothing else.

The application server is where “don’t route storage” gets really interesting. As mentioned above, it has IPMI and an Ethernet connection for the web UI on the Mgmt VLAN. It will also have Ethernet connections on three other VLANs:

  • Computer (trusted)
  • Craptastic (everything untrusted)
  • ?

Craptastic includes TV and streaming devices, phones, cameras, and anything else that needs to talk to the Internet to work.

There is one other VLAN that the application server will be connected to, but I’m waiting in my dentist’s office and don’t have my notes handy. Clearly it must be the most important VLAN.

If this plan makes no sense to you, it’s most likely the plan, and not you.

Edit:

The Mgmt VLAN can be on one of your switches that carries other traffic, or on a separate switch. It’s convenient to just leave it on one of your regular switches. OTOH, that’s six Ethernet ports that I can free up if I’m running out of space.

Can’t comment on your idea of assigning the VMs to the Server VLAN while the servers themselves remain on the Default VLAN. Don’t know enough to know if that’s a good idea or not.

To me, it would be normal to put all your servers on your Server VLAN. Then if you’re going to access anything on your TrueNAS server (NFS shares) or your XCP-ng server (applications) from a different VLAN, and you want to avoid routing your storage, you will need an additional LAN port on each server for each additional VLAN you want to connect to.

So your TrueNAS server will need a second LAN port on the Default VLAN, in addition to the one it has on the Server VLAN, so that the NFS shares are available to the computers on the Default VLAN.

If your XCP-ng server is going to be accessed from devices on your IOT VLAN and your Default VLAN, you’ll need two more LAN ports in addition to the one it already has on the Server VLAN.

Whether those additional ports need to be 1Gb or 10Gb, RJ45 or SFP/SFP+, is entirely dependent on what you’re trying to do.

Maybe laying it out like this will help to make it clear:

TrueNAS server LAN ports

  • Server VLAN (SFP+)
  • Default VLAN (?)

XCP-ng server LAN ports

  • Server VLAN (SFP+)
  • Default VLAN (?)
  • IOT VLAN (?)

etc


If you scroll up, I’ve got a long comment in this thread from a couple of days ago explaining the details of my layout. Like you, I’ve got three servers, although it’s 2 TrueNAS and 1 XCP-ng. I don’t route anything. But to do that, my three servers are going to use 11 or 12 network connections just for them.

HTH

This is incorrect. You can have several hundred VLANs on just one physical LAN port.
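
On a Linux-based box the mechanics look roughly like this: the switch port is a trunk carrying the tags, and the host splits them into tagged subinterfaces (names, tags, and addresses are placeholders; TrueNAS exposes the same thing as VLAN interfaces in its network settings):

    # two VLANs on one physical NIC (eth0); the switch port must trunk tags 30 and 40
    ip link add link eth0 name eth0.30 type vlan id 30
    ip link add link eth0 name eth0.40 type vlan id 40
    ip addr add 192.168.30.5/24 dev eth0.30
    ip addr add 192.168.40.5/24 dev eth0.40
    ip link set eth0.30 up
    ip link set eth0.40 up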

Your setup is useful for good performance. It does not help a lot for security, because you have shifted the segregation from the network to the NAS and to the application server, and if app VMs have a VLAN interface into the Data VLAN you also need to consider each such VM for segregation. What I mean is that each vulnerability in any of these systems would now allow an attacker to bridge from the less trusted VLAN they come from into the Data VLAN or management VLAN.

If this is not a concern for you, your setup looks good.

The latter wouldn’t work but I guess it is not what you really meant to say. You probably meant these two different cases:

(1) assign the VLAN to the switch ports where the NAS and XCP-ng connect physically to the switch, and do not use VLANs on the (virtual) interfaces of the NAS and the VMs.

(2) make the physical switch ports where the NAS and XCP-ng are connected trunk ports, and assign the Server VLAN on the NAS interface and on the virtual VM interfaces in XCP-ng.

I would also be interested in the pros and cons, specifically for performance, of these two cases.

I assume (2) could have an advantage if VMs communicate with each other on the Server VLAN, as that traffic would hopefully stay on the virtual switch of the hypervisor and not go through the physical switch.
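
In xe terms the difference between the two cases is small; a sketch with placeholder UUIDs and tag:

    # case (1): the switch port is an access port in the Server VLAN --
    # the VM just uses the untagged network that sits on that PIF
    xe vif-create vm-uuid=<vm-uuid> network-uuid=<untagged-network-uuid> device=0

    # case (2): the switch port is a trunk -- tag the Server VLAN on the host,
    # then attach the VM to that VLAN network
    xe vlan-create network-uuid=<server-net-uuid> pif-uuid=<trunk-pif-uuid> vlan=30
    xe vif-create vm-uuid=<vm-uuid> network-uuid=<server-net-uuid> device=0

As far as I know, VMs on the same host that share a network talk over the hypervisor’s virtual switch in both cases; the trunk mainly buys you the flexibility to add more VLANs later without touching the physical switch port.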