I am old school, starting back with LANtastic and Novell. Setting up a Windows server for file and print was simple: create a folder and share it. I have been using VMware since it was a thing, but I still set up Windows servers and file shares the same way. For example, I created a 240GB C: drive for the OS and a multi-TB E: drive for the file shares. I know, bad bad in today’s world. It takes forever to back up and, heaven forbid, restore.

What is the best way to set up a Windows server today? It seems like the answer is a smaller Windows Active Directory server virtual machine (hypervisor of choice) with the massive shares created on a NAS or SAN. I guess my question boils down to: how do you share them? Do you attach the large share to the Windows server (NFS or iSCSI) and share it out as if it were part of the server? Or do you have Group Policy map the share directly to the NAS? The NAS would then need to be joined to Active Directory to control permissions….
Any docs on today’s best practices? I am attempting to teach this ol’ dog new tricks….
There are multiple ways you can go about doing this, but my preferred way of handling large amounts of data for SMB shares is setting up a physical TrueNAS server and joining it to your existing domain.
That way you can set up ZFS replication (if you have another TrueNAS box for backups), take advantage of dataset snapshots, and expose those snapshots to users as shadow copies.
The second option, if you still want a Windows VM serving the SMB shares, is to set up a new LUN on your SAN and attach it via iSCSI inside the Windows VM. That way you can still snapshot and replicate with your current SAN, assuming it has that functionality. You’ll also have to make sure whichever backup solution you use ignores the iSCSI drive.
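If it helps, here is a minimal sketch of that in-guest approach from an elevated PowerShell session: connect the LUN over iSCSI, bring the disk online, and share a folder on it. The portal address, target name filter, drive letter, and share path are placeholders, not anything specific to your SAN.

```powershell
# Start the iSCSI initiator service and keep it running across reboots
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the SAN's portal and connect to the target (placeholder address and filter)
New-IscsiTargetPortal -TargetPortalAddress 10.0.50.10
Get-IscsiTarget | Where-Object NodeAddress -like '*fileshare*' |
    Connect-IscsiTarget -IsPersistent $true

# Bring the new LUN online, initialize it, and format it as the data volume
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'FileShares'

# Share a folder on the new volume; lock access down with NTFS ACLs separately
New-Item -Path 'E:\Shares\Department' -ItemType Directory -Force
New-SmbShare -Name 'Department' -Path 'E:\Shares\Department' -FullAccess 'Everyone'
```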
From an end-user perspective, you have a personal network share and a network share for your team/group/department. Without any business process determining how that should be structured, users will do whatever they want. It is virtually impossible to impose discipline; the moment you try, local shares get used instead.
Modern setups might be prettier, but for users they haven’t changed vastly since NT4. I have colleagues who struggle with version control, let alone structure!
As for setting it up, if you are a Windows shop, do whatever the MCSE books say; it’s been a while since I’ve read them, though.
I would most likely use Windows File Services to manage access to a share. If you plan to use a NAS/SAN, I would set up a connection via NFS/iSCSI from the server to the storage and then set up the file share the way you are used to doing it. You can back up the volume/LUN separately if needed, or maybe look at using Windows DFS for replication.
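If you do look at DFS, a rough sketch of a domain-based namespace with DFS Replication between two file servers could look like the following. The domain, server names, group name, and paths are made-up placeholders, and it assumes the DFS Namespaces and DFS Replication roles (and their PowerShell modules) are installed.

```powershell
# Domain-based DFS namespace pointing at a share on the primary file server
New-DfsnRoot -Path '\\corp.example.com\Files' -TargetPath '\\FS01\Files' -Type DomainV2

# Folder in the namespace that users actually browse to
New-DfsnFolder -Path '\\corp.example.com\Files\Department' -TargetPath '\\FS01\Department'

# Replication group keeping FS01 and FS02 in sync for that folder
New-DfsReplicationGroup -GroupName 'Files-RG'
New-DfsReplicatedFolder -GroupName 'Files-RG' -FolderName 'Department'
Add-DfsrMember -GroupName 'Files-RG' -ComputerName 'FS01','FS02'
Add-DfsrConnection -GroupName 'Files-RG' -SourceComputerName 'FS01' -DestinationComputerName 'FS02'

# Tell each member where the replicated content lives on disk
Set-DfsrMembership -GroupName 'Files-RG' -FolderName 'Department' -ComputerName 'FS01' -ContentPath 'E:\Department' -PrimaryMember $true
Set-DfsrMembership -GroupName 'Files-RG' -FolderName 'Department' -ComputerName 'FS02' -ContentPath 'E:\Department'
```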
If using a NAS/SAN, is it recommended to attach the disk directly to the Windows server (NFS/iSCSI), or to attach it to the hypervisor and create a virtual disk for the Windows server that sits on that storage?
or
Create the shared folder on the storage and just have Group Policy map a drive to it? Then the Windows Active Directory server is not handling the traffic, just permissions and mappings?
I wouldn’t attach it to the hypervisor. I would attach the iSCSI directly in the VM. That way your data isn’t going through too many hops, it’s less complicated, and you have full control from your iSCSI target.
If it goes through the hypervisor, then you have to manage the dataset on your iSCSI target and then also manage the hypervisor’s datastore. That doesn’t seem very efficient to me.
If you configure it through the hypervisor, you gain some security benefits, since there is no storage configuration on the guest OS, and MPIO is easier to manage because you don’t have to stretch multiple storage networks up to the VM.
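To make that trade-off concrete, the in-guest route means enabling and configuring MPIO inside the VM itself, roughly like the sketch below. The two portal addresses stand in for your redundant storage paths; adjust everything for your environment.

```powershell
# Install the Multipath I/O feature inside the guest (reboot required)
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM claim iSCSI devices for multipathing
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register both storage-network portals and connect with multipath enabled
New-IscsiTargetPortal -TargetPortalAddress 10.0.50.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.60.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```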
When it comes to shares, sharing directly from the NAS is ideal, assuming the NAS has at least the same network access (bandwidth, etc.) as your virtual hosts do. (This ignores any dedicated network that may exist specifically for your virtualisation storage targets, such as iSCSI or NFS.)
If your NAS lacks Active Directory awareness, then mounting the storage to a VM (rather than sharing from the NAS) becomes the better option, simply for the better ACL management and such. But once again, outside of some sort of network disparity, that would be the only reason I would share via a Windows VM.
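For example, once the volume is mounted in the VM you can manage access with plain NTFS ACLs against AD groups; a quick sketch, with made-up domain, group, and path names:

```powershell
# NTFS ACLs on the mounted volume, applied against AD groups
# (disable inheritance first so permissive defaults don't leak through)
icacls 'E:\Shares\Department' /inheritance:d
icacls 'E:\Shares\Department' /grant 'CORP\Department-Users:(OI)(CI)M'
icacls 'E:\Shares\Department' /grant 'CORP\File-Admins:(OI)(CI)F'
icacls 'E:\Shares\Department' /remove 'BUILTIN\Users'
```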
Sharing through a VM creates additional overhead, and there’s no good way around that. So unless there are reasons that make the VM a better place to share from than the storage itself, stick with sharing directly from the source.
While I agree with what you said, I’d like to point out that a VM that shares out vdisks may be a better solution in terms of security (not performance, as you said).
E.g. you may want to use the physical NAS/SAN solely for hypervisor storage, in an unrouted VLAN that only the hypervisors can access. You may want to keep other machines from reaching this storage for security reasons (each machine with an interface in that network increases the attack surface), and you don’t really want to route NAS traffic across your firewalls either. Then it makes sense to have a virtual filer VM in each VLAN, where the vfilers store the data in vdisks and only the hypervisor has direct access to the NAS.
Ultimately it depends on what the use cases are and how much filer traffic is expected.
This is not just a theoretical idea. I have been using such a setup for years and it works fine.
That’s a good point. I never thought of it that way at the time.
On a bit of a side note, this actually made me chuckle a little, as it’s just another reason why “It depends” is such a common answer to tech questions.