Hello!
I just started working at a small IT business that takes care of other small businesses' IT. We deploy basic HP / Lenovo / Dell servers with Windows Server, mostly for a few applications and for file sharing and management, so all the files are mapped (and users are trained, of course) to be saved in the server's shared folders, with all the permissions configured. Nothing new up to here, but the owner takes those servers, virtualizes them using XenServer (an old version), and runs Windows Server virtualized. If he used the hypervisor to take a snapshot of the whole image, I guess that would be fine, since in case of failure it could easily be deployed back. Instead, he does a Windows VSS backup and saves it on a NAS on the network. I can see this is a solution the owner of this business found some years ago and still repeats, but it is not efficient.
I’m used to working with local network VM infrastructure management (using oVirt) and network infrastructure, with beefier servers and multiple VMs (mostly Linux) running databases and applications. Now I’m thinking about how to improve our approach to deploying servers at clients, so I have a few ideas and want to hear some more opinions before rushing in and configuring stuff.
Please feel free to give your opinion on what is good, bad, or could be done in a more efficient way. The only constraint is that we can’t increase the cost of our server deployment strategy much; remember, most of our clients are small businesses (5 to 20 “computers” on the network).
The most critical problems I see right now, which I think could be implemented in better ways:
WTS - He uses Windows Remote Desktop to access the client’s server directly. I don’t know how secure that is; I’ve been kind of “off” Windows servers for the last few years. (For a few clients he does have a jump box / bastion host, but he still uses RDP to reach it, and then RDP again to the server.)
Shared folder structure - Right now he installs Windows Server (virtualized) and creates the folders to be shared (like manager, financial, operational, and others) on the same partition as the OS, and just shares those folders (using AD at the 5+ employee businesses). Then he uses Cobian to back up the shared folders to a NAS, and does a VSS backup to the NAS as well. This VSS backup becomes huge (a 500 GB+ backup on HDD is a pain to restore) because it includes all the shared folders. Would splitting Windows and the shared folders onto two partitions create any real benefit beyond making the VSS backup smaller (I believe the main win here is fast system recovery)? We also have a cloud backup of the shared folders’ content to Dropbox, OneDrive, or Google Drive (the client chooses), but I’m already planning to migrate this to Backblaze (becoming resellers).
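Just to illustrate why the folder-level backup is so much cheaper than the full VSS image: an incremental mirror only has to touch files that changed since the last run, which is roughly what Cobian does. A minimal sketch in Python of that idea (the folder names and NAS path would of course be per-client; this is not meant to replace the real backup tool):

```python
import shutil
from pathlib import Path

def mirror_incremental(src: Path, dst: Path) -> int:
    """Copy only new or changed files from src into dst; returns files copied."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Cheap change check: skip files whose size and mtime are unchanged.
        if target.exists():
            s, t = f.stat(), target.stat()
            if s.st_size == t.st_size and int(s.st_mtime) <= int(t.st_mtime):
                continue
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy2 preserves timestamps for the next run
        copied += 1
    return copied

# Example (hypothetical paths): nightly job mirroring the shares to the NAS.
# mirror_incremental(Path(r"D:\Shares"), Path(r"\\NAS\backup\shares"))
```

The second run over unchanged data copies nothing, so nightly runs stay fast and small, while the VSS image always drags the full 500 GB along with the OS.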
Virtualization - Right now this virtualization approach he created is not very useful (at least to my understanding), just one more layer of software. I’m thinking about ways to improve it. I don’t know if it’s feasible yet, but maybe building an OpenStack private cloud and connecting all the client VMs to it (after migrating the clients’ hypervisors to CentOS 8 + KVM). I would really enjoy a centralized way to view the status of all the VMs, and maybe implement a cloud backup of the whole VM to Backblaze. Like I said, it’s just an idea; I don’t know if it’s actually doable or any good.
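Even before committing to OpenStack or oVirt, a centralized status view could start as a small script on each CentOS + KVM host that reads `virsh list --all` and reports the result somewhere central. A minimal sketch of the parsing side, assuming libvirt's standard table output (the reporting part is left out because how it phones home is exactly what I'm undecided on):

```python
import subprocess

def parse_virsh_list(output: str) -> dict:
    """Parse `virsh list --all` table output into {vm_name: state}."""
    states = {}
    # Skip the two header lines ("Id  Name  State" and the dashed separator).
    for line in output.strip().splitlines()[2:]:
        parts = line.split(None, 2)  # state may be two words, e.g. "shut off"
        if len(parts) == 3:
            _vm_id, name, state = parts
            states[name] = state.strip()
    return states

def vm_states() -> dict:
    # Would run locally on each client hypervisor.
    out = subprocess.run(["virsh", "list", "--all"],
                         capture_output=True, text=True, check=True).stdout
    return parse_virsh_list(out)
```

Something like this per host, pushed to a dashboard or even just a daily email, would give the “view all VMs” visibility without a full private-cloud build-out.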
I’m all ears now.
Thanks.