Hi everyone, here is some background first:
I manage IT for a small group of hotels, and we have been using VDI since 2012 as the primary workstation solution for the company's employees. My team and I have run Hyper-V in a failover cluster for the past decade, with LeafOS thin clients as the user workstations, each connecting over RDP to that employee's own Windows 10 VM. On sites with up to two hosts we attach the storage array via SAS cables; with more hosts we use iSCSI.
We would still prefer to stay with VDI, since the investment cost of replacing everything at once would be too high, but we need to redesign the whole thing because performance has been lacking.
We also have our own datacenter in one of the properties, and all properties are interconnected via private 10 Gb fiber, so it's essentially one big LAN across the hotels.
I understand that a user connecting to their own VM may never be quite as fast and responsive as a physical machine, but we are looking at very capable hardware that could plausibly match the performance and user experience of a physical PC. Video performance has also been very poor, and as Teams/Zoom calls have become routine, we need better support for graphics-intensive workloads, so we would also like to look into an NVIDIA GPU for VDI, for example the A16.
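To put rough numbers on the A16 idea before talking to vendors: here is a minimal Python sketch, assuming the published A16 layout of 4 GPUs x 16 GB of frame buffer per board, and that every vGPU on a given physical GPU must use the same profile size (both are NVIDIA's rules as I understand them; verify against the current vGPU docs, and note that vGPU also needs per-user NVIDIA licensing on top of the card).

```python
# Back-of-the-envelope vGPU density for an NVIDIA A16 board.
# Assumption (check NVIDIA's current vGPU documentation): one A16 board
# is 4 physical GPUs with 16 GB of frame buffer each, and all vGPUs on
# a given physical GPU must share the same profile size.
A16_GPUS_PER_BOARD = 4
A16_GB_PER_GPU = 16

def desktops_per_board(profile_gb: int, boards: int = 1) -> int:
    """Desktops that fit, given a frame-buffer profile size in GB."""
    return (A16_GB_PER_GPU // profile_gb) * A16_GPUS_PER_BOARD * boards

for profile_gb in (1, 2, 4):
    print(f"{profile_gb} GB profile -> {desktops_per_board(profile_gb)} desktops per board")
# 1 GB -> 64, 2 GB -> 32, 4 GB -> 16
```

For Teams/Zoom offload a 2 GB profile is a common starting point, which would cap one board at roughly 32 desktops.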
So, since everything is on the table, I would appreciate some pointers on how to design the requirements so I can reach out to vendors and get more accurate quotes. However, VDI knowledge is very lacking among almost all vendors in my country, so it would be better if we know exactly what we are asking for, set it up ourselves, and just buy the hardware and licenses.
As far as the hardware goes:
-Physical hosts: Any Dell/HPE/Lenovo servers with AMD EPYC CPUs (the more cores the better, right? See the sizing sketch after this list), around 512 GB of RAM, 10 Gb+ networking (possibly 25 or 50 GbE NICs), an NVIDIA A16 GPU, and SSD drives
-Storage: I am starting to consider ditching the storage array (which is a single point of failure) for internal drives in the hosts, clustered with some software-defined method such as Storage Spaces Direct (S2D) or an equivalent (see the capacity sketch after this list)
-Hypervisor platform: As I mentioned, we are used to Hyper-V, but I am not sure how much it will limit the end result, since we need more advanced features, especially vGPU on VMs. Seeing that Tom is also a big fan and supporter of XCP-ng with XO, and of Proxmox, I would put those on the table too. I know Citrix and VMware are probably the most complete products right now, but reading all the bad comments about their practices kinda puts me off.
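On the "more cores the better" question from the hosts bullet: once every desktop needs a vGPU seat, the GPU tends to become the binding constraint before CPU or RAM does. Here is a rough sizing sketch; the per-desktop figures (4 vCPU / 8 GB for a Windows 10 office VM) and the 4:1 vCPU-to-core overcommit are my assumptions, to be replaced with measured numbers from the proof of concept:

```python
# Hypothetical per-host density estimate; every constant below is an
# assumption to be replaced with measured numbers from the PoC host.
def vms_per_host(cores: int, ram_gb: int, gpu_seats: int,
                 vm_vcpu: int = 4, vm_ram_gb: int = 8,
                 overcommit: float = 4.0, host_reserve: float = 0.10) -> int:
    cpu_limit = int(cores * overcommit) // vm_vcpu             # vCPU budget
    ram_limit = int(ram_gb * (1 - host_reserve)) // vm_ram_gb  # RAM minus hypervisor reserve
    return min(cpu_limit, ram_limit, gpu_seats)                # tightest constraint wins

# Example: 64-core EPYC, 512 GB RAM, one A16 with 2 GB profiles (32 seats)
print(vms_per_host(cores=64, ram_gb=512, gpu_seats=32))  # -> 32 (GPU-bound)
```

In that example the host is GPU-bound at 32 desktops, so a second A16 (or smaller profiles) buys more seats than extra cores would; and whatever density you settle on should leave N+1 headroom so one host can fail or be drained for patching.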
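And on the storage bullet: moving from a shared array to host-internal drives changes the capacity math, because mirror-based software-defined storage (S2D's three-way mirror, or a replica-3 Ceph pool under Proxmox, for example) keeps multiple full copies of every block. A minimal sketch, assuming three copies and ~10% of the pool kept free for rebuilds:

```python
# Usable capacity under mirror/replica resiliency; the inputs below are
# example assumptions, not a recommendation for a specific drive count.
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              copies: int = 3, slack: float = 0.10) -> float:
    """Raw pool divided by the copy count, minus slack reserved for rebuilds."""
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / copies * (1 - slack)

# Example: 3 hosts, each with 8 x 3.84 TB NVMe, three-way mirror
raw = 3 * 8 * 3.84
print(f"{raw:.1f} TB raw -> {usable_tb(3, 8, 3.84):.1f} TB usable")
# ~92.2 TB raw -> ~27.6 TB usable
```

Also worth noting for the two-host sites: a two-node S2D cluster needs a witness (file share or cloud) for quorum, and a two-way mirror only tolerates a single failure, so the small sites may justify different resiliency settings than the datacenter.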
Cost is not a big problem here; I have the support of the company's board members, but it would be best if we first get a single host as a proof of concept to try things out, and then deploy the rest.
Let me know if you need any other information that would help. Sorry for the long post, but this is an exciting (and expensive, lol) project, so I want to nail it and make everyone happy (both the users, who get solid workstations, and the IT team, who get new toys to play with).