Choosing a hypervisor

Hi all,

I got a used Dell R720 with dual Xeon CPUs, 128GB of RAM, and 2x 146GB, 2x 300GB, and 2x 600GB SAS hard disks.

I would like to use this as my VM server in my lab. Primarily I need it to run the UniFi controller, but I’d like to add some other VMs later on.

I’d like to put the disks in mirrored pairs: the 146GB pair for the hypervisor install, the 300GB pair for the VMs, and the 600GB pair as a local file share.

So do you have any recommendations on which hypervisor to use? As it’s a lab it has to be free, so maybe Proxmox, XCP-ng, or ESXi?

What do you think is best and easiest to maintain?

Thank you.

I prefer XCP-NG which works great and can support that configuration.


If I use the 146GB HDD for the hypervisor install and the swap file eats 128GB, will that leave enough space for the install?

VMs themselves will be then on the 300GB HDD.

Whatever you pick, keep in mind that if you later try to migrate the VMs to another hypervisor, it will be a pain.

Personally I run Proxmox with a quad-port NIC, which was tricky to set up. The VMs have been running on it without any issues. I’d say I haven’t found an easy/obvious way to create an internal network on Proxmox, for use in labs say; it looks easier on XCP-ng.
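For what it’s worth, an isolated internal network on Proxmox can be built as a Linux bridge with no physical ports attached, so the VMs on it only see each other. A minimal sketch of `/etc/network/interfaces` (the bridge name `vmbr1` and the subnet are assumptions):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
# VMs attached to vmbr1 share a lab-only network with no uplink;
# add NAT rules on the host if they need outbound access.
```

After an `ifreload -a` (or a reboot), `vmbr1` shows up as a selectable bridge when adding a network device to a VM.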

As I also use VMware Workstation, I run ESXi on another box. Strangely, the networking there looks more complete, but I haven’t spent much time creating an internal network for, again say, a lab.

I thought my knowledge of networking was OK, so it could just be my unfamiliarity with networking within these products.

There are some limitations on the ESXi free version, but I don’t think that will be an issue. I must admit I like the ease of moving VMware VMs; with Proxmox a VM basically stays where it is.

Try all three and see how it goes.

I’m using 120GB drives for my XCP-ng lab, and they show at least 70GB available on each machine. The other storage for the VMs is on a FreeNAS machine with an NFS share; I did this so I could mess with HA on the virtual machines.

I recommend ESXi as well, especially if you plan on deploying virtual network or storage appliances for testing and learning. Also, it is the industry standard, and if you plan to work with larger IT teams or support larger setups, this is likely what they would be using.

I have a very similar setup, with 2 Dell R720s. I tried to get XCP-ng to do what I wanted, but it was too much of a job for me. I am going to go back to ESXi and vCenter, but I will only be able to run version 6.5u3, since that is the highest version of VMware the R720 will support. The EOL for 6.5 general support has been extended to October 2022. This is a quote from the VMware site:

Today we are extending the general support period for vSphere 6.5 by 11 months, to October 15, 2022. Originally, vSphere 6.5 was scheduled to reach EoGS (End of General Support) on November 15, 2021. The original EoTG (End of Technical Guidance) date of November 15, 2023 still applies for vSphere 6.5. Posted Mar 26, 2021

I figure that I will purchase VMware 7 with a 1-year support contract, downgrade ESXi 7 to 6.5u3, but keep the vCenter appliance at 7. That will let me do what I want, and it will run as long as I want to run it. At some point in the future, I’ll get another pair of more up-to-date Dell servers that will handle a more current version of VMware.


VMUG Advantage or VMware Essentials are cheap options for running licensed VMware products.

Dell is using VMware as a cash cow to sell new servers; they have a lot of debt to pay off.

OP, I would recommend ESXi 6.7 with the free license, as it is the best hypervisor out there and the easiest and most intuitive to configure.

Swap those 2x 300GB HDDs and 2x 600GB HDDs for 4x cheap 500GB SSDs and mount them in RAID 10 for 1TB of storage that will be fast and quite resilient.

Networking and using it after that is a piece of cake for what you need to do.
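If those four SSDs end up under a Linux-based hypervisor such as Proxmox rather than behind the R720’s PERC controller, the RAID 10 layout above could be sketched with mdadm. The device names are assumptions, and creating the array destroys any data on those disks:

```
# Create a 4-disk RAID 10 array (~1TB usable from 4x 500GB)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync, then put a filesystem on it
cat /proc/mdstat
mkfs.ext4 /dev/md0

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

Note that the free ESXi has no software RAID of its own, so on ESXi the same layout would have to come from the PERC hardware controller instead.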

All 3 hypervisors should be able to meet your needs without issue. ESXi may not allow the 600GB drive to be shared directly and may require you to pass the volume through to a VM to be served up across your network.

Since this is a lab, I would go with a hypervisor I wasn’t familiar with, so I’d end up learning something and expanding my skill set. Totally preferential, though.

+1 for VMUG Advantage. You can use most VMware products in a homelab environment for a year for under $200.


My vote is for XCP-ng with Xen Orchestra all day long. Easy to set up, operate, and maintain. XCP-ng is robust, has enterprise-grade features, and paid support is an option too.

We use ESXi at my employer’s, but as the business owner won’t upgrade hardware, we are becoming locked into obsolete software due to VMware dropping hardware support. We have also had problems with how some backup products deal with ESXi snapshots (Unitrends, looking at you).

In contrast, Proxmox has worked pretty much flawlessly for me at home, and I have only my Exchange server left to migrate off ESXi. I was also able to upgrade a Windows 10 VM to Windows 11 under Proxmox without issue. The one thing I would like to see Proxmox provide is an easy way to renumber a VM; if you have multiple Proxmox servers, make sure you use a per-server VM numbering scheme.
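Renumbering a Proxmox VM can be done by hand, though it’s clunky since the VMID is baked into the config filename and the disk volume names. A hedged sketch for a QEMU VM with an LVM-thin disk (the VMIDs 100 and 250, and the volume group `pve`, are assumptions):

```
# Stop the VM first
qm stop 100

# Rename the disk volume to match the new VMID
lvrename pve vm-100-disk-0 vm-250-disk-0

# Move the config to the new VMID and fix the disk references inside it
mv /etc/pve/qemu-server/100.conf /etc/pve/qemu-server/250.conf
sed -i 's/vm-100-disk/vm-250-disk/g' /etc/pve/qemu-server/250.conf

qm start 250
```

Backups, snapshots, and replication jobs still referencing the old VMID would need updating separately, which is why a built-in renumber feature would be welcome.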

XCP-ng has no hardware restrictions


Thanks! But so far Proxmox is meeting all my personal needs, and I am unable to convince the business owner to ditch ESXi 5.5. :frowning:

XCP-ng does have hardware restrictions. Hardware Compatibility List (HCL) | XCP-ng documentation

They say that it may run on unlisted hardware, but it isn’t supported and mileage may vary.

The big question being: which hypervisor doesn’t work? Everyone has their favorite for their own reasons, but are there products that are just not ready? Maybe that would help someone narrow down their choices.

I’ve decided that XCP-NG is my choice for some of the reasons that Tom has spoken about in videos. Next would probably be Proxmox for me due to the large amount of community support (same for XCP-NG and Truenas).

Also, that said, I do have one VM running on TrueNAS, in production, but it doesn’t do anything really important. My XCP-ng rollout is waiting for money (new servers needed).

Windows 11 requires TPM 2.0. Will any of the hypervisors or emulators (e.g. QEMU/KVM) support Windows 11 VMs on hardware that only supports TPM 1.2, or is this a lost cause?

I would guess that unless Microsoft changes its policy for VDI licenses, it is a lost cause without hacking some changes into the installer. Those changes have so far been to replace some files with Windows 10 files, but doing this means you are always just one MS update away from a broken system.

Also, considering that this is still a beta, a lot of things can change between now and the next version. TPM has been a big complaint and they may decide to listen. Or there may be no upgrade path and you will only be able to get Windows 11 with a new PC. It’s too early to really speculate too much.
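On the QEMU/KVM side specifically, the host’s physical TPM shouldn’t matter: swtpm can emulate a TPM 2.0 device entirely in software and hand it to the guest. A hedged sketch (the state directory and disk image name are assumptions):

```
# Start a software TPM 2.0 emulator with its state in a local directory
mkdir -p /var/lib/swtpm/win11
swtpm socket --tpm2 \
    --tpmstate dir=/var/lib/swtpm/win11 \
    --ctrl type=unixio,path=/var/lib/swtpm/win11/swtpm.sock &

# Point QEMU at the emulator and expose it as a TPM TIS device
qemu-system-x86_64 -machine q35,accel=kvm -m 8G \
    -chardev socket,id=chrtpm,path=/var/lib/swtpm/win11/swtpm.sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -drive file=win11.qcow2,if=virtio
```

Keep in mind Windows 11 also expects UEFI with Secure Boot, so the VM would need OVMF firmware as well; the emulated TPM alone only covers one of the installer’s checks.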