Recommended Server(s) for XCP-NG Production

I am looking to upgrade a single Microsoft Hyper-V host and a single VMware host to a new redundant XCP-ng setup in the future.

I know I need 3 or more hosts to have HA mode with a shared storage network.

There are obviously budget limitations, so I need to be wise with the configuration and specs of the servers and not overdo it, but we really want redundancy/resilience along with performance, and flexible upgradeability to be set up for future growth.

What would you all recommend going with these days in terms of Hardware?

Our current primary server is a single Dell EMC server with a Xeon Gold 6132 CPU:

  • 14 cores / 28 logical processors
  • 128 GB of RAM
  • 2 TB of RAID SSD (SQL)
  • 4 TB of RAID HDD (file servers)

Running:
Windows SQL Server - ERP/MRP on-prem business software
Windows SQL Server - CAD design software, with basic file sharing for these files (15 MB average CAD file size)
Other servers include: basic AD, file sharing, print servers, etc.

I then have a separate VMware host for some other smaller/older servers hosting things like network license key servers, print servers, and legacy accounting historical data.

The move to a new setup would consolidate the Hyper-V and VMware hosted servers onto one hypervisor type (XCP-ng), along with adding approximately 10 virtual desktops for remote worker access to basic ERP and 2D CAD data, nothing graphically/CPU/memory intensive.

My preference would be having:

  • 3 identical XCP-ng servers, set up for HA failover
  • 2 NAS disk/image storage devices in replication/HA mode
  • Ideally, a separate 25G storage network and 10G primary network interfaces.

Is this overkill?

I am currently using Synology RS-3621 NAS(s) for both server & desktop backup, using Hyper Backup to replicate to both an offsite office and a cloud storage provider. So I am more familiar with Synology than TrueNAS, but not opposed to it.

Any server configurations you all recommend that strike a good balance of performance, value, and reliability/redundancy?

I think the compute nodes are where you want to be, which allows you to expand as your company grows. One concern I have is your backend storage. What do you plan on using for that? You’ll need something with good IOPS. Something like a TrueNAS box (I know it’s not preferred based on your post, but… just thinking out loud) with a ZFS pool of SSDs for your SQL servers. Please buy appropriately high-endurance drives for SQL. :slight_smile:

Then create a spinning-rust pool for your shares and other miscellaneous VMs. I would set up the shares on TrueNAS itself; that way you can take snapshots of the datasets, have shadow copies, do ZFS replication, and so on.
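To make that concrete, here is a minimal sketch of the layout I mean: a mirrored SSD pool for SQL, a RAIDZ2 spinning-rust pool for shares and misc VMs, per-share datasets, and one snapshot replicated to a second box. Pool names, device paths, and the backup-nas host are all made up for illustration, and on a real TrueNAS you would normally do this through the web UI/middleware rather than the raw CLI:

```python
#!/usr/bin/env python3
"""Rough sketch of a two-pool ZFS layout. Pool names, device paths, and the
backup host are hypothetical -- adjust to your hardware. Requires root and
the standard zpool/zfs tools."""
import subprocess

def run(*args):
    # Print each command before running it, and stop on the first failure.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Mirrored SSD pool for the SQL server VM disks (use high-endurance SSDs).
run("zpool", "create", "ssd-sql", "mirror", "/dev/sda", "/dev/sdb")

# RAIDZ2 spinning-rust pool for file shares and miscellaneous VMs.
run("zpool", "create", "rust", "raidz2",
    "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh")

# One dataset per share so each gets its own snapshot/replication schedule.
run("zfs", "create", "-o", "compression=lz4", "rust/shares")
run("zfs", "create", "rust/shares/cad")
run("zfs", "create", "rust/misc-vms")

# Example recursive snapshot, replicated to a second box (hostname is made up).
run("zfs", "snapshot", "-r", "rust/shares@nightly-2024-01-01")
subprocess.run(
    "zfs send -R rust/shares@nightly-2024-01-01 "
    "| ssh backup-nas zfs recv -F backup/shares",
    shell=True, check=True)
```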

The reason I like ZFS is data integrity. It is a copy-on-write filesystem, and you can add a read cache (the ARC in RAM, plus an optional L2ARC device) and a separate log device (SLOG) to help with synchronous write performance. There is a whole plethora of documentation out there on how solid and resilient it is. I’m not trying to talk you out of Synology; I think Synology is a great product and has great features. But when it comes to data, I have full confidence in ZFS.
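Those cache devices are just extra vdevs you bolt onto an existing pool. A hedged sketch, reusing the hypothetical `rust` pool and made-up NVMe device names from the example above:

```python
#!/usr/bin/env python3
"""Sketch of adding L2ARC and SLOG devices to the spinning-rust pool from the
previous example. Pool name and NVMe paths are hypothetical; the SLOG only
helps synchronous writes (e.g. NFS/iSCSI traffic for VM storage)."""
import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Mirrored SLOG (separate intent log), so one device failure cannot lose
# in-flight synchronous writes.
run("zpool", "add", "rust", "log", "mirror", "/dev/nvme0n1", "/dev/nvme1n1")

# L2ARC read cache -- a single device is fine; losing it only costs cache.
run("zpool", "add", "rust", "cache", "/dev/nvme2n1")

# Verify the resulting layout.
run("zpool", "status", "rust")
```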

Obviously, you could probably accomplish the same thing on Synology: separate RAID groups for the SSDs and spinning rust, set up shares, snapshots, iSCSI targets, and so on.

Just a few ideas :slight_smile:


Synology makes some nice models, such as the SA3400D, that have dual controllers sharing a single disk shelf, which saves you from having to buy twice as many drives.

Of course, with most Synology RackStation models you could also buy a pair and put them in HA mode. Synology works fine with XCP-ng, but do note that if you use iSCSI, the SR will be thick provisioned.
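For reference, attaching one of those iSCSI LUNs to the pool ends up being a single `xe sr-create` call from dom0 on the pool master. A rough sketch only; the target IP, IQN, and SCSI ID below are placeholders you would get from `xe sr-probe` and the Synology's iSCSI manager:

```python
#!/usr/bin/env python3
"""Sketch of attaching an iSCSI LUN (e.g. from a Synology) as a shared SR on
XCP-ng. Target IP, IQN, and SCSIid are placeholders. The resulting lvmoiscsi
SR is thick provisioned, as noted above."""
import subprocess

def xe(*args):
    # Run an xe command on dom0 and return its output.
    out = subprocess.run(["xe", *args], check=True,
                         capture_output=True, text=True).stdout.strip()
    print(out)
    return out

# Probe first to discover IQNs/LUNs (uncomment to use; probing without the
# IQN/SCSIid deliberately errors out with a listing of what it found):
# xe("sr-probe", "type=lvmoiscsi", "device-config:target=192.168.50.10")

xe("sr-create",
   "name-label=Synology iSCSI",
   "shared=true",
   "type=lvmoiscsi",
   "device-config:target=192.168.50.10",
   "device-config:targetIQN=iqn.2000-01.com.synology:nas.target-1",
   "device-config:SCSIid=36001405aaaaaaaaaaaaaaaaaaaaaaaaa")
```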

Three SuperMicro servers; choose based on processor and form factor. I bought three 1U servers with Xeon Silver (10c/20t) to start, knowing I could swap the processors once they got less expensive if I needed more cores.

Buy the biggest RAM modules you can afford; you’ll need them in pairs, and this leaves open slots for expansion. I put a pair of 64 GB modules in mine, leaving a lot of slots available. Also remember that you’ll need a pair per processor, so on a dual-processor board that’s 4 modules. Yes, you can run with a single module per processor, but you lose some speed. At the time, 64 GB modules had a good price-per-capacity ratio; the 32 GB modules were not enough cheaper that four of them would be an advantage price-wise.

The 10 Gbps cards are no problem, but I haven’t shopped their 25 Gbps cards. Needing both might push you into a 2U to 4U server chassis for the ability to fit more cards. Cooling would be better in the bigger chassis, so maybe that’s not an issue.

If I had infinite money, I’d put a flash-based TrueNAS together; price it directly from iXsystems, they have pretty competitive pricing. Also note that SuperMicro gets pretty good pricing on drives; they ship a lot of them in pre-built servers, so they buy in large volume, and they generally only deal in enterprise-grade stuff, which is what you want. Just in case you want to build your own.

You might also think about how you will handle the heartbeat for HA: maybe the same storage server, maybe a separate slow server. The HA heartbeat does not need to be fast or big, and you can use the same storage to share the ISO files for the XCP-ng systems (again, nothing fast or huge needed). My lab had no problems with the heartbeat on a gigabit connection to the storage, so I wouldn’t worry too much about this one aspect.
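For what it’s worth, once the NFS exports exist, the whole heartbeat + ISO arrangement is only a couple of `xe` commands from dom0 on the pool master. A hedged sketch; the 192.168.50.20 server and export paths are made up:

```python
#!/usr/bin/env python3
"""Sketch of using one slow NFS box for both the HA heartbeat SR and an ISO
library on XCP-ng. The NFS server address and export paths are hypothetical."""
import subprocess

def xe(*args):
    # Run an xe command on dom0 and return its output (sr-create prints the
    # UUID of the new SR).
    out = subprocess.run(["xe", *args], check=True,
                         capture_output=True, text=True).stdout.strip()
    print(out)
    return out

# Small shared NFS SR to hold the HA statefile/heartbeat.
hb_sr = xe("sr-create",
           "name-label=HA heartbeat",
           "shared=true",
           "type=nfs",
           "device-config:server=192.168.50.20",
           "device-config:serverpath=/export/heartbeat")

# ISO library on the same slow box -- speed really doesn't matter here.
xe("sr-create",
   "name-label=ISO library",
   "type=iso",
   "content-type=iso",
   "device-config:location=192.168.50.20:/export/isos")

# Enable HA against the heartbeat SR.
xe("pool-ha-enable", f"heartbeat-sr-uuids={hb_sr}")
```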
