Hey Folks,
I wondered if someone who’s smarter than I am on this topic could help. I’m looking to set up XCP-ng on one MS-A2 initially, with shared drive space in an HL15 with 15 mechanical drives, plus 2x 4TB NVMe drives connected via OCuLink-to-U.2 NVMe adapters. The PCIe 3.0 motherboard seems like it will support this configuration, and I was considering running the NVMes in a RAID-0 setup.
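For context on the RAID-0 idea: if the box holding those two NVMes ends up running ZFS (e.g. TrueNAS), a striped pool is the ZFS equivalent. A minimal sketch, assuming hypothetical device names and pool name (check yours with `lsblk`):

```shell
# Hypothetical: stripe the two U.2 NVMe drives into a single pool.
# Device names (nvme0n1/nvme1n1) and the pool name are assumptions.
# Note: a stripe (RAID-0) has NO redundancy; losing either drive loses the pool.
zpool create -o ashift=12 nvmepool /dev/nvme0n1 /dev/nvme1n1
zfs set compression=lz4 nvmepool
zpool status nvmepool
```

Worth weighing: with only two drives, a mirror (`zpool create nvmepool mirror ...`) gives up half the capacity but survives a drive failure, which matters more once VMs live on it.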
The storage server’s 15 mechanical drives will hold backup storage from the VMs, but also a cloud backup service that I run for friends and family. I was thinking the NVMes would be for VM storage, and here’s my reasoning: with the HL15’s PCIe 3.0 speed cap, the MS-A2 with the 2x 4TB NVMe drives should make the VMs more performant. My question is: can this storage be ‘replicated’ to the NVMes in the HL15? I’m essentially looking to add the HL15 as a replication partner of the MS-A2 for this VM storage.
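On the replication question: Xen Orchestra has a Continuous Replication backup job type that periodically copies running VMs from one storage repository (SR) to another, which sounds like what you're describing. The HL15 side just needs to be visible to XCP-ng as an SR, e.g. over NFS. A sketch of attaching it, where the IP, export path, and SR name are all assumptions for illustration:

```shell
# Hypothetical: attach an NFS export on the HL15 as an SR on the XCP-ng
# host, so it can serve as the Continuous Replication target in XO.
# Server IP, export path, and name-label below are placeholders.
xe sr-create type=nfs shared=true content-type=user \
  name-label="hl15-replica" \
  device-config:server=10.10.10.2 \
  device-config:serverpath=/mnt/nvmepool/xcp
```

The actual replication schedule is then configured in the Xen Orchestra UI (Backup → Continuous Replication), pointing the job at that SR; the copies on the HL15 can be booted directly if the MS-A2 dies.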
Am I thinking about this all wrong? None of this will matter for the few light Linux VMs that will run on there, but there are a couple of Windows VMs that have historically run a little slow on my aging ESXi infrastructure. I’d like to juice the performance of the VMs themselves.
I should add: I was planning to run whatever OS comes on the HL15, unless removing it and putting something like TrueNAS on there would be preferred. (I have a forum post bookmarked linking to a video Tom made about XCP-ng best practices, and I will reference that too as I set up.) I will have either 10GbE or 25GbE DAC cables connected from the MS-A2 directly to the HL15 (if that’s even possible), or via a 25GbE SFP28 switch in between.
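A direct DAC link with no switch does work for a point-to-point storage network; each end just needs a static IP in its own small subnet. On the XCP-ng side that could look roughly like this, where the PIF UUID and addresses are placeholders:

```shell
# Hypothetical: give the DAC-connected NIC on the XCP-ng host a static IP
# for a point-to-point storage network to the HL15. Find the real PIF UUID
# first; the UUID and 10.10.10.0/30 addressing below are assumptions.
xe pif-list params=uuid,device,MAC
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
  IP=10.10.10.1 netmask=255.255.255.252
```

The HL15 end would get 10.10.10.2 in whatever OS it runs, and the NFS/iSCSI traffic then stays off your regular LAN.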
This will be my first foray into the HL15, XCP-ng, and the MS-A2. The HL15, the MS-A2, and the drives are already purchased. Once I have all of the VMs migrated off of ESXi, I will repurpose that slower machine and add it to the cluster, maybe just to run the XO VM and to provide somewhere to move VMs during maintenance. I plan to add a second MS-A2 with the same configuration later this year, and have that three-node cluster with shared storage take over the day-to-day.
Open to any ideas anyone has here, and thank you for your input!