Thinking of spinning up a Win10 VM on XCP-ng. Has anyone with experience tried GPU passthrough on a Dell R720?
I know it is supported, but I have never done any testing with it.
I run ESXi, but I would assume similar things apply to XCP-ng. One thing you will want to do is look up other people's experiences with the specific GPU you want to pass through. Some GPUs, and sometimes particular Windows drivers for them, will cause issues due to how the device initializes on receiving power. I've seen certain driver versions be the problem, which part of me assumes is a way Nvidia pushes people to buy their more expensive virtualization GPUs.
I haven’t specifically done GPU Passthrough, but I have done virtual GPU allocation using NVIDIA Grid cards on Citrix XenServer on Dell R720s. Any specific questions about it?
I would like to spin up a VM running Win10 and try gaming over a 10GbE LAN connection with a Radeon Pro WX GPU. Aside from that, I also want to do some CFD modeling with open-source software.
I don’t have a Dell R720 but I run dual E5-2687W so it’s close enough a platform I guess. Afaik:
As long as the CPUs have VT-d, you're fine for virtualization on Sandy Bridge-EP.
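A quick way to sanity-check this from a Linux host; a sketch, and `has_virt_flags` is just a helper name I made up:

```shell
# Exit 0 if the cpuinfo text on stdin advertises VT-x (vmx) or AMD-V (svm).
# Note VT-d itself is a chipset/BIOS feature on top of this; on the host,
# `dmesg | grep -i -e DMAR -e IOMMU` shows whether it actually came up.
has_virt_flags() {
  grep -q -w -E 'vmx|svm'
}

has_virt_flags < /proc/cpuinfo && echo "virtualization extensions present" \
                               || echo "no vmx/svm flag found"
```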
IOMMU groups are the thing to watch for on the motherboard side. I'd assume popular enterprise-grade virtualization servers (like this Dell, which is Xen-certified) are fine in that regard, or at least work well with the ACS override patch for the Linux kernel. E.g. on my ASUS Z9PE-D8 WS the grouping is incredibly granular, and that's only prosumer hardware.
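It's worth eyeballing those groups yourself before buying anything. A sketch (the function name, and taking the sysfs root as a parameter, are my own choices so it can be tried against a fake tree):

```shell
# Print every IOMMU group under the given sysfs root and the PCI
# addresses of the devices in it. On a real host, call it with
# /sys/kernel/iommu_groups; no groups at all usually means the IOMMU
# is disabled in the BIOS or intel_iommu=on is missing from the
# kernel command line.
list_iommu_groups() {
  root=$1
  for g in "$root"/*; do
    [ -d "$g" ] || continue
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      [ -e "$d" ] || continue
      echo "  ${d##*/}"   # pipe through `lspci -nns <addr>` for readable names
    done
  done
}

list_iommu_groups /sys/kernel/iommu_groups
```

The GPU needs to sit in a group free of unrelated devices (its own HDMI audio function in the same group is fine and expected).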
AMD GPUs won't give you much trouble regarding Windows drivers once vfio-pci passthrough is working fine at the host (kernel/hypervisor) level. (Note: with Nvidia cards you need to "trick" the guest into not knowing it's a VM, otherwise the driver won't load at all, because passthrough is a paid feature on their pro cards.)
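For reference, the usual Nvidia workaround on a libvirt/QEMU host is hiding the hypervisor in the guest's `<features>` block, roughly like this (the `vendor_id` value is arbitrary, anything other than the KVM default):

```xml
<!-- Guest domain XML: hide KVM from the Nvidia Windows driver
     (the classic "error 43" workaround). -->
<features>
  <hyperv>
    <vendor_id state='on' value='whatever123'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```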
Some AMD GPUs, however, suffer from a "reset" problem that can totally defeat the purpose of vfio passthrough (you have to reboot the host as well every time you want to reboot the guest in order to keep using the GPU), so google that first, e.g. "GPU NAME vfio passthrough reset bug".
For the very specific case of GPU passthrough, I don't know how savvy you are with Linux, but regardless I'd really suggest Arch Linux as a host/hypervisor because of its awesome documentation, which is always up to date and has always worked for me. Ubuntu and Fedora are inherently good platforms too, but you may find yourself googling much more at every step of the way because "reasons" (like AppArmor, or RHEL-specific stuff).
systemd is basically all you need to tame libvirt/QEMU into a bunch of services, for clean and efficient system administration.
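As a concrete example, the stock libvirt-guests service already handles guest lifecycle across host reboots; a sketch of its config (the path varies by distro, e.g. /etc/sysconfig/libvirt-guests on Fedora, /etc/default/libvirt-guests on Debian/Ubuntu, and you then `systemctl enable libvirtd libvirt-guests`):

```shell
# libvirt-guests configuration (shell-style key=value file)
ON_BOOT=start        # autostart guests that were running at shutdown
ON_SHUTDOWN=suspend  # managed-save guests instead of killing them
SHUTDOWN_TIMEOUT=120 # seconds to wait on a guest before giving up
```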
libvirt has everything out of the box (NAT for an internal virtual network, or macvtap for exposing guests on the LAN, which is my preferred way, and it's easy to configure). You can give the host a macvtap as well for its own LAN access and gateway.
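For the record, a macvtap NIC in the guest XML looks roughly like this ("eno1" is a placeholder for your host NIC; note that with plain bridge-mode macvtap the host and the guest can't talk to each other directly, which is why giving the host its own macvtap helps):

```xml
<!-- libvirt guest <interface>: "direct" (macvtap) attachment in bridge
     mode, so the guest shows up on the physical LAN with its own MAC. -->
<interface type='direct'>
  <source dev='eno1' mode='bridge'/>
  <model type='virtio'/>
</interface>
```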
Regardless of which way you go, here’s by far the most comprehensive article on this topic afaik: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
I have a bunch of personal notes on this. It's a bit of a mess because I'm still drafting it and will only later turn it into a nice-n-clean wiki-style doc, but if you need help now I can share the links (it's just a bunch of markdown docs on GitHub, covering Arch, Ubuntu 18.04 and Fedora 28).
Feel free to ask more if you have questions.
On my Dell R720, I have tried Nvidia Quadro and Grid cards, and XCP-ng Center will not connect to the server ("connection refused"). Once I remove the GPU, it connects just fine. XCP-ng would not see any NIC if I installed XCP-ng with the GPU already in the server. Very frustrating so far, and there isn't much in other forums showing any solution other than "don't run a GPU in the server".
I'm also curious. I want to put a Gigabyte GTX 970 G1 in my R720, but it feels like the PCB of the card is too wide? It's just not fitting right. I wonder if you need a blower-style card for a good fit?