How to pass through both of my graphics cards to two different virtual machines on a single PC

Hello everyone,

I need some help to achieve my goal. What I want to do is create two qemu-kvm virtual machines on Ubuntu 20.04, assigning to each of them some of the resources of a single PC. I want to distribute the PC's resources to the VMs like this:

a) VM 1: one Kinect 2; one monitor; graphics card n.1 (NVIDIA GeForce RTX 2080 Ti)

b) VM 2: the other Kinect 2; one monitor; graphics card n.2 (Intel UHD Graphics 630)

The problem I have is that the Intel UHD Graphics 630 drives the monitor that I use to manage Ubuntu 20.04, and when it is captured by the VM, both the host and the guest OS appear frozen.

Now I'm going to explain how I have configured the passthrough of both graphics cards, the RTX 2080 Ti and the iGPU. These are their PCI addresses:

root@ziomario-z390aoruspro:/home/ziomario# lspci -nn | grep 01:00.

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] [10de:1e04] (rev a1)

01:00.1 Audio device [0403]: NVIDIA Corporation TU102 High Definition Audio Controller [10de:10f7] (rev a1)

01:00.2 USB controller [0c03]: NVIDIA Corporation TU102 USB 3.1 Host Controller [10de:1ad6] (rev a1)

01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU102 USB Type-C UCSI Controller [10de:1ad7] (rev a1)

root@ziomario-z390aoruspro:/home/ziomario# lspci -nn | grep 00:02.0

00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Desktop 9 Series) [8086:3e98] (rev 02)
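As a sanity check, the ids= list used in vfio.conf below can be regenerated straight from this lspci output. A small sketch (`make_vfio_ids` is just a name I made up, not a standard tool):

```shell
#!/bin/sh
# Extract every [vendor:device] pair from `lspci -nn` output and join them
# with commas, ready for the ids= option of vfio-pci.
make_vfio_ids() {
    grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -sd, -
}

# Example with the two VGA controllers from above:
printf '%s\n' \
  '01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] [10de:1e04] (rev a1)' \
  '00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Desktop 9 Series) [8086:3e98] (rev 02)' \
  | make_vfio_ids
# prints 10de:1e04,8086:3e98
```

Feeding it every function to be passed through (including the NVIDIA audio/USB functions) would produce the full comma-separated list that goes after ids=.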

  1. /etc/modprobe.d/blacklist-nouveau.conf

blacklist nouveau

options nouveau modeset=0

blacklist nvidia
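After a reboot, it's worth confirming that the blacklist actually took effect, i.e. that nouveau no longer appears in lsmod. A sketch (`module_loaded` is just an illustrative helper name):

```shell
#!/bin/sh
# Return success if the named module appears in `lsmod`-style output.
module_loaded() {
    # $1: lsmod output, $2: module name (first column of lsmod)
    printf '%s\n' "$1" | awk -v m="$2" '$1 == m { found = 1 } END { exit !found }'
}

if module_loaded "$(lsmod)" nouveau; then
    echo "nouveau is still loaded: the blacklist did not take effect"
else
    echo "nouveau is not loaded"
fi
```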

  2. /etc/modprobe.d/vfio.conf

options vfio-pci ids=10de:1e04,10de:10f7,8086:3e98

options kvm ignore_msrs=1 report_ignored_msrs=0

options kvm-intel nested=y ept=y

softdep nouveau pre: vfio-pci

softdep nvidia pre: vfio-pci
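Once these options are in place (and the initramfs rebuilt, as in the later steps), `lspci -nnk` should report `vfio-pci` as the "Kernel driver in use" for each passed-through function. That check can be scripted; a sketch, where `driver_of` is my own illustrative name:

```shell
#!/bin/sh
# Print the "Kernel driver in use" for one device in `lspci -nnk` output.
driver_of() {
    # $1: full lspci -nnk output, $2: PCI address (e.g. 01:00.0)
    printf '%s\n' "$1" | awk -v dev="$2" '
        $1 == dev                        { found = 1; next }
        found && /Kernel driver in use:/ { print $NF; exit }
        /^[0-9a-f]/                      { found = 0 }
    '
}

# On the real machine: driver_of "$(lspci -nnk)" 01:00.0
sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [10de:1e04]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau'
driver_of "$sample" 01:00.0
# prints vfio-pci
```

Anything other than vfio-pci on a device from the ids= list means the host driver grabbed the card before vfio-pci could.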

  3. /etc/modprobe.d/nvidia.conf

softdep nouveau pre: vfio-pci

softdep nvidia pre: vfio-pci

softdep nvidia* pre: vfio-pci

softdep xhci_hcd pre: vfio-pci

softdep snd_hda_intel pre: vfio-pci

softdep xhci_hcd pre: vfio-pci

softdep i2c_nvidia_gpu pre: vfio-pci

  4. GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on" (in /etc/default/grub)

  5. update-initramfs -u

  6. update-grub
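After these two commands and a reboot, it's worth confirming that intel_iommu=on really made it onto the running kernel's command line (/proc/cmdline). A sketch:

```shell
#!/bin/sh
# Succeed only if intel_iommu=on is present on the given kernel command line.
has_iommu() {
    # $1: the kernel command line (normally the contents of /proc/cmdline)
    case " $1 " in
        *" intel_iommu=on "*) return 0 ;;
        *)                    return 1 ;;
    esac
}

if has_iommu "$(cat /proc/cmdline)"; then
    echo "intel_iommu=on is active"
else
    echo "intel_iommu=on is missing: check GRUB_CMDLINE_LINUX_DEFAULT and reboot"
fi
```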

What's missing? I don't know why, but when I launch the VM that has the iGPU assigned, the screen goes black and I'm not able to use the Windows 10 VM that should have captured the graphics card. There must be some kind of conflict that I'm not able to understand.
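One thing worth checking for the black-screen problem: whether the UHD 630 is the boot GPU, i.e. the card whose sysfs boot_vga flag is 1. If it is, binding it to vfio-pci takes the host's console framebuffer away, which would match the symptoms. A sketch that scans sysfs (`find_boot_vga` is just an illustrative name):

```shell
#!/bin/sh
# Print the PCI address of the GPU the firmware used for boot output.
find_boot_vga() {
    # $1: a devices directory, normally /sys/bus/pci/devices
    for f in "$1"/*/boot_vga; do
        [ -e "$f" ] || continue
        if [ "$(cat "$f")" = "1" ]; then
            basename "$(dirname "$f")"
        fi
    done
}

find_boot_vga /sys/bus/pci/devices
```

If this prints 0000:00:02.0, the iGPU is the primary/boot display, and the host loses its console the moment the guest captures it.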

PS: I have also enabled automatic login on Windows 10, because I thought that as soon as I ran the Windows 10 VM it would capture the iGPU, so it would need to log in by itself, since I would lose the chance to see what was happening on the Ubuntu 20.04 host. What happened instead is that I can't use either the host OS or the guest OS.