Making the best use of my not-yet-built build… (NAS, AI-server, xcp-ng host)

Planning a server build (all parts actually decided, just waiting for them to arrive) based around the Jonsbo N5 case (which takes 12x 3.5” and 4x 2.5” drives) and the ASUS ProArt X870E-Creator mainboard.
The other parts:
CPU: AMD Ryzen 9 7900 (to make the most of the two x16 PCIe slots on the mainboard)
RAM: 2x64GB DDR5 (board takes 256GB, so I will have two slots free for future expansion)
GPU: ASUS Prime GeForce RTX 5060 Ti 16GB (decent for AI stuff with a good price/performance ratio; since I have two x16 PCIe slots, I can add another card in the future)
PSU: Corsair RM1000x
HBA: LSI/Avago 9400-16i with four (4x) breakout cables to attach the SATA disks on the backplanes in the N5.
OS disk: (some) NVMe (not ordered or decided on yet), maybe another NVMe for local storage (I still have to check the mainboard manual to see which slots hang off the CPU vs. the chipset).

So what’s the best option to use this build as a combined NAS, AI server and xcp-ng host (in that order of importance, though to begin with I’ll just be testing with some small left-over HDDs)?
A: xcp-ng as OS, a virtual Linux server for AI (with GPU passthrough), and a virtual NAS (with passthrough of the hard drives)
B: Linux as OS, local services for file sharing (CIFS, NFS), locally installed GPU drivers and Ollama + OpenWebUI (and direct access with Python) for doing local AI
C: Proxmox as "main OS", virtual TrueNAS for the NAS OS, an LXC container for Ollama, and a virtual machine for xcp-ng
D: a NAS OS as the base (whatever it ends up being, probably TrueNAS), then possibly virtual xcp-ng and local Ollama with GPU drivers?
E: any other suggestion ?
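
To make option B concrete, here's a rough sketch on a Debian/Ubuntu base. The share path, share name, and subnet are placeholder assumptions, not recommendations; the Ollama install script and the Open WebUI container command are the upstream-documented ones:

```shell
# Option B sketch (Debian/Ubuntu assumed; /tank/share and 192.168.1.0/24 are placeholders)

# File sharing: Samba (CIFS) and NFS
sudo apt install -y samba nfs-kernel-server
sudo mkdir -p /tank/share

# Samba: append a minimal share definition (tighten users/permissions later)
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[share]
   path = /tank/share
   read only = no
EOF
sudo systemctl restart smbd

# NFS: export the same path to the LAN
echo '/tank/share 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# Local AI: official Ollama install script, plus Open WebUI as a container
curl -fsSL https://ollama.com/install.sh | sh
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

With this layout Ollama listens on 127.0.0.1:11434, so the same endpoint also covers the "direct access with Python" part (plain HTTP or the `ollama` Python package).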

I’m not a fan of virtualizing TrueNAS even though you’d be passing the drives through. I think it adds too much complexity.

If it were me, I’d run an XCP-ng host, set up ZFS as the storage repository, and then set up VMs based on your needs: Ollama in one VM for AI, then CIFS and NFS on another VM.
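
For reference, the ZFS-as-SR route on an XCP-ng host looks roughly like this. The disk IDs, pool layout, and mountpoint are example values; verify against the current XCP-ng storage docs before trusting it with real data:

```shell
# Sketch: ZFS as a local storage repository on XCP-ng
yum install -y zfs          # ZFS is packaged in the XCP-ng repos
modprobe zfs

# Example: RAIDZ2 pool across six disks behind the HBA, referenced by stable IDs
# (ata-DISK1..6 are placeholders for your actual /dev/disk/by-id entries)
zpool create -o ashift=12 -m /mnt/tank tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Register the pool as an SR so VM disks live on ZFS
xe sr-create host-uuid=$(xe host-list --minimal) \
  type=zfs content-type=user name-label="Local ZFS SR" \
  device-config:location=/mnt/tank
```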

Or make TrueNAS the main OS, run Ollama and Open WebUI in containers, and pass the GPU through.
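
That route could look something like this on TrueNAS SCALE (recent versions run Docker-style apps directly). The dataset paths and `<truenas-ip>` are placeholders, and `--gpus all` assumes the NVIDIA container runtime is available on the box:

```shell
# Sketch: Ollama + Open WebUI as containers next to TrueNAS
docker run -d --gpus all \
  -p 11434:11434 \
  -v /mnt/tank/apps/ollama:/root/.ollama \
  --name ollama ollama/ollama

docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<truenas-ip>:11434 \
  -v /mnt/tank/apps/open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```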