TrueNAS: Deploying in a VM or on bare metal?

Hello,
I have a server with ECC RAM and multiple disks, meaning it’s equipped like a NAS.

However, I intend to install the Proxmox virtualization platform and run TrueNAS Core in a VM.
All relevant disks (e.g. WD Red) will be passed through to this VM.
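For the passthrough I’m planning something along these lines (VM ID and disk IDs are placeholders, not my actual values):

```bash
# Pass whole physical disks to the TrueNAS VM by their stable by-id names
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE2
```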

Then in TrueNAS Core I will configure these drives for ZFS.

My question is:
How can I then utilize the storage provided by TrueNAS Core most efficiently, i.e. without the latency of the network stack between the KVM hypervisor and the VM, e.g. as Proxmox storage for ISOs, images, etc.?
Or is this setup suboptimal in terms of storage performance, and should one instead install TrueNAS Scale on the server and deploy the VMs with the hypervisor included in it (also KVM)?
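For illustration, what I have in mind for attaching the TrueNAS storage back to Proxmox is something like this (storage name, IP and export path are placeholders):

```bash
# Register an NFS export from the TrueNAS VM as Proxmox storage
pvesm add nfs truenas-nfs \
    --server 192.168.1.50 \
    --export /mnt/tank/proxmox \
    --content iso,images
```

The traffic would then only cross the virtual bridge, but it still goes through the network stack, hence my question.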

THX

I don’t recommend virtualizing TrueNAS as it adds more complexity and potentially more issues. Also, TrueNAS as a hypervisor is not going to be as full-featured as Proxmox.

I’m aware that the TrueNAS hypervisor is not a fair comparison to Proxmox.
On the other hand, I would only need to deploy a single Win10/11 VM.
All other stuff would run in containers / Kubernetes.

However, I have seen that TrueNAS Scale is more like an appliance, meaning CLI usage is not intended, even though the OS is Debian-based.

My “ultimate” goal is to configure diskless boot, but for that I would need full OS access without restrictions.

The best part about open source is that if there is no appliance available to fit your needs, you can roll your own custom solution. :slight_smile:

It isn’t ideal to run TrueNAS in a VM; however, if you only have one server and you need VMs, then you don’t have much of a choice.
TrueNAS offers easy AD joining and share management, which makes it convenient as a file server, so I wouldn’t hesitate to use TrueNAS in a VM. You just don’t get the self-healing, bit-rot protection and all that stuff from ZFS.

The main issue I see is: “when I reboot my Proxmox server, what happens to the attached storage from TrueNAS?” I’ve tried a similar test in VMware and it doesn’t handle it that well. You could likely create a custom startup script for Proxmox that starts the VM, waits a set amount of time, and then tells Proxmox to rescan the storage to attach any shared storage from the TrueNAS VM, but this is a bit roundabout when Proxmox has ZFS support built in.
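Roughly sketched, such a script could look like this (VM ID, storage name and wait time are made up for the example):

```bash
#!/bin/bash
# Start the TrueNAS VM, give its pools and shares time to come up,
# then re-enable the Proxmox storage entry that depends on it.
qm start 100                       # TrueNAS VM
sleep 90                           # let ZFS import and NFS settle
pvesm set truenas-nfs --disable 0  # re-activate the dependent storage
pvesm status                       # force a storage status scan
```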

That being said, you’re unlikely to notice a large performance hit with spinning hard drives if you use it this way. Ways to reduce performance loss from the hypervisor layer would be core pinning, statically assigning memory, and passing the storage controller/HBA for the drives through via PCIe.
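For example, something along these lines (VM ID, core list and PCI address are placeholders; --affinity requires a reasonably recent Proxmox release):

```bash
qm set 100 --balloon 0              # statically assign memory, no ballooning
qm set 100 --affinity 0-3           # pin the VM to specific host cores
qm set 100 --hostpci0 0000:01:00.0  # pass the HBA through via PCIe
```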

In Proxmox you can set the startup order and a startup delay for each VM in the UI. I start the TrueNAS VM first and the VMs that need access to it with a 60-second delay. I’ve never had any issues with a VM not being able to access the storage provided by TrueNAS after a reboot.
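The CLI equivalent would be something like this (VM IDs are examples):

```bash
# TrueNAS starts first; up=60 makes Proxmox wait 60 seconds
# before starting the next VM in the order.
qm set 100 --startup order=1,up=60
qm set 101 --startup order=2
```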

I had a great working setup in Proxmox with physical disks passed through (not the controller) and running TrueNAS… I did manage to pass the physical disks through to TrueNAS in XCP-ng, but the drive IDs are different and TrueNAS does not see the ZFS pool I had on this 16TB mirror.

Not sure if there is any other way in TrueNAS/XCP-ng to correct this, but otherwise I’ll be forced to go back to Proxmox.
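From a shell I could probably at least check whether ZFS still finds the pool under the new device names, something like this (pool name is just an example):

```bash
zpool import -d /dev/disk/by-id        # list pools visible under by-id paths
zpool import -d /dev/disk/by-id tank   # import by scanning that directory
```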

Btw… the data on this ZFS pool isn’t life-threateningly critical, as I have an external backup as well, so I’m willing to experiment with TrueNAS in a VM as long as it keeps telling me there are zero errors.

That is if I can get it to work in xcp-ng

Then why did you switch to XCP-NG when everything worked perfectly?

I think Proxmox is the better choice for home users and small to medium businesses. The only advantage of XCP-NG that I see in such environments is the delta backups, which in Proxmox are only available in combination with the Proxmox Backup Server. Many of the other “advantages” that Tom mentions in his videos, such as the separate management VM, only pay off in very large environments with many servers and VMs, and are in fact rather a disadvantage in smaller environments or on a single server.

Proxmox also has a lot of nice features that accommodate home users and small businesses, like VDI functionality out of the box with the integrated SPICE proxy or better passthrough support, to name two of them.

I had an SSD crash because my system halted when I tried to pass something else through… This corrupted the SSD Proxmox was installed on, so I decided to give XCP-ng another go… I got it working now, btw… The ZFS pool is back online… I passed my NVMe SLOG/cache partitions through the right way and ZFS liked that…
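For the record, “the right way” was roughly this (pool and device names are placeholders):

```bash
zpool add tank log   /dev/disk/by-id/nvme-EXAMPLE-part1   # SLOG
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part2   # L2ARC
```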

And I would like to do more with XCP-ng since, well… Tom seems to cover a lot about it, is an excellent source of information, and I like the videos. I feel most of my problems with XCP-ng are just down to not doing things the right way. I’m still learning hehe.
:slight_smile:

Trying out more things and learning are of course perfectly valid reasons. :slight_smile:

Actually, I think the optimal setup when weighing NAS against hypervisor usage is Proxmox with ZFS configured on the host.
The key function of a NAS, providing network shares, can be covered in Proxmox with an LXC that offers Samba, NFS, etc.
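A minimal sketch of that approach (CT ID, dataset path and share name are assumptions):

```bash
# Bind-mount a host ZFS dataset into the container
pct set 101 -mp0 /tank/share,mp=/srv/share

# Inside the container, a minimal Samba share definition
# in /etc/samba/smb.conf:
#   [share]
#       path = /srv/share
#       read only = no
```

With an unprivileged container you additionally have to map UIDs/GIDs for the bind mount, which is a bit of extra work.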

In my homelab I concluded that hypervisor functionality is more critical than NAS functionality.

So far I haven’t come across any function provided by TrueNAS that couldn’t be rebuilt with open source software (except for the fancy WebUI).

Other VMs accessing TrueNAS wasn’t what I was referencing; it’s TrueNAS as a VM providing storage to its own hypervisor host. What you described will work perfectly well as long as the timing works out so TrueNAS has booted up before the other VMs :stuck_out_tongue:

Ah ok. I never used it like that. All my VMs are running on a local ZFS pool built with SSDs. In addition to that I have 8 spinning disks attached to an LSI controller which I pass through to the TrueNAS VM.