ZFS server in an LXC with SATA passthrough (Proxmox)

When virtualising, I understand it’s best to give the guest (TrueNAS, OMV, etc.) full control of the SATA controller via PCIe passthrough so it has direct control of the disks.

I know I can pass devices through to an LXC in a similar fashion, but I don’t really understand the nuances of the differences. In a VM the full PCI device gets passed through and hidden from the host; I’m not sure that’s true in an LXC.
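For reference, the VM route I’m talking about is ordinary PCIe passthrough of the controller, roughly like this on the Proxmox host (the PCI address and VM ID below are placeholders, and IOMMU has to be enabled first):

```bash
# Find the SATA/HBA controller's PCI address
lspci -nn | grep -iE 'sata|sas|raid'

# Hand the whole controller to VM 100 (address is just an example)
qm set 100 -hostpci0 0000:03:00.0
```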

I want to move away from OMV/TrueNAS and use Houston. Ideally I want to do this in a lightweight Houston LXC container.

It’s best to give ZFS full access to the disks, not necessarily the guest. From your examples, I’m guessing you’d like to set up a NAS. With LXC, I let Proxmox be in charge of ZFS and bind mount a directory for the guest to share.
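Roughly like this (pool, dataset and container ID below are just examples):

```bash
# On the Proxmox host: ZFS stays under the host's control
zfs create tank/media

# Bind mount the dataset into container 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media
```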

Containers don’t replace the host OS, so you aren’t passing anything directly through to the guest “OS”, because the guest OS really is the host OS. The host OS is more than just a little important in a container environment.

I don’t use any of those products for my containers, but after a brief look at what Houston is, I think Houston would replace Proxmox. Not sure why you couldn’t just configure the disks in Proxmox and then bind mount them into whatever guest system you use for sharing. Like tvcvt said.

Setting up ZFS on Proxmox was what I was initially thinking of doing, but the reason I liked TrueNAS, Houston, etc. is that they have a simple UI for managing ZFS, which Proxmox doesn’t.

I was hoping to avoid a full-on VM and perhaps just let the LXC guest run ZFS. I can’t imagine it would be safe to set up ZFS on Proxmox and then try to manage it from an LXC, but maybe I’m wrong there?

Sure, I get that part, but you can pass devices to an LXC; I’m just not sure how isolated they are, given a container isn’t isolated in the same way as a VM. And I really don’t want any conflict between the guest and the host when it comes to ZFS, as that would spell disaster.
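From what I’ve read, “passing” a disk to an LXC really just means exposing the host’s device node to the container, e.g. by appending something like this to the container config on the host (CT ID and device are examples); the host kernel still owns the controller and driver, which is exactly what makes me unsure:

```bash
# Expose /dev/sda (block device major 8, minor 0) to container 101
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: b 8:0 rwm
lxc.mount.entry: /dev/sda dev/sda none bind,optional,create=file
EOF
```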

Houston isn’t really a Proxmox replacement (although I suppose it could perform that role). I plan to just use it as my “NAS OS” to manage ZFS and shares, in place of OMV/TrueNAS.

It’s just a question of whether I can safely manage ZFS from an LXC. As @tvcvt suggested, I could host ZFS on Proxmox and then bind mount into the container, but I want Houston’s ZFS management UI to control ZFS.

A VM seems to be the safer, tried-and-tested way to go.

If you have a little extra RAM to throw at this and you want to get this project done and behind you quickly, then the VM route wins. Doing all this in an LXC isn’t a simple, clean design and might be a little more brittle than you expect. With that said, people do this all the time and we all learn by experience, so go for it.

I hear ya! You could certainly try passing through an HBA, but I’d consider it experimental. Definitely something you’d want to test extensively before putting it into production.

One thing to think about: day-to-day ZFS management really isn’t terribly complicated from the command line. There’s a little nuance to creating the pool in the first place (setting the appropriate ashift and encryption, for example), but those are pretty much one-off things you could look up, write down, and reference on the very rare occasion they come up. Mostly you’ll be making datasets as you need them with zfs create. Set up sanoid/syncoid and you’ve got a strong snapshot and backup regimen.
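For instance, the one-off and day-to-day pieces look roughly like this (pool name, disks and datasets below are just examples):

```bash
# One-off: mirrored pool with 4K sectors and native encryption
zpool create -o ashift=12 \
  -O encryption=on -O keyformat=passphrase -O compression=lz4 \
  tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Day to day: carve out datasets as you need them
zfs create tank/media
zfs create -o quota=500G tank/backups
```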

Together these things all seem complicated, but it really isn’t that bad if you approach it chunk by chunk (plus you’ll learn a lot more about the underlying tech).
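If it helps, the sanoid/syncoid piece mentioned above is mostly a small policy file plus a replication command, something like this (retention numbers, datasets and hosts are just examples):

```bash
# Snapshot policy for sanoid (run from its own systemd timer or cron job)
cat >> /etc/sanoid/sanoid.conf <<'EOF'
[tank/media]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
EOF

# Replicate the snapshots to another box with syncoid
syncoid tank/media root@backuphost:backup/media
```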

Thanks @liquidjoe / @tvcvt … passing through the SATA controller has never been a problem before. I’ve run OMV in a VM with an HBA / SATA controller passed through without any issues. OMV just has a very limited ZFS interface. I used to use TrueNAS but it’s overkill for my needs.

I have just discovered that ZFS supports delegation. I think this might allow me to set up ZFS on Proxmox, then delegate control to an LXC for administration.

https://illumos.org/books/zfs-admin/gbchv.html

I’ve never done this before, but it would likely be a lot more stable given the controller, disks and ZFS are on the host, and only the management is delegated. No idea if Houston would be happy with this … more investigation required.
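Something like this is what I have in mind (user, pool and dataset names are made up; I haven’t tried it yet):

```bash
# On the Proxmox host: delegate specific ZFS permissions on one dataset
# to the unprivileged admin user the container will use
zfs allow -u zfsadmin create,destroy,snapshot,rollback,send,receive,mount tank/delegated

# Show what has been delegated
zfs allow tank/delegated
```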

Yes, passthrough is rock solid in KVM. I’m not quite as confident in LXC.

One delegation gotcha you should check out: it’s nontrivial for a delegate to be allowed to mount a new dataset. That can really rain on your parade unless you can figure it out.

Is that in an unprivileged container (i.e. requires user mapping) or is there more to it?

More to it, unfortunately. The mount privilege is tricky to get working even on bare metal. There are security implications to letting a non-root user mount file systems (what if they mount over /etc/shadow or /sbin?), so you’ll have to figure out what hoops to jump through.
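Concretely, the sticking point is something like this (names are examples): delegating the permission is easy, but on Linux the actual mount(2) call still needs root/CAP_SYS_ADMIN, so the delegate can create a dataset while the mount step still falls back to root.

```bash
# root on the host delegates create+mount on one dataset
zfs allow -u zfsadmin create,mount tank/delegated

# the delegate can create a child dataset...
sudo -u zfsadmin zfs create tank/delegated/newds

# ...but mounting it still ends up being done by root
zfs mount tank/delegated/newds
```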

Okay, did a bit more reading and this is a rabbit hole I don’t have time for … a VM it is for now.

I haven’t seen a video from Tom on ZFS delegation but it looks like an interesting topic for containerisation.