Looking for SSD Partitioning advice for new Docker Server Build

I’m looking to build a new server to run my home lab/“home prod” Docker containers (HomeAssistant, Plex, Unifi-Controller, LibreNMS, Photoprism). I haven’t fully decided whether I’m going with Debian on bare metal or something like Proxmox or XCP-NG. As it stands right now, everything I use can be done in a Docker container, but I’d love the ability to spin up a VM if desired.

I’m a bit stumped on how to actually lay out the storage to properly leverage the hardware I have, and was looking for some advice…

My motherboard can accept one M.2 PCIe Gen4 x4 SSD, and I have a 2TB Samsung 990 EVO PRO to use for that slot. I also have a few other SATA 2.5" SSDs available, ranging from 64GB to 1TB in size (none of which are new)… My motherboard can also accept a Gen3 x2 SSD (which seems to be a rare breed)… I could run a Gen4 x4 (256GB) or a Gen3 x4 (500GB) drive in that slot, but it would be limited to Gen3 x2… meh.

The question is, should I use one of the old/junker SATA SSDs as my boot drive + root partition for the host system, and keep my Docker container and/or VM storage location (e.g. /var/lib/docker) on the 2TB SSD?
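From what I gather, pointing Docker at the 2TB drive would be as simple as either mounting it at /var/lib/docker or using the data-root setting in daemon.json. A rough sketch of what I have in mind for the latter, assuming the drive is already formatted and mounted at /mnt/nvme (the path is just a placeholder):

```
# tell Docker to keep images/containers/volumes on the NVMe mount
sudo mkdir -p /mnt/nvme/docker /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "data-root": "/mnt/nvme/docker"
}
EOF
sudo systemctl restart docker
docker info | grep "Docker Root Dir"   # verify it moved
```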

What about the data mount locations I’m mapping in (such as a few hundred gigs of Plex metadata), would those be best served on the M.2 SSD as well? The obvious answer seems to be yes, but if I do this I’d probably want to partition the M.2 SSD, which may hinder performance if I use it for /var/lib/docker (I would no longer be able to use it as a raw block device, for instance).
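If I did split it, I assume something like this is all it would take (assuming the drive shows up as /dev/nvme0n1; sizes and mount points below are just placeholders):

```
# WARNING: destroys everything on the drive -- double-check the device name with lsblk
sudo sgdisk --zap-all /dev/nvme0n1
sudo sgdisk -n 1:0:+1500G -t 1:8300 -c 1:docker  /dev/nvme0n1   # ~1.5TB for Docker
sudo sgdisk -n 2:0:0      -t 2:8300 -c 2:appdata /dev/nvme0n1   # remainder for mapped data
sudo mkfs.ext4 -L docker  /dev/nvme0n1p1
sudo mkfs.ext4 -L appdata /dev/nvme0n1p2
sudo mkdir -p /var/lib/docker /srv/appdata
echo 'LABEL=docker   /var/lib/docker  ext4  defaults,noatime  0 2' | sudo tee -a /etc/fstab
echo 'LABEL=appdata  /srv/appdata     ext4  defaults,noatime  0 2' | sudo tee -a /etc/fstab
sudo mount -a
```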

If I go with Debian on bare metal, should I just put “/” directly on that high-end SSD (inclusive of /var/lib/docker) and not have to worry about installing a clunker SSD? (I’m fearful of things like logs eating through the precious TBW on my expensive M.2 SSD, though, which is part of the reason I’m leery of Proxmox in general.)
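My rough plan for taming the logs would be to cap journald on the host plus Docker’s per-container logs, something like this (sizes are just examples):

```
# cap the systemd journal on the host
sudo mkdir -p /etc/systemd/journald.conf.d
cat <<'EOF' | sudo tee /etc/systemd/journald.conf.d/size.conf
[Journal]
SystemMaxUse=500M
EOF
sudo systemctl restart systemd-journald

# cap per-container logs (merge with any existing daemon.json settings)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```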

Hardware is:

Asus P12R-E/10G-2T Motherboard
Xeon E-2388G w/128GB ECC RAM
M.2 (PCIe Gen4 x4): Samsung 990 EVO PRO 2TB
M.2 (PCIe Gen3 x2): [Have a 256GB Gen4 x4 or a 500GB Gen3 x4 drive I could use]
(Random SATA 2.5" SSDs I could use: 64GB-1TB in size)
(Random, used, 1TB-3TB SATA or SAS HDDs)

Not sure I have a solution for you; however, I run Proxmox and having VMs is super handy. I have a couple of Docker applications running, but Proxmox also comes with LXC containers, which you might be able to leverage instead. I’m not sure what the precise difference is, but they look kind of similar.
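If you do land on Proxmox and want to poke at LXC, spinning up a container is quick once you’ve pulled a template; something like this (the container ID, template version, and storage names are assumptions, adjust for your setup):

```
pveam update                                               # refresh the template catalog
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname test-ct --memory 2048 --cores 2 \
    --rootfs local-lvm:8 --unprivileged 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```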

Having Proxmox makes it super easy to back up VMs, which is probably something you’d want too.
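The CLI side of that is just vzdump if you ever want to script it; a quick sketch, assuming guest ID 101 and a storage named local:

```
vzdump 101 --mode snapshot --storage local --compress zstd   # one-off snapshot backup
```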

I have a similar setup: root is on the M.2 PCIe drive, with some data on a SATA SSD and backups on an HDD. Most of my containers are on the M.2 because the containers themselves are tiny, but I did put a few on the SSD by way of soft links. Those containers were kind of log-heavy (two UniFi controllers and Home Assistant). I don’t use Docker for my containers, but I’m sure you can simply link the folder in /var/lib/docker/some_container just the same. None of my containers hold data, so my Jellyfin data can be on any storage medium the host sees, and I pass the mounts through to my container. My Jellyfin container is currently on the M.2, while the data is on the SSD and backups are on the spinning disk.
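For Docker the equivalent of that pass-through is bind mounts, so the container image/layers can sit on the M.2 while the data lives wherever; a rough sketch (the paths are examples, the image is the official Jellyfin one):

```
# /config holds app config + metadata (fast SSD); /media is bulk storage (HDD, read-only)
docker run -d --name jellyfin \
  -v /srv/appdata/jellyfin:/config \
  -v /mnt/hdd/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
```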

Go with straight Debian. You can easily run VMs without a GUI; GUIs are a crutch for most people anyway. With that said, I would consider using the virt-manager app to manage my VMs again if I had to spin up a Windows VM. It served me well for many years when I used it.
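For reference, on Debian the headless stack is just libvirt/KVM, and virt-manager can attach to it remotely over SSH later if you ever want the GUI. A minimal sketch (VM name, sizes, and the ISO path are assumptions; adjust to taste):

```
# install the KVM/libvirt stack plus the CLI installer
sudo apt install qemu-system-x86 libvirt-daemon-system virtinst

# create a small headless VM from an ISO, console on serial
sudo virt-install --name testvm --memory 4096 --vcpus 2 \
  --disk size=32 --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12 --graphics none --console pty,target_type=serial
```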