Management of Docker containers in fully virtualized business environments?

What are the best practices, state of the art, etc. for running Docker containers in fully virtualized business environments?

In the homelab environment: set up an Ubuntu Docker host, install Portainer, and run your Docker containers on one VM.
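
Portainer itself is just another container pointed at the Docker socket. The usual CE deployment looks roughly like this (a sketch from memory of the Portainer docs, so check theirs for the current image tag and ports):

```sh
# Persistent volume for Portainer's own data, then the container itself:
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```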

In the professional environment: Photon OS? What about Docker management? What about vMotioning or backing up Docker containers? Are there any security practices/concerns… Docker container isolation?

In production you are looking for a Kubernetes cluster. That gives you the migration and management you are looking for. I used RKE2 with Rancher for the GUI management.
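
If you go that route, the RKE2 quick start is only a couple of commands (paraphrased from memory of the RKE2 docs, so verify against the current install guide), and Rancher then installs on top of the cluster via its Helm chart:

```sh
# Bootstrap the first server node:
curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable --now rke2-server.service

# The kubeconfig and a bundled kubectl land in fixed paths:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes
```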

Kubernetes is a whole new skill set. Unless you are prepared for that, I am not sure I would jump straight to it. What kind of SLAs do you need to meet? Are there any minimum RTO/RPO specifications? All of those data points will help you decide on the best solution. Also, I wouldn’t bother backing up Docker containers, just their persistent data.
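
For the persistent data, the pattern from the Docker docs is a throwaway container that tars a named volume out to a bind mount. The volume name and paths here are placeholders:

```sh
# Back up the named volume "app_data" to ./backups on the host:
docker run --rm \
  -v app_data:/data:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf "/backup/app_data-$(date +%F).tar.gz" -C /data .

# Restoring works the same way in reverse (filename is an example):
docker run --rm \
  -v app_data:/data \
  -v "$(pwd)/backups":/backup \
  alpine tar xzf /backup/app_data-2024-01-01.tar.gz -C /data
```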

I’m setting up Docker for internal business apps like Paperless-NGX, Immich, and Odoo, all web-based and not mission-critical.

Would a Debian + Docker + Traefik (or Nginx Proxy Manager) stack be solid for a small internal setup, or would Docker Swarm / lightweight Kubernetes make more sense?

I’m also working on hardening Docker. I just learned that it writes its own iptables rules on Debian, bypassing the host firewall for published ports, which opened up a security risk. I’m planning to try binding published ports to localhost, but I’m not sure if that’s actually best practice. Curious how others handle that.
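
Concretely, the two mitigations I’ve seen suggested look like this (the image, interface, and subnet are just examples):

```sh
# Publish a service on loopback only, so nothing off-host can reach it directly
# and the reverse proxy is the only thing exposed on the LAN:
docker run -d --name app -p 127.0.0.1:8080:80 nginx

# Or filter in the DOCKER-USER chain, which Docker consults before its own
# auto-generated rules and never rewrites:
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```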

I think Debian, Docker, and a proxy of your choice is fine. What would you gain by running it in Swarm or K3s? You get more uptime/high availability if your nodes are on separate physical hosts, but not if they are all on the same machine. And even then, if you don’t have an HA network, what’s the point?

To me, the more important thing is data security/backups. I run a Synology NAS with an NVMe read/write cache, connected to my servers via a 10GbE network. I don’t store any data on my servers: all of my Docker Compose files use the Docker NFS driver, and all of my persistent volumes are stored on the NAS in a RAID 5. The NAS backs up daily to a local USB drive as well as two different cloud providers (Synology C2 and Cloudflare R2).

My Synology is on a UPS, and the UPS is plugged into an EcoFlow Delta 2 power station, which is plugged into the wall. I have a NUT server set up to shut everything down gracefully if the Delta 2 runs out of power and the UPS goes on battery. As a result, I am very comfortable with my data storage setup, and it is plenty fast with the NVMe read/write cache and 10GbE network.
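
For what it’s worth, the “NFS driver” here is really the local volume driver with NFS mount options. The CLI form looks like this, with the server address and export path as examples; in a Compose file the same options go under driver_opts on a named volume:

```sh
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.10.5,rw,nfsvers=4 \
  --opt device=:/volume1/docker/appdata \
  appdata
```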

If I have a failure on my Proxmox server, or in the Debian VM hosting my Docker containers, it is not an issue to restart or even rebuild those from backups.

With regard to hardening Docker/iptables, I don’t run a firewall on my Docker host. I have firewalls on the Proxmox host, as well as pfSense as my router/firewall at the edge of the network. I think making sure your overall network is secure (VLANs, good firewall rules, pfBlockerNG-devel or other outbound IP blocking, and CrowdSec) is a good start. You can control lateral movement with firewalls in Proxmox.