I only recently noticed there are Docker containers of some popular Linux distros (such as Debian, Ubuntu, openSUSE [Leap & Tumbleweed], & Arch).
What would the pros & cons of using each be?
For example, right now I have a Debian VM running Pi-Hole with Unbound. Depending on the answers, I was thinking of recreating that in a Debian container.
Another potential use for me would be to play around with and learn a different distro (I’m used to Debian, so I was thinking of trying out openSUSE Tumbleweed).
I’m familiar with VMs, but still relatively new to containers. I only have 2 running so far (UniFi Controller & Portainer), but I am planning on spinning up about another 6+ containers (assuming I’m able to get them configured correctly).
One of them will be containrrr/watchtower to automate the updates of all of the containers.
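For what it's worth, the main thing Watchtower needs is access to the Docker socket so it can pull new images and restart the other containers. A minimal compose sketch along the lines of its docs (the restart policy is just a sensible default, not anything required):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # the socket mount is what lets Watchtower manage your other containers
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```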
Some apps are more comfortable on a Debian-based OS, especially when it comes to dependencies.
A small example: if you want to build HandBrake on an apt-based distro, you can pretty much fire off one line to install all the dependencies, whereas on an rpm/dnf-based distro you need to add more repositories first, which adds more layers and ends up with a bigger container image.
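To make the one-liner point concrete: Debian ships HandBrake, so `apt-get build-dep` can pull its entire build-dependency set in one shot. A hypothetical Dockerfile sketch (it assumes adding a deb-src entry, since the stock image has none):

```dockerfile
# Hypothetical sketch: grabbing HandBrake's full build-dependency set on Debian.
FROM debian:bookworm
# The stock image has no deb-src entries, so add one for build-dep to use
RUN echo "deb-src http://deb.debian.org/debian bookworm main" >> /etc/apt/sources.list \
    && apt-get update \
    && apt-get build-dep -y handbrake
```

On an rpm/dnf-based base you would typically have to wire up a third-party repository (RPM Fusion or similar) before an equivalent `dnf builddep` could resolve everything, which is the extra-repository step described above.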
Other than size, there is also the question of the system C library: glibc versus musl (as in Alpine Linux). If your app can run against musl and speed is not an issue, why not use a smaller base image?
When you only run one or two containers, a 1 GB image might not feel like much, but when you have to deploy it in Swarm or via Helm/Kubernetes for 100 instances, boy, that would be 100 GB total. If an Alpine-based image offers a way, way smaller size, people usually prefer to use that instead.
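The back-of-the-envelope math behind that point, as a toy sketch (the image sizes are illustrative assumptions, not measured values):

```python
# Worst-case fleet storage if every instance pulls its own copy of the image.
# Sizes are illustrative assumptions: ~1 GB for a Debian-based image,
# ~50 MB for an Alpine-based one.

def fleet_storage_mb(image_size_mb: int, instances: int) -> int:
    """Total MB consumed if each of `instances` pulls the image separately."""
    return image_size_mb * instances

debian_total = fleet_storage_mb(1000, 100)  # 1000 MB x 100 -> 100000 MB (~100 GB)
alpine_total = fleet_storage_mb(50, 100)    # 50 MB x 100   -> 5000 MB (~5 GB)
print(debian_total, alpine_total)
```

In practice each node caches an image once, so real usage scales with node count rather than replica count, but the ratio between the two bases is the same either way.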
Not really answering my question, but still some good info nonetheless.
As for your first point, I have also noticed some apps are only packaged for Debian-based systems (only deb files, no rpm files).
I am going to try hard not to be biased. I used to be a guy who did everything in VMs until I got my hands on Docker and really dug deep into it. Now I try to put everything in a container if I can. That said, here is my experience with both.
VM cons:
- They take up more of the physical server’s resources, with higher overhead
- Installing the OS for a new service takes a while (not fast on deployment)
- Installing software by following instructions invites human error
- Automating patching of services and the OS can be a drag unless you are good with Ansible or something similar
- Online install scripts can be scary if you don’t really know what they are doing

VM pros:
- More comfortable to work with a full OS
- Linux Snaps are getting pretty good at installing quite a few services

Container cons:
- Limited knowledge of how they work might lead to frustration
- Out of the box you expose your services as ports on the same server IP address, unless you get into the different networking options available for containers (MacVLAN, IPVLAN, bridged, etc.)
- Software may not be supported as a container

Container pros:
- Consistent, reproducible deployments without human error
- Faster deployment
- Persistent data for containers is great when you want to keep all your configuration and you accidentally blow away a container: just point the persisted data at a new container and it’s like nothing happened
- Automation: Watchtower keeps all containers up to date, and Docker Compose builds a full config file for an entire service, i.e. services that need multiple containers working together to function, something like Graylog that needs a DB and a web frontend, for example
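As a shape sketch of that last point (service names, images, ports, and the volume are illustrative placeholders; a real Graylog stack also needs a search backend and a handful of required environment variables):

```yaml
services:
  db:
    image: mongo:6               # placeholder tag
    volumes:
      - db_data:/data/db         # persistent data survives blowing away the container
  app:
    image: graylog/graylog:5.2   # placeholder tag
    depends_on:
      - db
    ports:
      - "9000:9000"              # web frontend
volumes:
  db_data:
```

One `docker compose up -d` then brings up the whole multi-container service in order.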
In the end I think it comes down to personal preference and how you want to handle your environment. If you are old school and want VMs, then go for it. If your mindset is more about automation and spinning things up quickly to have some fun, then Docker is the way to go.
The way I see it, back in the day physical servers had to be deployed by hand, and then came virtual machines. Once there were virtual machines, along came containers. I think containers are the way forward.
Thanks for the extensive reply.
I don’t know anything about IPVLANs yet, but it appears (in Portainer at least) that macVLANs each require a physical adapter. And I’m still having some issues getting it to work properly.
Also, since XCP-ng only recognizes one of my 2 physical adapters (the Intel one), that means I can only have 1 macVLAN.
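For reference, a macvlan network can also be declared in a compose file rather than through Portainer. A sketch with the parent NIC and addresses as placeholder assumptions (each macvlan network binds to exactly one parent interface, which is why one recognized NIC limits you to one):

```yaml
networks:
  lan_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0               # placeholder: the one NIC the hypervisor exposes
    ipam:
      config:
        - subnet: 192.168.1.0/24 # placeholder values for your LAN
          gateway: 192.168.1.1
```

Containers attached to this network get their own LAN IP instead of a port on the host’s address.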
I have downloaded the Watchtower image, getting ready to deploy it.
And multi-container services (such as Zabbix) are taking me a bit longer to wrap my head around to get set up properly.
Speaking of Ansible, that’s another thing on my list to learn. After I solve my current issues.
I made this Docker Compose file a long time ago for building a Zabbix environment. You might need to change the images to a newer Zabbix version, but maybe this will help you get started and understand what is going on.
```yaml
command: mysqld --character-set-server=utf8 --collation-server=utf8_bin
- ZBX_HOSTNAME=Zabbix server
```
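Those two lines are only fragments of the original file. A fuller sketch of the usual three-service layout from the Zabbix Docker images (passwords and tags are placeholders, and the surviving `command:` and `ZBX_HOSTNAME` lines are folded back in) might look roughly like this:

```yaml
services:
  mysql-server:
    image: mysql:8.0
    command: mysqld --character-set-server=utf8 --collation-server=utf8_bin
    environment:
      - MYSQL_DATABASE=zabbix
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=changeme        # placeholder
      - MYSQL_ROOT_PASSWORD=changeme   # placeholder
    volumes:
      - mysql_data:/var/lib/mysql
      # note: older Zabbix images required importing the DB schema into MySQL manually

  zabbix-server:
    image: zabbix/zabbix-server-mysql:latest
    depends_on:
      - mysql-server
    environment:
      - DB_SERVER_HOST=mysql-server
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=changeme        # placeholder

  zabbix-web:
    image: zabbix/zabbix-web-nginx-mysql:latest
    depends_on:
      - zabbix-server
    ports:
      - "8080:8080"
    environment:
      - DB_SERVER_HOST=mysql-server
      - ZBX_SERVER_HOST=zabbix-server
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=changeme        # placeholder

  zabbix-agent:
    image: zabbix/zabbix-agent:latest
    environment:
      - ZBX_HOSTNAME=Zabbix server
      - ZBX_SERVER_HOST=zabbix-server

volumes:
  mysql_data:
```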
This will give me a perfect opportunity to try out the Portainer Stack function.
I just tried your Docker Compose script, only changing the Zabbix versions & the database passwords.
For example, I changed to `zabbix/zabbix-server-mysql:latest`, since the documentation mentions that `latest` automatically chooses the Alpine version.
The issue I am having is that I get a database error when I try to open the web UI.
Sorry, I should have been more descriptive. That builds the environment, but you also need to populate the database by running a command on the MySQL container.
That Docker Compose file was me messing around a while back, trying to stand it up myself, and I did the configuration manually.
The best thing to do if you want a complete setup with no intervention is to follow the blog post. Your setup would have worked if you had populated the DB.
No worries. I thought I must have been missing something.