Linux distro container vs VM

I only recently noticed there are Docker containers of some popular Linux distros (such as Debian, Ubuntu, openSUSE [Leap & Tumbleweed], & Arch).

What would the pros & cons of using each be?

For example, right now I have a Debian VM running Pi-hole with Unbound. Depending on the answers, I was thinking of recreating that in a Debian container.

Another potential use for me would be to play around with and learn a different distro (I’m used to Debian, so I was thinking of trying out openSUSE Tumbleweed).
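For what it’s worth, trying out a distro’s userland in a throwaway container is a one-liner (the `opensuse/tumbleweed` image is the one published on Docker Hub; note the container shares the host kernel, so this only exercises userland):

```shell
# Start an interactive Tumbleweed shell; --rm deletes the container on exit
docker run --rm -it opensuse/tumbleweed bash
# Inside, you can poke at the distro's tooling, e.g.:
#   zypper refresh && zypper install htop
```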

I’m familiar with VMs, but still relatively new to containers. I only have 2 running so far (UniFi Controller & Portainer), but I am planning on spinning up about another 6+ containers (assuming I’m able to get them configured correctly).

One of them will be containrrr/watchtower to automate updating all of the containers.

Thanks

Some apps are more comfortable on a Debian-based OS, especially when it comes to dependencies.

A small example: if you want to build HandBrake, on an apt-based distro you can pretty much fire off one line to install all of the dependencies, while on an rpm/dnf-based distro you need to add more repositories first, which adds more layers and ends up with a bigger container image.
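The dependency point can be sketched as a pair of Dockerfiles (the package names here are illustrative, not an exact HandBrake build recipe):

```dockerfile
# apt-based: one RUN layer pulls everything from the default repos
FROM debian:bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git libx264-dev \
    && rm -rf /var/lib/apt/lists/*

# dnf-based: often needs an extra repo (e.g. RPM Fusion) enabled first,
# which means an extra layer and usually a bigger final image:
#   FROM fedora:40
#   RUN dnf install -y <rpmfusion-release-rpm>
#   RUN dnf install -y gcc git x264-devel
```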

Beyond size, there is also the choice of system library: glibc versus musl (as in Alpine Linux). If your app can run against musl and speed is not an issue, why not use a smaller base image?

When you run only one or two containers, a 1 GB image might not feel like much, but when you have to deploy it in Swarm or via Helm/Kubernetes across 100 instances, boy, that would be 100 GB total. If an Alpine-based image is far smaller, people usually prefer that instead.

Not really answering my question, but still some good info nonetheless.

As for your first point, I have also noticed some apps are only packaged for Debian-based systems (only deb files, no rpm files).

I am going to try hard not to be biased. I used to be a guy who did everything in VMs until I got my hands on Docker and really dug in deep with it. Now I try to put everything in a container if I can. With that said, here is my experience with both.

VM Cons:

  • They consume more of the physical server’s resources, with higher overhead
  • Installing the OS for each new service takes a while (not fast to deploy)
  • Installing software by hand from written instructions leaves room for human error
  • Automating patching of services and the OS can be a drag unless you are good with Ansible or something similar
  • Online install scripts can be scary if you don’t really know what they are doing

VM Pros:

  • More comfortable with working with a full OS
  • Linux Snaps are getting pretty good at installing quite a few services

Container Cons:

  • Limited knowledge of how it works might lead to frustration
  • Out of the box you expose your services as ports on the same server IP address, unless you get into the different networking modes available to containers (macvlan, ipvlan, bridge, etc.)
  • Software may not be supported as a container

Container Pros:

  • Consistent, reproducible deployments without human error
  • Faster deployment
  • Persistent data is great: if you accidentally blow away a container, all you need to do is point the persisted data at a new container and it’s like nothing happened.
  • Automation: Watchtower keeps all containers up to date, and Docker Compose lets you define an entire service in one config file. That matters for services that need multiple containers working together, such as Graylog, which needs a DB and a web frontend.
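The persistent-data point is just a volume (or bind mount) in the compose file. A minimal sketch for the Pi-hole case mentioned earlier, using the documented Pi-hole config paths (the host-side paths are an assumption):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    volumes:
      # Config lives outside the container; recreate the container and
      # point these paths back at it, and it is as if nothing happened
      - './pihole/etc-pihole:/etc/pihole'
      - './pihole/etc-dnsmasq.d:/etc/dnsmasq.d'
```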

In the end I think it comes down to personal preference for how you want to run your environment. If you are old school and want VMs, then go for it. If your mindset is more about automation and spinning things up quickly to have some fun, then Docker is the way to go.

The way I see it, back in the day there were physical servers that needed to be deployed by hand and then came virtual machines. Then when there were virtual machines there came containers. I think containers are the way to move forward.


Thanks for the extensive reply.

I don’t know anything about ipvlan yet, but it appears (in Portainer at least) that each macvlan requires its own physical adapter. And I’m still having some issues getting it to work properly.

Also, since xcp-ng is only recognizing one of my 2 physical adapters (the Intel one), it means I can only have 1 macvlan.
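For reference, a macvlan network in a compose file is tied to exactly one parent interface, which matches the one-NIC limitation described above. A sketch, where the interface name, subnet, and gateway are assumptions about the LAN:

```yaml
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0        # the single NIC the host exposes
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```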

I have downloaded the Watchtower image, getting ready to deploy it.
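The usual Watchtower deployment is a one-service compose file that mounts the Docker socket so it can inspect and restart the other containers; the schedule is optional and the cron expression here is just an example:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      # Watchtower talks to the Docker API through the host socket
      - /var/run/docker.sock:/var/run/docker.sock
    # Optional: check for updates daily at 04:00 instead of the default interval
    command: --schedule "0 0 4 * * *"
```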

And multi-container services (such as Zabbix) are taking me a bit longer to wrap my head around to get set up properly.

Speaking of Ansible, that’s another thing on my list to learn. After I solve my current issues.

Cheers

I made this Docker Compose file a long time ago for building a Zabbix environment. You might need to change the images to a newer Zabbix version, but maybe this will help you get started and understand what is going on. :)

version: '3.8'

networks:
  network-zabbix:
    driver: bridge

services:
  mysql:
    container_name: mysql
    image: mysql:latest
    networks:
      - network-zabbix
    ports:
      - '3306:3306'
    volumes:
      # MySQL stores its data under /var/lib/mysql; mount there so the DB persists
      - './zabbix/mysql:/var/lib/mysql'
    command: mysqld --character-set-server=utf8 --collation-server=utf8_bin
    environment:
      - MYSQL_ROOT_PASSWORD=carryontech
      - MYSQL_DATABASE=zabbix
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=carryontech

  zabbix-server:
    container_name: zabbix-server
    image: zabbix/zabbix-server-mysql:alpine-5.0-latest
    networks:
      - network-zabbix
    links:
      - mysql
    restart: always
    ports:
      - '10051:10051'
    volumes:
      - './zabbix/alertscripts:/usr/lib/zabbix/alertscripts'
    environment:
      - DB_SERVER_HOST=mysql
      - MYSQL_DATABASE=zabbix
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=carryontech
    depends_on:
      - mysql

  zabbix-frontend:
    container_name: zabbix-frontend
    image: zabbix/zabbix-web-nginx-mysql:alpine-5.0-latest
    networks:
      - network-zabbix
    links:
      - mysql
    restart: always
    ports:
      - '8113:8080'
    environment:
      - DB_SERVER_HOST=mysql
      - MYSQL_DATABASE=zabbix
      - MYSQL_USER=zabbix
      - MYSQL_PASSWORD=carryontech
      - PHP_TZ=America/Chicago
    depends_on:
      - mysql
   
  zabbix-agent:
    container_name: zabbix-agent
    image: zabbix/zabbix-agent2:alpine-5.0-latest
    user: root
    networks:
      - network-zabbix
    links:
      - zabbix-server
    restart: always
    privileged: true
    volumes:
      - /var/run:/var/run
    ports:
      - '10050:10050'
    environment:
      - ZBX_HOSTNAME=Zabbix server
      - ZBX_SERVER_HOST=zabbix-server
    depends_on:
      - zabbix-server

This will give me a perfect opportunity to try out the Portainer Stack function.

Thanks

@xMAXIMUSx

I just tried your docker compose script, only changing the Zabbix versions & the database passwords.

For example: I changed `zabbix/zabbix-server-mysql:alpine-5.0-latest` to `zabbix/zabbix-server-mysql:latest`, since the documentation mentions that `latest` automatically chooses the Alpine variant.

The issue I am having is I’m getting a database error when I try to open the web ui.

[screenshot: database error in the Zabbix web UI]

Sorry. I should have been more descriptive. That builds the environment but you also need to populate the database by running a command on the MySQL container.

That docker compose file was me messing around a while back trying to stand it up myself and I manually did the configuration.

The best thing to do if you want a complete setup with no intervention is to follow the blog post. Your setup would have worked if you populated the DB :slight_smile:

https://blog.zabbix.com/zabbix-docker-containers/

No worries. I thought I must have been missing something.

Thanks

“Containers” exist because the various Linux “distributions” have made a complete hash out of Linux Userland from people fighting over how many daemons can dance on the head of some bizarre element of orthodoxy. The kernel enjoys having an absolute dictator who does a remarkably good job of balancing “new new stuff!” with “don’t break the family china”. The Unix world suffered that nightmare primarily dealing with X, although there was the tar sands of POSIX compatibility - demanded by all and used by none, except for marketing bakeoffs.

To one digit of precision, FreeBSD jails and the equivalent machinery underlying Linux containers are, well, equivalent. They both have the advantage of having one copy of the operating system, or at least a very large fraction thereof, with just enough multiplexing atop that so processes see a big, empty but familiar looking space around them.

The advantage is huge compared with running multiple copies of the entire operating system, aka VMs. There is only one copy of I/O, scheduling, network machinery, memory allocation, etc, etc, etc, rather than having a full copy of all of that in each VM and fighting with each other with the Hypervisor as referee.

This has led to “lighter-weight” OS images which omit much of the clever machinery developed for the full operating system, because the Hypervisor is going to have the final say anyway. Still, VMs have costs that are not small. VMs are also invoked for security purposes, although achieving “gas-tight” security on any VM platform is much more difficult than people initially assume. Again, though, the price is acceptable in the right circumstances.

So… you pays your money and you takes your chances.

-mo

Sorry for the late reply. I had to put my computer stuff on hold while working on my latest electrical course.

Thank you for all your replies, though it seems you have been talking about VM vs container as a whole.

It appears I was not clear. I was talking about entire Linux distros deployed as a container vs running that same distro as a VM.

This is what I was asking about. What are your thoughts, especially if you have tried one?

Below is a screenshot from Docker Hub showing full Linux distros that can be spun up as a container.