Technical Reference: Virtualization (KVM/Xen) vs. System Containers (LXC) vs. App Containers (Docker)
This guide breaks down the architectural distinctions, maintenance lifecycles, and performance characteristics of modern virtualization and containerization.
1. Technical Comparison Matrix
Feature        | Virtual Machines (KVM/Xen)                        | System Containers (LXC)                   | App Containers (Docker)
---------------|---------------------------------------------------|-------------------------------------------|----------------------------------------------
Abstraction    | Hardware. Emulates CPU, RAM, NIC.                 | OS. Isolates the userland.                | Process. Isolates a single app.
Kernel         | Independent guest kernel.                         | Shared host kernel.                       | Shared host kernel.
OS support     | Any (Linux, Windows, BSD).                        | Linux guests only.                        | Linux/Windows (must match the host).
Density        | Low. Full OS plus virtual hardware.               | High. Full OS userland and systemd.       | Very high. Single process; no full userland.
Live migration | Native and robust.                                | Complex (CRIU); not supported in Proxmox. | N/A (orchestrators reschedule instead).
I/O speed      | Block-level to the guest OS; more overhead.       | Native (direct filesystem access).        | Layered/CoW (native via volumes).
Memory         | Static/reserved; dynamic only with guest support. | Dynamic/elastic.                          | Dynamic/elastic.
2. Resource Contention & Density
Virtualization (KVM/Xen)
Mechanism: Uses CPU Pinning and Memory Reservation.
Isolation: VMs "lock" their allocated RAM. If you assign 16GB to a VM, that 16GB is removed from the host pool. This prevents "noisy neighbors" but limits density.
Ballooning (KVM & Xen): A "cooperative" method where a driver inside the guest (virtio-balloon for KVM, xen-balloon for Xen) releases unused RAM back to the host. This requires cooperation from the guest OS and from the applications running on it.
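Whether ballooning is even possible on a given KVM guest can be checked from inside that guest. A minimal sketch, assuming a Linux guest where the driver is built as a loadable module:

```shell
#!/bin/sh
# Check (from inside a Linux KVM guest) whether the virtio balloon
# driver is loaded. If it was compiled into the kernel rather than
# built as a module, it will not appear in /proc/modules.
MOD=virtio_balloon
if [ -r /proc/modules ] && grep -q "^$MOD" /proc/modules; then
    echo "$MOD loaded: the host can reclaim idle guest RAM"
else
    echo "$MOD not listed (not a KVM guest, or driver is built-in)"
fi
```

The same idea applies to Xen guests with the xen-balloon driver; only the module name differs.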
Overhead: High. Each VM requires its own guest kernel (512MB+ just to idle).
Deduplication (KSM for KVM / Mem-Sharing for Xen): A method where the host scans for identical data pages and merges them into a single physical RAM address.
Pros: Massive density increase; can run many identical OSs with a fraction of the RAM.
Cons: High CPU overhead for scanning; potential security risks (Side-channel attacks).
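On a KVM host, KSM activity is exposed as plain files under sysfs. A minimal sketch using the standard KSM paths; all counters read zero when scanning is disabled:

```shell
#!/bin/sh
# Inspect Kernel Samepage Merging on a KVM host. pages_sharing counts
# deduplicated page mappings currently in use; pages_shared counts the
# unique physical pages that back them.
KSM=/sys/kernel/mm/ksm
if [ -d "$KSM" ]; then
    echo "run:           $(cat "$KSM/run")"   # 1 = scanning enabled
    echo "pages_shared:  $(cat "$KSM/pages_shared")"
    echo "pages_sharing: $(cat "$KSM/pages_sharing")"
else
    echo "KSM not available on this kernel"
fi
```

A high pages_sharing/pages_shared ratio indicates many identical guests benefiting from deduplication.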
Containers (LXC & Docker)
Mechanism: Uses cgroups (Control Groups).
Isolation: Resources are elastic. A container only uses the RAM its processes actually need.
Density: Very High. You can over-provision 100 containers on a host that could only handle 10 VMs because containers share the host's idle resources.
Risk: Susceptible to the OOM (Out of Memory) Killer. If one container leaks memory, the host kernel may kill it to save the system.
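The elastic limits described above are themselves just files under the cgroup filesystem. A minimal sketch that prints the memory ceiling the kernel enforces on the current cgroup (cgroup v2 layout assumed, with a v1 fallback; "max" means unlimited):

```shell
#!/bin/sh
# Print the memory limit enforced on the current cgroup. Container
# runtimes (Docker, LXC) set exactly these files when you configure a
# memory limit; exceeding it is what invokes the OOM killer.
CG=/sys/fs/cgroup
if [ -f "$CG/memory.max" ]; then                       # cgroup v2
    echo "memory.max: $(cat "$CG/memory.max")"
elif [ -f "$CG/memory/memory.limit_in_bytes" ]; then   # cgroup v1
    echo "limit_in_bytes: $(cat "$CG/memory/memory.limit_in_bytes")"
else
    echo "no memory controller visible in this environment"
fi
```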
3. Maintenance, Updates, & Migration
Virtual Machines (KVM/Xen)
Updates: Managed with the guest's own OS tools, independently of the host; e.g., Windows uses Windows Update and Debian uses apt.
Live Migration: Excellent. The Hypervisor moves the active RAM state between hosts with zero downtime.
System Containers (LXC)
Updates: Treated as persistent servers: you update packages inside the container. Since it shares the host kernel, it never needs a "kernel reboot"; restarting the container is near-instant.
Live Migration: Difficult. Requires CRIU and identical host kernel versions; not supported in Proxmox.
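The in-place update cycle can be sketched with the classic LXC tooling. The container name "web1" and a Debian-family guest are placeholder assumptions:

```shell
#!/bin/sh
# Hedged sketch: patch a running LXC system container in place, then
# restart it (near-instant, since no kernel reboot is involved).
# "web1" is a placeholder container name; apt assumes a Debian guest.
NAME=web1
if command -v lxc-attach >/dev/null 2>&1; then
    lxc-attach -n "$NAME" -- sh -c 'apt-get update && apt-get -y upgrade'
    lxc-stop -r -n "$NAME"   # -r = reboot the container, not the host
else
    echo "LXC tools not installed; commands shown for illustration"
fi
```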
Application Containers (Docker)
Updates: Immutable. You do not patch a running container. You pull a new image, destroy the old container, and start a new one.
Live Migration: Not supported. Orchestrators simply kill the container on Host A and start it fresh on Host B.
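In practice the pull/destroy/start cycle is only three commands. A minimal sketch; the image and container names are placeholders:

```shell
#!/bin/sh
# Hedged sketch of an immutable update: pull the new image, destroy
# the old container, start a fresh one from the new image.
IMAGE=nginx:latest   # placeholder image
NAME=myapp           # placeholder container name
if command -v docker >/dev/null 2>&1; then
    docker pull "$IMAGE"
    docker rm -f "$NAME" 2>/dev/null || true
    docker run -d --name "$NAME" "$IMAGE"
else
    echo "docker not installed; commands shown for illustration"
fi
```

Any persistent state must live in volumes or bind mounts, or it is lost when the old container is destroyed.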
4. Security & Fault Isolation
KVM/Xen: Hardware-level boundary. A "Kernel Panic" in a VM only crashes that guest. Best for untrusted code or multi-tenant environments.
LXC/Docker: Shared Kernel. A "Kernel Panic" inside a container crashes the entire physical host. Security relies on Namespaces; escaping the container via a kernel exploit grants access to the physical hardware.
5. Summary: Which should you choose?
Choose Virtual Machines (KVM/Xen) when:
You need strong isolation or have strict security/compliance requirements.
You need to run a different OS (e.g., Windows on a Linux host).
Security matters more than density.
Choose System Containers (LXC) when:
You want VM-like behavior (persistence/SSH) without the hardware overhead.
You are running Linux-only services and want high performance.
You want to share hardware (like a GPU) across multiple instances easily.
Choose Application Containers (Docker) when:
You are deploying a specific application/microservice, not a full "server."
You want portability across different environments (Laptop to Cloud).
You want immutable containers maintained externally and pulled in as needed (deploy via docker-compose).
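That externally maintained, pulled-in workflow can be captured in a small compose file. A hedged example; the image name, port, and volume path are placeholders, and `pull_policy: always` makes each `docker compose up` fetch the latest image:

```yaml
# docker-compose.yml -- hedged example; names and paths are placeholders.
services:
  web:
    image: nginx:latest       # maintained externally, pulled as needed
    pull_policy: always       # re-pull on every "docker compose up"
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro   # state lives outside the container
```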
I watched this video on YouTube and read most of the comments.
After trying to post a reply I found that it may not have been saved, so I will post it here.
I've had over two decades of experience in Linux and Linux virtualization technologies and have spent considerable time with all three mentioned in the video titled "Virtual Machines vs LXC vs Docker".
I noticed that many of the comments posted on YouTube were related to LXC.
Although it wasn't mentioned in the video, I'd recommend you check out linuxcontainers.org, where Incus, LXC, and the newish IncusOS are documented and where the user forums are found.
Summary
History/Background…
LXC started around 2008, with the first stable release around 2014. About 1-2 years later, LXD evolved from LXC.
Stephane Graber was one of the Lead Developers for both LXC and LXD.
In 2023, LXD was forked to create the Incus project, which today is developed and supported by many of the original LXC and LXD developers, with Stephane Graber as technical lead.
NOTE: LXD is now developed & supported separately by Canonical (long story there)
Today, Incus supports the following types of instances:
System Containers
"System" containers run full Linux distributions on a shared kernel: the experience is very similar to a virtual machine, but the kernel is shared with the host system.
They have an extremely low overhead, can be packed very densely and generally provide a near identical experience to virtual machines without the required hardware support and overhead.
System containers are implemented through the use of liblxc (LXC).
Application containers
Application containers run a single application from a pre-built image. This kind of container was popularized by the likes of Docker and Kubernetes.
Rather than provide a pristine Linux environment on top of which software needs to be installed, they instead come with a pre-installed and mostly pre-configured piece of software.
Incus can consume application container images from any OCI-compatible image registry (e.g. the Docker Hub).
Application containers are implemented through the use of liblxc (LXC) with help from umoci and skopeo.
Virtual Machines
VMs are a full virtualized system.
Virtual machines are also natively supported by Incus and provide an alternative to "system" containers.
Not everything can run properly in containers.
Anything that requires a different kernel or its own kernel modules should be run in a virtual machine instead.
Similarly, some kinds of device pass-through, such as full PCI devices, only work properly with virtual machines.
To keep the user experience consistent, a built-in agent is provided by Incus to allow for interactive command execution and file transfers.
Virtual machines are implemented through the use of QEMU.
Incus VMs can be Linux or Windows.
Incus "system" containers have over 1000 images available in the project's repository for nearly every modern Linux distro (CentOS, Fedora, Debian, Ubuntu, Alpine, Mint, etc.), supporting AMD64, ARM64, and RISC-V (riscv64).
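Launching the instance types from the public image server can be sketched as follows; the instance names are placeholders:

```shell
#!/bin/sh
# Hedged sketch: launch a system container and a VM from the public
# Incus image server. "c1"/"vm1" are placeholder instance names.
IMG=images:debian/12
if command -v incus >/dev/null 2>&1; then
    incus launch "$IMG" c1         # system container
    incus launch "$IMG" vm1 --vm   # QEMU virtual machine
    incus list
else
    echo "incus not installed; commands shown for illustration"
fi
```

Note how the same `launch` verb covers both instance types; only the `--vm` flag differs.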
Incus has seen tremendous progress in development of capabilities & features by Stephane & the rest of the Incus dev team.
Some of the biggest features of Incus are:
Core API
Secure by Design (through unprivileged containers, resource restrictions, authentication, …)
Intuitive (with a simple, clear API and crisp command line experience)
Scalable (from containers on your laptop to clusters of thousands of compute nodes)
Event based (providing logging, operation, and lifecycle events)
Remote usage (same API used for local and network access)
Project support (as a way to compartmentalize sets of images and profiles)
Instances and profiles
Image based (with images for a wide variety of Linux distributions, published daily)
Instances (containers and virtual-machines)
Configurable through "Profiles" (applicable to both containers and virtual machines)
Backup and export
Backup & recovery (for all objects managed by Incus)
Instance Snapshots (to save and restore the "state" of an instance)
Container & Image transfer (between different Server Hosts, using images)
Instance migration (importing existing instances or transferring them between servers)
Configurability
Multiple storage backends (with configurable storage pools and storage volumes)
Network management (including bridge creation and configuration, cross-host tunnels, …)
Advanced Resource control (CPU, memory, network I/O, block I/O, disk usage and kernel resources)
Device "passthrough" (USB, GPU, unix character and block devices, NICs, disks and paths)
Incus today provides extensive choice & support in the areas of:
Storage
Directory based
BTRFS
LVM
LVM Cluster
ZFS
Ceph RBD
CephFS
Ceph Object
LINSTOR
TrueNAS
Networks
Bridge network (simple)
OVN network
Macvlan network
SR-IOV network
Physical network
Projects
You can use "Projects" to keep your Incus server clean by grouping related users, compute, and networking resources together.
Each project implements an isolated set of user accounts, images, profiles, networks, and storage.
Clustering
Incus can be run in clustering mode. In this scenario, any number of Incus servers share the same distributed database that holds the configuration for the cluster members and their instances. The Incus cluster can be managed uniformly using the Incus client or the REST API.
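Standing up a cluster can be sketched with the `incus cluster` subcommands. The server name "node1" is a placeholder, and additional members join through the interactive `incus admin init` dialog:

```shell
#!/bin/sh
# Hedged sketch: enable clustering on an existing standalone Incus
# server, then list cluster members. "node1" is a placeholder name.
NODE=node1
if command -v incus >/dev/null 2>&1; then
    incus cluster enable "$NODE"
    incus cluster list
else
    echo "incus not installed; commands shown for illustration"
fi
```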
Although it is not intended to be a support forum, you can also get an idea of the many great community-created Incus tools and applications by browsing the Incus sub-Reddit, r/incus.
Incus is a nice project, but for most people Proxmox is a better choice as it's a more complete platform, whereas Incus is a "build your own" toolkit. I know Incus has a web UI, but it's not nearly as full-featured as Proxmox's.