More videos about Docker

I am trying to get a more solid grip on VMs, Docker etc. And if I understand it correctly, when using VMs they cannot share a CPU core. So the number of VMs you can run is effectively limited by the number of cores in your computer. Is this correct?

If you instead use Docker, the containers share resources just like any other programs running on a computer (or a single VM). The downside is that Docker seems a little more complicated to me to use. Especially the networking part of it.

I have a server that I am setting up right now. It is an i5 quad core with 8GB RAM, an SSD system drive and 2 x 4 TB drives in a mirror config.

So far, I have loaded OpenMediaVault, Docker and Portainer on it. I have managed to set up PiHole and get it on a different network address using macvlan. To be honest, I think that is much more complicated than it really needs to be… The next project is to get an NGINX proxy going and set up Let’s Encrypt. Then get NextCloud running. The traffic on the server will be pretty low.

But it would be great to see a video doing a little comparison explaining similarities and differences. Also, more about the networking for Docker, and especially macvlans would be great!

That’s kinda the point of virtualisation: the VMs share the host processor. You can just test it yourself, spin up 20 VMs and see what happens.

Personally I prefer VMs as it gives me options at the cost of resources; currently I have about 20 VMs running which are doing OK on the hardware I have. If you are limited on resources then Docker is the way to go. I’ve seen Docker in Proxmox but not dabbled in it.

I may do an explainer video on the topic, but there is a good write-up over at Backblaze comparing the two technologies: https://www.backblaze.com/blog/vm-vs-containers/

So you are saying I could run xcp-ng on my computer with more than 4 VMs? I thought you had to assign at least 1 core to a VM, and that you should not assign more than one VM per core?

Is it OK to run a NAS in a VM? I need a NAS to control local storage. In addition: NextCloud, Nginx Proxy Manager, PiHole and maybe one or two more. But all will be low traffic, only as support for a single household.

Another idea is to use Duplicati to back up data, encrypted, to a similar setup at a friend’s house. So with mirrored drives, local USB backup and backup to a remote site, I think it should be pretty OK.

My impression is that it is easier to deal with networking on VMs. Is that correct?

Thank you for the link.

Looking forward to your video!

Could you add two more videos?

One is about Duplicati.

The other is about SoftEther. Yes, I know you have said you prefer projects where the code has been audited. But to get there, this software needs more publicity. And it deserves it. There is a lot of stuff in that software that you do not find in any other VPN/routing software. Just the fact that it is made outside of the USA, and it is probably the fastest VPN software you can find (until you have tested it, you cannot say anything else :wink: ). So I would suggest you do a test and a video about it and say what you think. Maybe even get in touch with the devs.

No matter what - thank you for all the good stuff you are doing. If you ever come to Brazil, I would love to meet up!

@Oceanwatcher I would advise you to pause for a moment.

What applications and/or services are you wanting and needing?

Once you have the answer to these questions you can decide the best way to host the services. Many applications can run in Docker or in a VM. The reason you pick one or the other will be decided once you have a reasonable starting list of what you want to run.

For example, if you want to have one NAS in your house, there is no reason to purchase a high-power virtualization server; even adding the complexity of Docker may be overkill. But if you wanted a web server, NAS, firewall, VPN, and home automation, you need more hardware, and decisions about what to host where will need to be made.

As always, if it is purely for learning purposes then you can do things just to do them. But it is best to focus on a real-life scenario when possible.

@Thedannymullen Sorry to point out the obvious, but I think I have been very clear about both what hardware I have at the moment and what I would like to run :wink:

But - I don’t mind repeating.

A NAS - I am familiar with both FreeNAS and OpenMediaVault.
NextCloud - I am already running it at home on a Raspberry Pi to try it out.
PiHole - already up and working
Nginx Proxy Manager to get a little better control of things, including Let’s Encrypt
Portainer to manage Docker
Duplicati to back data up offsite

Anything else will be just for learning. All of it will be pretty low traffic.
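For reference, Nginx Proxy Manager itself usually runs as a container; a minimal docker-compose sketch could look like the following (the jc21 image and ports 80/443/81 are its standard ones, the host paths are just examples):

```yaml
version: "3"
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP, also used for Let's Encrypt challenges
      - "443:443"  # HTTPS
      - "81:81"    # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

Once it is up, the Let’s Encrypt certificates are requested from the web UI on port 81, so there is no separate certbot container to manage.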

I have a server that I am setting up right now. It is an i5 quad core with 8GB RAM (can easily be bumped to 16GB), an SSD system drive and 2 x 4 TB drives in a mirror config.
And I am already running OpenMediaVault with Docker, Portainer and PiHole. DDNS is set up. It is running fine so far.
From what I read on the site Tom linked to, Docker uses less system resources. The part I am struggling with (among many things) is the networking of Docker. I like macvlan and want to understand it better. I prefer keeping each “server” on a separate IP so I do not have to worry about conflicting ports.

That is all for now, but if I get all working the way I want, the idea is to replicate this several places and use each one in a buddy backup system.
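The macvlan setup described above boils down to two commands; the interface name, subnet and addresses here are examples for a typical 192.168.1.0/24 LAN and need adjusting:

```shell
# Create a macvlan network bound to the host NIC (here eth0)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan

# Give the Pi-hole container its own LAN IP; no port mapping needed
docker run -d --name pihole \
  --network lan --ip 192.168.1.53 \
  pihole/pihole
```

One known gotcha: by default the host itself cannot talk directly to containers on a macvlan network; that is standard macvlan behaviour, not a misconfiguration.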

You are not limited in how many VMs you can install based on cores. I don’t think CPU will be an issue for you, but 8GB RAM is way too low.

You can install a NAS as a VM too; I did it before on ESXi and passed through raw access to the HDDs I used for storage.

Docker can be used for very small applications which you don’t need to tinker with, but for stuff like PiHole, a NAS etc. I would not use it.

Good Luck

You can indeed run more than four VMs with all four cores assigned; how well they run is another matter. I would just set up your VMs with the services you want and see how they go. You can always modify it if it’s moving like a dog. 8GB is too low as @abhay9 says, so max that sucker out; it will be your main bottleneck, and your processor will probably address up to 64GB.
If you are running Debian then you can do a minimal install which is CLI only, so you save a bit on resources without the GUI, and it boots up faster for sure.

Personally I run OMV on its own hardware. You can run OMV in a VM, but you will get better performance if you pass through the HDDs. Again, just test it out. I suppose the general rule is to have three backups.
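As an aside, if the hypervisor happened to be Proxmox (mentioned earlier in the thread), whole-disk passthrough is a one-liner; the VM ID 100 and the disk ID here are placeholders for your own values:

```shell
# Attach a whole physical disk to VM 100 as a virtual SCSI device.
# Use the stable /dev/disk/by-id/ name rather than /dev/sdX,
# since sdX names can change between boots.
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_XXXXXX
```

Other hypervisors (ESXi RDM, xcp-ng) have their own equivalents, but the idea is the same: the VM sees the raw disk, so OMV can manage SMART data and the filesystem directly.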

Whether networking is easier on VMs depends on your knowledge. I suppose you have a single LAN, so it will work. Most of this stuff is trial and error!

Had a quick peek at SoftEther; it’s an alternative to OpenVPN, which is a pretty solid product. I doubt many users would switch over once on OpenVPN. I use AirVPN and their services are based on OpenVPN. I’d be confident it’s secure, and the speeds I get are about 95% of my ISP, so I’m OK there.

However, the main reason I would continue with OpenVPN is that the issues others have faced and resolved are well documented.

I think you need to take a new look :slight_smile:

SoftEther is a VPN server that supports multiple protocols. So it is definitely not just an alternative to OpenVPN.
SoftEther can act as an OpenVPN server, as well as several other protocols: L2TP, Azure etc.
It is also an alternative to a Cisco VPN server, where you can use Cisco clients to connect.
And it is a full networking system where you can do routing.
One thing that a lot of VPN protocols are not good at is speed. I have tested OpenVPN and the SoftEther protocol side by side (I am not only talking about the SoftEther program/server). SoftEther easily beats anything else I have seen. OpenVPN does not even come close.

So yes, there really are many reasons to look closer. SoftEther is not just a protocol. Dismissing something without testing and really checking it out is not very open :wink:

I come from the last century :slight_smile: I have so many mini-projects to do that time and skill are my main constraints, so for sure I’m biased towards OpenVPN because it has served me well for 10 years! There are loads of alternatives out there, but time is less abundant to inspect them. However, as governments want to crack down on encrypted connections, there’s no downside to more choice.
Personally I think you will always have a trade-off between speed and security when it comes to encryption; unless you have the education, it’s pretty tough to work out your optimal encryption settings.

Boy do I know that feeling! Tom will appreciate this: the first computer that my parents bought was a TRS-80 Color Computer. I had been learning on the Apple and Apple II, Apple and IBM clones, and one of the first models of IBM desktops that had a hard drive in it. We also bought a Timex Sinclair computer, and yes, I did spend hours typing away entering BASIC programs from Byte magazine. And hours loading stuff from cassette tape.

LOL, the Timex Sinclair must have been the ZX Spectrum in the UK; that was my first computer! Yes, I too remember entering lines and lines of code from computer magazines… those were the days!

We are now over in the realm of dinosaurs :laughing: And I am one of them!

My first computer was the ZX81, and at school we had a Tandberg EC-10 with two extra terminals and an 8-inch floppy drive. They all shared the 64 KB RAM! And I know I still have two floppies from that system somewhere.
I was on the internet before there was any web to speak of. I used Pine for email, but did not know anyone else who had internet, so I sent an email to the admin and got an email back. So at least it worked. For “browsing” we used Gopher.
I was also a big user of different BBS systems and even ran my own for a while using Wildcat as the BBS software. Fun times!

Here is another old-timer. My first computer was a Texas Instruments TI-99/4A, followed by a C64. I also ran an amateur BBS (on ProBoard) and a FidoNet node in the early nineties (92-96), and my first internet experience was with Gopher, WinWAIS, email and FTP. I remember BBS lists of the first 10 or so websites, which everyone flocked to just to test HTTP with the Mosaic browser :slight_smile:

If you’re interested in making your lab lightweight but more easily configurable than Docker, try LXC/LXD. With those you can build containers which consist of a whole (command-line) Linux distro rather than only a specific app and its dependencies. They consume fewer resources than a full VM and are more configurable than a Docker image.

If you’re more into BSD than linux, BSD jails are comparable.
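To give a feel for the LXD workflow, a minimal session looks roughly like this (the container name `web` is arbitrary, and the image remote/alias may vary by LXD version):

```shell
lxd init --auto                 # one-time setup with default storage/network
lxc launch ubuntu:22.04 web     # start a full Ubuntu system container
lxc exec web -- apt update      # run commands inside it like a normal distro
lxc list                        # show running containers and their IPs
```

Unlike a Docker image, the container keeps running a full init system, so you can SSH in, install packages, and treat it much like a lightweight VM.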