I have struggled with this for years. Am I better off buying cheap enterprise Dell servers, or spending more for low-power embedded devices?
I personally have a full cabinet for rackmounting, and noise isn’t a problem in my basement room. I like the look of the PowerEdge servers all decked out, taking up a bunch of space with gaps between them. I know you can find makers to 3D print “rackmount” solutions for embedded gear, but I think it looks kind of chintzy. Others probably disagree.
Cost-wise it looks like a wash: the power I spend each year running multiple servers that idle at 100 watts, offset by the money I save buying second-hand, versus the electricity I’d save with embedded devices and their higher purchase price. I looked at building a mini-ITX NAS server with an embedded EPYC, but ended up going with a Dell PowerEdge R720 with gen 3 Xeon CPUs and 64 GB of RAM. I would like to upgrade, but TrueNAS Scale with the Plex and PhotoPrism apps works well.
I never really understood “virtualizing” anything except when I was a poor school student who couldn’t afford anything but one piece of hardware. Now I can afford to buy lots of single boxes that each do one thing. What are people’s thoughts?
Lastly, what I actually want to ask everyone specifically is: I am thinking about getting a log server up and running. I see Lawrence has a Graylog server video I will check out. I have run Grafana, but these tools all have their own query languages with steep learning curves. My Grafana instance running on my RPi 4 kept breaking, so I gave up on it. I’m thinking about using a Dell PowerEdge R220 just to run a log server; it is a single-CPU, four-core Xeon box. What should I run to be able to get logs from everything and anything under the sun?
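To give an idea of what I mean by “everything and anything”: ideally any box or script could ship lines to one central collector. A minimal sketch of that, assuming whatever I pick accepts plain syslog (the address and port here are just placeholders):

```python
import logging
import logging.handlers

# The collector address is a placeholder -- point it at whatever ends up
# listening for syslog (Graylog, rsyslog and syslog-ng all accept this).
logger = logging.getLogger("homelab")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("192.168.1.50", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("nightly backup finished on the R220")
```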
I guess it depends on what you run for services. My homelab is modest at best. I am currently running two WordPress websites (one for my blog and one for my wife’s blog), an instance of Nextcloud for my extended family, a Discourse forum for my wife’s website, Home Assistant, Joplin, Bitwarden, Heimdall, Librespeed, Portainer, Uptime Kuma, Cloudflared, Collabora, Watchtower, and Rsyncd, all on the one server, and I have plenty of resources to spare. Paperless-ngx, Firefly III, Wallabag and either Drupal or Joomla are on my roadmap, and I don’t anticipate requiring any new hardware to add those services.

All of this runs on one server built around a Ryzen 5 Pro 5650GE CPU with 64 GB of ECC RAM, and it draws 40 watts at the wall. I do have a second Proxmox node running on an N100 CPU that is my Ansible host and acts as a backup target for my first Proxmox node and my NAS boxes. My entire homelab only draws 140 watts on average. When I first started out I had a refurb HP Z640 as my server, and that machine alone drew 170 watts at idle.

As I said, it’s a pretty simple setup here: an N100 fanless PC running pfSense, a 24-port managed switch, a wireless access point, a Synology two-disk NAS, two full-time Proxmox nodes, a Pi-Star ham radio hotspot, my Ring Alarm base station, a cable modem and a T-Mobile 5G home internet device (I have redundant WANs). All of that draws 140 watts. I only run a few apps in VMs: the two WordPress sites, a Docker host, the Discourse host and a TrueNAS Scale VM. Everything else is a Docker app. So I guess you could say I am in the low-power/heavy-virtualization camp, but I wouldn’t have it any other way.
Speaking specifically about virtualising things: I used to be the same, preferring a dedicated box for everything. However, following further study and enlightenment, I’ve come round to the view that virtualising servers and services is a lot more efficient.
With each server you will have one or two power supplies, and the server will have an idle draw: fans, drives, CPU and memory all consume power and generate heat. That overhead multiplies with each physical server.
Studies identified that most servers were sitting there doing very little work, at sub-10% load. Yet when provisioning a server, you need to account for what it will do at maximum load: power, cooling, network, etc.
By virtualising many servers into one, you remove the need for all those physical systems (cost and provisioning) and consolidate everything onto a single physical host.
Since each server was only doing, say, sub-10% of its possible work, putting many virtual servers on a single machine means that machine is then doing more useful work. A single server at 80% load is more efficient than 20 servers at sub-10% load.
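If it helps to put rough numbers on that, here is a toy sketch; the wattages are invented, purely to show the shape of the effect:

```python
# Invented wattages: an idling server still draws a big share of its
# full-load power, so useful work per watt collapses at low utilisation.
idle_w, max_w = 150, 300

def draw_watts(load):
    # crude linear model between idle and full-load draw
    return idle_w + (max_w - idle_w) * load

low = 0.10 / draw_watts(0.10)    # work per watt at 10% load
high = 0.80 / draw_watts(0.80)   # work per watt at 80% load

print(f"~{high / low:.1f}x more useful work per watt at 80% load")
# with these made-up figures, roughly 4.9x
```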
Then you add in scalability, flexibility and redundancy.
New VMs can be spooled up in minutes. Memory and CPU can be added or removed as needed for the VMs, and they can be rebooted quickly if they deadlock. And if you have multiple VM hosts with shared/pooled storage, VMs can be live-migrated between hosts with little to no downtime, and a high-availability setup can bring them back up on another node if a host fails.
There are many more benefits to virtualisation, and very few pieces of software don’t play nice with it. Do have a look into it. You can run a lot more on a VM host than you can by trying to get hold of another system every time you need to spool up a new server, and the flexibility is unmatched.
You are spot on! I spent 6 years at AWS in the Cloud Economics team. We would look at CPU utilization and memory utilization to size migrations. On average, the CPU utilization over hundreds of enterprise migrations was ~29% and the memory utilization was ~43%. That didn’t include all of the idle servers that we would advise the customer to just retire.
Yeah I was quite surprised when I learned some of the stats and the many benefits. It really was an eye-opening moment, and I am not ashamed to say, steered me away from what was previously a foolish (albeit mostly uninformed) mindset.
Tom, Wendell (L1Techs) and Jeff (Craft Computing) are huge advocates for it, and for very good reasons. They have done a huge number of videos to help people understand and embrace virtualisation, and it’s all based on solid enterprise experience.
Not only that, but for anything running on my Proxmox host, if it’s in the same VLAN, network traffic doesn’t have to go out to the switch or the router. Staying on the virtual bridge is just so much faster than my 10G network, so there are other benefits too.
So I have a Dell R220 that nobody will buy on Marketplace and I don’t want to throw away. Can I do much virtualization on a quad-core Xeon with a max of 32 GB? Proxmox, XCP-ng, and I think XenServer was a thing if it isn’t still. I don’t know where to start. Can anybody link me to some forum posts or good videos?
Aside from Tom’s channel, Tom’s friend Jay LaCroix over at Learn Linux TV has a bunch of great videos you should check out. I am not sure what model Xeon you have, but as I said, I run a Proxmox host on an N100 CPU. That’s not an endorsement of Proxmox over other hypervisors, but I think it demonstrates what can be done with a modest CPU. I also really like Christian Lempa, Techno Tim, Jim’s Garage, Hardware Haven, Raid Owl, Craft Computing, Wundertech, Wolfgang’s Channel and Network Chuck, just to name a few of my favorite home lab people I follow on YouTube.
By all means, that’s more than enough to virtualise.
You can happily over-provision CPU (assign more virtual cores across your VMs than the host physically has), as the hypervisor will schedule the processor work.
What you are limited by is the RAM: you can’t over-provision RAM to a VM (technically you sometimes can, but it is not advisable), which is why many VM hosts have huge amounts of RAM to support their VMs.
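To illustrate the rule of thumb, here is a quick sketch with made-up VM sizes; it shows the idea rather than any particular hypervisor’s interface:

```python
# Hypothetical host and VM sizes, just to illustrate the rule of thumb:
# total vCPUs can exceed the physical cores, total RAM really shouldn't.
host_cores, host_ram_gb = 4, 32

vms = {                       # name: (vcpus, ram_gb) -- invented examples
    "pihole":     (1, 1),
    "docker":     (2, 8),
    "nas":        (2, 8),
    "log-server": (2, 4),
    "test-box":   (2, 4),
}

total_vcpus = sum(cpu for cpu, _ in vms.values())
total_ram_gb = sum(ram for _, ram in vms.values())

print(f"vCPUs: {total_vcpus} on {host_cores} physical cores "
      f"({total_vcpus / host_cores:.1f}x) - fine, the scheduler handles it")
print(f"RAM committed: {total_ram_gb}/{host_ram_gb} GB - keep this under the host total")
```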
If power/heat/noise are not a factor, used enterprise equipment will give you better reliability and a richer feature set. Things like IPMI and a glut of cheap accessories really help.
If I were to go back down that path, a Supermicro BigTwin with four nodes would be high on my list. As it is, I need to get rid of my HP DL360 Gen8 servers; my rack space is about to disappear, so I’m downsizing to mini-style machines like the HP T740 for lower power, heat and noise. For my lab that should be plenty of CPU and RAM (64 GB max), and I’ll learn to live without IPMI.
I personally went with a Dell R630 and R640. I like the reliability and parts availability of used enterprise gear, and I have a 42U rack in my garage. It’s way overkill; I do it because I can. Though I did look into getting some tiny PCs and running Proxmox, etc. on a couple of those.
You could use the Xeon; the limit you have there is more the RAM. 32 GB won’t do for a lot of VMs, but it depends on what applications you run in them and how much RAM they need.
Personally I decided not to use my old Xeon, although the board has IPMI. Old boards and CPUs can be very high on power usage, and depending on where you live this can become quite expensive.
You do not need to decide between small PCs and rackmounted enterprise servers. Another option is building a low-power rackmount server around a mobile CPU. This of course is more effort and more expensive than the other two options. You don’t really need the reliability that enterprise equipment offers, although IPMI is very nice to have.
If you are keen, you can calculate how much power, and thus money, you could save and invest that in your own server build.
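A rough sketch of that calculation; the wattages, electricity price and build cost below are placeholders to swap for your own numbers:

```python
# Placeholder figures -- swap in your own hardware draw and electricity tariff.
old_draw_w, new_draw_w = 170, 40   # e.g. an old Xeon box vs a low-power build
price_per_kwh = 0.30               # your local rate
new_build_cost = 600               # what the replacement build would cost

saved_kwh_per_year = (old_draw_w - new_draw_w) / 1000 * 24 * 365
saved_per_year = saved_kwh_per_year * price_per_kwh
payback_years = new_build_cost / saved_per_year

print(f"~{saved_kwh_per_year:.0f} kWh/year saved "
      f"(~{saved_per_year:.0f} per year), payback in ~{payback_years:.1f} years")
```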
Virtualization makes a lot more sense for the home lab than many individual machines. The reasons have been laid out nicely by @Preybird. It is not only that you get more efficiency; you also get additional benefits like easy snapshotting of machines, and being able to clone a machine and make experimental changes.
Virtualization also has huge benefits for network infrastructure. You can have a much better structured network without needing to invest in lots of network devices; you just need a managed switch (capable of using VLANs). Network latency between virtual machines on the same host is also much lower.