I’ve seen other discussions here about open source alternatives to Active Directory, but my question is: if I’m studying to be a well-rounded IT engineer, what should I be looking at with regard to Active Directory / Domain Controllers? Is it a Microsoft world? Should I concentrate on MS ecosystems, or are the underlying concepts transferable? I would prefer to use open source for learning purposes. Any thoughts would be greatly appreciated.
The real world will use MS Server with Active Directory. The Linux alternative is only good for a really small shop that doesn’t care about features like O365 and only needs basic group policy settings and user management.
You can stand up a Windows Server 2022 box with a 180-day evaluation. Same for Windows 10 Pro. You don’t need a license for testing purposes.
The Windows 10 and 11 evals are only 90 days at a time, but can be rearmed a few times (6?). The server evals are 180 days, and I’m fairly certain you can rearm them 6 times (3 years’ worth of time). I think there are some ways to get a free eval of MS Entra (Azure) but I’d have to look for it again.
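For reference, checking and resetting the eval timer is done with the built-in `slmgr` tool from an elevated command prompt on the eval machine (a quick sketch; output details vary by edition):

```
:: show license state, eval time remaining, and the remaining rearm count
slmgr /dlv

:: reset the evaluation timer (reboot afterward)
slmgr /rearm
```

Running `/dlv` first is worth it so you know how many rearms you have left before you burn one.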
Here is the eval site for your lab:
All you need is a valid email address to register the downloads. A somewhat stout computer running Proxmox, XCP-NG, or even VirtualBox would be all that remains to build a lab.
Thank you guys. I had a feeling that most real world scenarios would be MS Server. I’m running XCP-ng. Could you define “stout”? Cheers.
I assume you are talking about stdout?
Ha, no - I was responding to @Greg_E 's comment “A somewhat stout computer”. I’m just wondering what an appropriate number of CPUs, amount of RAM, and HDD size would be to run MS Server 2022 comfortably.
For lab use, a lot of people get by with 4 cores / 8 threads and 64GB of RAM in the host. You can run Server 2022 on 4 XCP-NG cores and 4GB, but I don’t recommend it for more than lightweight testing; 8 “cores” and 8 or 16GB of RAM work better for me. Hard drive depends on how much stuff you expect to use. With thin-provisioned disks, I’d say 120GB is plenty for almost anything you might want to do with Server 2022, big databases or big storage being the exception. And since they are thin provisioned, I’m normally at or under 60GB (really used) for each of my two AD servers. It’s only an extra 6GB to 10GB for the desktop version of the OS. Yes, you should learn to run without the GUI, but I just don’t have the time to deal with it.
If money is available, you can often find used HP/Dell servers with 20 cores and 128GB of RAM for $200 shipped. These are old dual Xeon E5-xxxx v2 processors, but plenty of power for most labs. My production servers are only Xeon Scalable Silver (10c/20t) with 128GB of RAM, though I’m running into a situation where I may need to upgrade the CPUs and add more RAM. Might be adding an application that needs more CPU to finish faster.
Thanks @Greg_E !
I have a server built on a SuperMicro board, with 2 CPUs and 128GB of RAM.
As long as it is newer than the Xeon X56xx series of processors, you should be fine. When I had my older servers with X5650s, Windows Server 2022 started to have problems; this was a bit over a year ago. I think they just deprecated enough that it wasn’t working well anymore.
If you are on an X10 or maybe X9 series mainboard, I think you will be OK. ServeTheHome has a bunch of little $300-$500 mini computers that they claim are good for a home lab with Proxmox, things with 4 or maybe 8 cores. I guess it’s up to each person to decide what a home lab should be and how much performance you need.
One thing I suggest: set up a pair of domain controllers, with a pair of DNS servers and a pair of DHCP servers. That’s fairly typical in the real world, and it brings some oddities that need to be monitored and managed with the sync between them.
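If it helps, the second DC and the DHCP pairing can be stood up with a few PowerShell cmdlets. A rough sketch, assuming a hypothetical domain `lab.local` with servers named `DC1` and `DC2` and an existing `10.0.0.0` scope (all of those names are placeholders):

```powershell
# On the second server: add the AD DS and DNS roles,
# then promote it as an additional domain controller
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSDomainController -DomainName "lab.local" -InstallDns `
    -Credential (Get-Credential)

# Pair the two DHCP servers in load-balance failover for an
# existing scope (run on DC1, after authorizing DHCP on both)
Add-DhcpServerv4Failover -Name "lab-failover" `
    -PartnerServer "DC2.lab.local" -ScopeId 10.0.0.0 `
    -LoadBalancePercent 50 -SharedSecret "ChangeMe"
```

AD replication between the two DCs happens on its own once the promotion finishes; the DHCP failover relationship is the piece you have to create explicitly, which is a good taste of the sync oddities mentioned above.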
I also suggest setting up another server for Windows Admin Center (free). There are still a bunch of things that don’t work completely, like the lack of GPO management, but you can get a lot done in WAC, especially if you have “headless” server installs. Though as I said before, you really should learn to handle these servers from the command line (PowerShell); I know it holds me back once in a while.
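The command-line workflow for a headless box boils down to PowerShell remoting. A minimal sketch, assuming a Server Core install named `CORE1` (a placeholder name):

```powershell
# Interactive remote shell on the headless server
Enter-PSSession -ComputerName CORE1 -Credential (Get-Credential)

# Or fire one-off commands without an interactive session
Invoke-Command -ComputerName CORE1 -ScriptBlock {
    Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5
}
```

In a domain, remoting is enabled on servers by default, so this works as soon as the box is joined; WAC is essentially a web front end over the same plumbing.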
And don’t get hung up thinking you need a 10 gigabit network to work with this stuff (if you add more XCP-NG hosts). I ran for a long time over gigabit to my TrueNAS server with 3 hosts, and it didn’t really have a big effect when I went to 10Gb; I only upgraded because I got a deal on a switch and already had the 10Gb cards. Some things were faster, but in a lab that isn’t really needed. The biggest issue I had on gigabit was when several Windows servers were doing updates at the same time (second Tuesday/Wednesday of the month): the path to the storage got a bit saturated and slowed things down, but that was really the only time it bothered me. Getting the 10Gb was more to mirror what I wanted for my production system, and even that I ran on mostly gigabit for almost a year while I swapped things over.
If you think you want to access your lab remotely, take a look at Kasm Workspaces (Kasmweb). It can provide web access to many things, and one of those is web-to-RDP if you install the server correctly. You can also load Remmina to RDP in. Drop in a Chrome (or other browser) workspace so you can get to XO or WAC, etc. It gives you one more layer for an attacker to fight through to get inside your network. And it plays DOOM.
Thank you Greg for all of your advice and knowledge, I really appreciate it. I have a Supermicro X10DRi-T with dual Xeon E5-2680 v4s. It still boggles my mind that all of that cost only $141.80. I’ve got 128GB of RAM and 36TB of storage. I’m running TrueNAS Scale as a VM (LSI 9300-16i via PCI passthrough), and Xen Orchestra on another Debian VM. Cheers.
If you can step up in the future, I would move TrueNAS to its own hardware.
Otherwise your lab should be fine, with way more storage than I have in either of my systems. For $150 I’d probably buy a few. I have two of the X10DRi boards; both are dedicated TrueNAS boxes, but both have much lower-end processors in them since I bought them for TrueNAS, and only 32GB of RAM.
One thing to watch out for on those X10DRi boards: I’ve had some problems with 10 gigabit cards in some of the slots. The cards would work fine in other systems, but would eventually fail in the NAS servers. I used to run 2 cards in each and aggregate them; not sure what the deal was, but it was certainly an issue, one that seems to have solved itself. Same with another X10 board in a smaller system: the 10Gbps card would work for a while, then quit, then work again. Finally it decided to stay working, and I now use that machine as backup storage for the production XCP-NG system, a little 4-drive 2TB system which is enough to push all my VMs over so I can work on the primary NAS. It was a recycled system that I’ve had for years, whose original function went away with changes in workflow.