I currently have a TrueNAS server running on hardware from 2013. It serves as archive storage with six 2TB HDDs. I also have Jellyfin running on it.
I’m looking to build a new server around the AMD Ryzen 7 PRO 4750GE with 64GB of ECC memory. I’m hoping for power efficiency, but this time around I also need to use it as shared storage for my two XCP-ng servers.
I think the iGPU should be fine for Jellyfin transcoding.
I was also thinking about using a PCIe card that holds four M.2 SSDs for fast shared storage, and keeping my six 2TB HDDs for slow archival storage.
So I believe I’ll need a motherboard that supports PCIe bifurcation to do this?
I will also need another slot for my 10Gb SFP+ card.
Anyone else here doing a similar build and can offer advice?
I don’t really keep up with hardware that much, but if no one here answers you might want to post in their forums, as there are way more build discussions there: Topics tagged hardware
ASUS is going to be the play here. I am not aware of too many other AM4 boards that support 4x4x4x4 bifurcation. Here is a chart of compatible boards; most of them are gaming boards and are not going to be very low power. https://www.asus.com/support/faq/1037507/
But having said that, I do have a couple of points for you to consider.

First, why go with a 4750GE? It’s not too difficult to source the newer 5750G and GE chips on eBay.

Second, for a storage server, the Ryzen 7 PRO seems a little like overkill to me. I built a server last fall based on a Ryzen 5 PRO 5650GE, and I never go above 5-10% overall CPU utilization running Proxmox, 5 VMs, and a virtualized TrueNAS instance. If you are only putting 64GB of memory into it, you will run out of memory long before you run out of CPU cycles.

Finally, depending on what OS you go with, consumer M.2 NVMe drives may not be a good choice for a storage server. Anything with ZFS will chew those up pretty quickly. You would probably be better off using used enterprise SATA SSDs in mirrored VDEVs. I have 3 mirrored VDEVs in my TrueNAS box, and for sequential reads and writes it comes awfully close to saturating my 10GbE NIC. Even if you sourced high-endurance enterprise NVMe drives, your 10GbE NIC would be the bottleneck; NVMe drives are much faster than a 10-gig network can handle.
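For reference, a three-mirror layout like the one described can be sketched with `zpool` directly (pool name and device names are placeholders; TrueNAS would normally build this through its UI):

```shell
# Create a pool of three mirrored VDEVs (ZFS's rough equivalent of RAID 10).
# /dev/sd[b-g] are placeholder device names -- substitute your actual SSDs.
zpool create fastpool \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg

# Verify the layout and pool health.
zpool status fastpool
```

Each mirror contributes its own IOPS, so three mirrors stripe reads and writes across all three VDEVs at the cost of 50% usable capacity.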
I picked it mainly for the PRO’s ability to do ECC, and the power efficiency.
It does seem like the number of PCIe lanes may be an issue too if I go with NVMe, so I might have to rethink my CPU choice.
Some of the boxes I want to check:
ECC memory support (128GB minimum, but I want to see how it does on 64 to start)
Needs to easily provide shared storage to my VMs, as well as my docker containers.
Would like for it to be power efficient, 60-100W will be fine, doesn’t have to be very low. My old server was using about 150W, and I would like to make a noticeable dent in that if possible.
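For the ECC checkbox, once the board is assembled you can confirm ECC is actually active from Linux (assuming `dmidecode` and the kernel’s EDAC support are available on the system):

```shell
# Check what the firmware reports for memory error correction.
sudo dmidecode -t memory | grep -i 'error correction'
# Expect "Multi-bit ECC" or "Single-bit ECC" rather than "None".

# Check that the kernel's EDAC driver registered a memory controller.
ls /sys/devices/system/edac/mc/
# A populated mc0 directory means corrected-error counters are being tracked.
```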
I have to look at enterprise SSDs. Maybe just using some of those in my pool will give me the best of both worlds, I really just have to check to see if they fit in my budget.
Well, this is what I built. I based the motherboard on the one the 45Drives people put into their HL4/HL8 servers, and the same for the M.2-to-SATA adapter:
Gigabyte B550I Aorus Pro AX
AMD Ryzen 5 PRO 5650GE
Nemix RAM 2x32GB DDR4-3200 ECC Unbuffered
M.2 to SATA 6 port adapter, ASM1166
Noctua NH-L12Sx77 CPU fan
Corsair RM Series RM650
Fractal Design Node 304
10Gtek 10Gb PCI-E NIC
Cost me about $900 all in. I run Proxmox as the OS and as I said I have TrueNAS running in a VM. I have 10 SSDs stuffed into this box. For the Proxmox OS, I used the 4 motherboard SATA ports, and I have four 1TB Samsung SM863 drives in two mirrored VDEVs. This layout sort of approximates a RAID 10. I get redundancy, and a bit more speed from this set up than I would with just one mirrored VDEV. It yields 2TB of usable storage which is more than enough for the OS and all of my VMs.
I do PCI passthrough of the ASM1166 controller to TrueNAS, giving TrueNAS 6 drives. I have those arranged in 3 mirrored VDEVs. I don’t have a ton of space because this is my secondary NAS: I used 2TB drives and have around 5.5TB of usable space. I mostly use it for Docker volumes or NFS shares to my VMs and my K3s cluster. All my important documents, pictures, etc., are on my Synology, so my wife can access them as well.
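Passing the ASM1166 through to a TrueNAS VM on Proxmox looks roughly like this (the PCI address and VM ID below are placeholders; IOMMU must already be enabled in the BIOS and on the kernel command line):

```shell
# Find the PCI address of the ASM1166 SATA controller.
lspci -nn | grep -i 'SATA'

# Assuming it shows up at 0000:04:00.0 and the TrueNAS VM is ID 100,
# attach the whole controller to the VM:
qm set 100 --hostpci0 0000:04:00.0
```

Once passed through, the drives on that controller disappear from the Proxmox host and appear natively inside TrueNAS, which is what lets ZFS manage them directly.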
My setup idles at 20 watts without the drives and at 40 watts fully loaded. I have three NVMe drives in the machine being used by TrueNAS as cache/SLOG. I don’t find they really help much for my setup, though; I may do something else with them.
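If you suspect the cache/SLOG devices aren’t earning their keep, ZFS can tell you (pool name is a placeholder):

```shell
# Per-VDEV I/O, including the cache and log devices, sampled every 5 seconds.
zpool iostat -v tank 5

# ARC/L2ARC hit rates; a low L2ARC hit rate means the cache device adds little.
arc_summary | less

# Removing a cache or log device is non-destructive if it turns out to be idle:
zpool remove tank <device>
```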
One word of caution: the Noctua CPU cooler I chose doesn’t quite fit without some case modifications. The drive caddies will not fit with that cooler installed; I had to modify them on my bench grinder to get them to fit.
Do you have any issues with performance with your 3 mirrored VDEVs? Mine might be similar, so that looks like a good way to go. Would NVMe in 2 mirrored VDEVs be overkill? (Or just not as good, because I would be using normal desktop NVMe.)
Do you have any issues with performance with your usage of the 3 mirrored vdevs?
Issues? No, none at all. Mirrored VDEVs are best practice for ZFS performance. I had a drive die but didn’t lose any data. NVMe drives are overkill anytime you are serving them up over the network, unless you have a crazy-fast 100GbE network or something like that. For anything up to 10GbE they are just WAY overkill. Using them within a server as direct-attached storage is a different situation.
So based on some recommendations here, and on the TrueNAS forum as well, I’m thinking about going with the following setup:
ASRock Rack E3C246D4U2-2T
Intel Xeon E-2224G CPU
64GB DDR4/2666 Memory (to start, may bump up to 128GB)
Samsung MZ-7KM9600 1TB 2.5" SATA enterprise SSD
ASM1166 in the M.2
I was going to use its six SATA ports to connect my HDDs.
Using the M.2 means I can only use one more onboard SATA port for the OS. Do you think it’s best to just use that one port for the OS drive and have the 6 remaining SSDs in 3 mirrored VDEVs as my “fast” pool, or have 2 mirrored VDEVs as the pool and use a mirror for the OS?
Please let me know if you have any criticisms of the build so far, any comments would be greatly appreciated!
LGA 1151? That’s not much newer than the hardware you have now (that socket was released in 2015), and that CPU is from 2019. I kind of feel like the CPU I have is pretty old, having been released in 2021. I mean, you can get an Intel i3-12300, have 60% more computing power (depending on the benchmark you look at), and 15% less TDP. If you go for a 12300T or a 12500T, you will be at half the TDP.
I did start heading toward the 12th-generation i3 CPUs and was going to go with one of those, but I saw recent posts saying the Linux kernel used in TrueNAS didn’t support ECC for them yet. Support was supposedly coming soon, but it made me not really want to go in that direction.
When I looped back around to Xeons, I started with this one, which looks like it was released in 2019.
But you do bring up a good point; I’ll definitely take a look at the 12th-gen Xeon counterpart and see how the pricing lines up.
EDIT:
I could go with the Intel Xeon E-2324G, and I think the ASUS P12R-M (LGA 1200) motherboard was the option to pair with it.
I really do like where your head is at. If I’m going to be upgrading, I should go for the best bang for the buck, not the cheapest route that turns out to be barely an upgrade at all.
I think a lot of folks, especially on the TrueNAS forums are overly enamored with legacy enterprise gear. I am not sure that is the best route for a home lab.
Oh, and I believe this has nothing to do with the kernel; it’s about the motherboard chipset. If the chipset/motherboard doesn’t support ECC, then there is nothing the Linux kernel can do to enable it. This board will support ECC with any 12th-, 13th-, or 14th-generation i3 or i5 CPU, as an example.