TrueNAS build for VM Storage

Good morning,

I was looking to get some advice on building a TrueNAS system that will provide storage for my Proxmox VMs.
Currently I have a TrueNAS system that I built from old hardware: an i7, 32 GB of RAM, and 5 enterprise HDDs in an old PC case I had.
I’m looking to build something more performant as Proxmox storage for the 15 VMs in my homelab.
I would like to build something that can be rack mounted, as I’m also trying to clean up my homelab at the same time, and if possible something that isn’t power hungry.
Where would be the best place to start: buying a used chassis/motherboard combo somewhere, or going with a newer processor that would be better on power consumption?
Based on my current usage, I don’t think I need much processing power, but I was looking for advice on how much memory I should throw at it. As far as drives, I was thinking of putting four Samsung 870 EVOs in it, and having it expandable to 8.
Any advice to help me best understand how to build a TrueNAS system for VM storage, and what hardware I should be looking at, would be great.
Thank you!

If you want quiet and not a lot of power draw, a TrueNAS Mini might be a very good way forward. After that, something with a Xeon E3 or a lower-power E5 (or an i3 or i5). This assumes nothing but storage running on the machine.

I do not think I would run this on a J4125 or N5105, or a newer N6xxx; I think an i3 is the lowest I would go.

32 GB or 64 GB would be good; more RAM means a larger ZFS ARC, which means better cache performance.
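If you want to sanity-check how well the cache is doing once it’s running, OpenZFS ships a couple of read-only stat tools you can run from the TrueNAS shell (exact column names vary a little between releases):

```
# ARC size and hit rate, refreshed every 5 seconds (Ctrl-C to stop)
arcstat 5

# One-shot summary: hit ratio, current vs. target ARC size, etc.
arc_summary | head -40
```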

I would probably put 8 smaller drives in it rather than 4 larger drives. I haven’t tested my 4-drive lab to its limit yet, but I know it will not be as fast as 8 drives. That said, my lab is running SATA 2 at a whopping 3 Gb/s on spinning drives. It does OK with 3 or 4 Windows VMs, right up until they all try to do an update at the same time. But everything in my lab is slow; it’s a lab and I can wait for it.
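With 8 drives for VM storage, the usual advice is striped mirrors rather than RAIDZ, since IOPS scale with the number of vdevs. Just as a sketch of the layout (device names are placeholders, and in practice you’d build the pool in the TrueNAS web UI rather than at the shell):

```
# Illustrative only: four 2-way mirrors striped into one pool.
# da0..da7 are placeholder device names - check yours first with
# "lsblk" (SCALE) or "camcontrol devlist" (CORE).
zpool create vmpool \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Usable space is half of raw: 8 x 1 TB is roughly 4 TB usable.
zpool status vmpool
```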

Also don’t forget to leave room for a pair of small SSDs for the system drives; the installer will automatically mirror the pair. For a lab this may not be required, and a single system drive would be OK. You can use USB2, but from experience do not use USB3.x drives for the system: even with the little read/write that goes on, they will overheat and shut down. Been there, did that; I had to change back to USB2, and I think I had to do a clean install and restore the config to fix it.
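Once it’s installed, you can confirm both halves of the boot mirror are healthy from the shell (on recent releases the boot pool is named boot-pool; older builds called it freenas-boot):

```
# Verify both boot SSDs show ONLINE in the mirror
zpool status boot-pool
```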

Thank you Greg for your reply.

The TrueNAS Mini is definitely out of my price range.

You are not the first to make the comment about using 8 smaller drives. It seems like two 1 TB drives cost about the same as one 2 TB drive.

I saw this on Ebay, that seemed to tick some boxes for me. (>8 2.5" bays, SFP+)
2U 16 Bay SFF Server X9DRW-iTPF+ Xeon 16 Cores 64GB 2x10G SFP+ RAID 4x PCI-e x8 $399

I was going to post a link to it directly, but it didn’t seem to format correctly. This is my second post on this forum; maybe it formats after I hit reply, or I’m just doing something wrong.

It seems a bit overkill for what I’m looking for, but the servers I saw with 10 bays lacked SFP+, and if I added an HBA, I don’t think there was room for a 10 Gb SFP+ NIC as well.

The X9 is getting a bit old, but it will work (I have a bunch I just retired). Dell R610s (R710s, etc.) will also work, as will HP Gen8 or Gen9, but be aware that HP firmware is locked behind a paywall; Gen10 and newer downloads are freely available. If you search enough you can find what you need for the older HP drive controllers and servers (BIOS and BMC). I have an HP DL360e Gen8 and bought an HP P420 controller (a P420i would be better) to get my lab storage working. I also bought used 500 GB drives that have been good enough. This is a SATA 2 system, so 3 Gb/s maximum, and it’s OK: 4 drives for VM storage, 4 drives for ISO storage. Yes, I’d do things differently the next time I tear my lab apart; all 8 drives would go into a single pool.

If you buy that same HP server, note that you need a half-height card for a 10 Gb NIC, and you only have an x8 slot; it might only run at x4 (I’d have to check). The drive controller takes the larger slot. The onboard controller does not allow JBOD. I wired in an extra power connector and used one of the onboard ports for the system drive, with all 8 from the card for storage. It’s a hodge-podge way of doing things, but it works fine and I don’t mind soldering.
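If you want to confirm what the slot actually negotiates instead of guessing, you can read it off the card. This assumes TrueNAS SCALE (Linux); on CORE the FreeBSD equivalent is pciconf -lc:

```
# Find the NIC's PCI address first
lspci | grep -i ethernet

# Compare what the card supports (LnkCap) with what the slot actually
# negotiated (LnkSta) - e.g. an x8 card running at x4.
# "04:00.0" is a placeholder address from the step above.
lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'
```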