Went all out on build, need suggestions with deployment/setup ideas

Greetings, new to the forum. Have had many different small NAS and server builds in the past. Decided to go big, why not:

  1. Fractal Design Node 804 Case
  2. Supermicro MBD-X11SCH-F-O Micro ATX Server Motherboard (8x SATA ports)
  3. Intel Xeon E-2146G Processor, 12M Cache, 3.5GHz, FC-LGA14C, MM974864 (6 cores)
  4. 4 x Crucial Server Memory 16GB DDR4 DIMM 288-pin - 2666 MHz / PC4-21300 - CL19-1.2 V - unbuffered - ECC (64GB total)
  5. Noctua NH-L9x65, Premium Low-Profile CPU Cooler (65mm, Brown)
  6. 8 x 8TB WD Red Pro NAS Internal Hard Drive - 7200 RPM Class, SATA 6 Gb/s, CMR, 256 MB Cache, 3.5" - WD8003FFBX (64TB total)
  7. Seasonic PRIME 850 Platinum SSR-850PD 850W 80+ Platinum ATX12V & EPS12V fully modular PSU (135mm FDB fan)
  8. 2 x WD_Black 250GB SN750 NVMe Internal Gaming SSD - Gen3 PCIe, M.2 2280, 3D NAND - WDS250G3X0C (one for boot and one for cache, or both for boot in RAID 1)
  9. Replaced all stock case fans as follows and plugged them into the MB (not the case's rigged 3-pin fan board, which I removed):
    (a) Noctua NF-F12 iPPC 3000 PWM, Heavy Duty Cooling Fan, 4-Pin, 3000 RPM (120mm, Black) for right front intake fan blowing over “drive side”
    (b) Noctua NF-F12 PWM, Premium Quiet Fan, 4-Pin (120mm, Brown) for left front intake fan blowing over “MB side”
    (c) Noctua NF-A14 PWM, Premium Quiet Fan, 4-Pin (140mm, Brown) for right rear exhaust (upper)
    (d) Noctua NF-F12 PWM, Premium Quiet Fan, 4-Pin (120mm, Brown) for left rear exhaust (lower)

As to the purpose of the box (2-3 users max): NAS (archival storage), Plex, several VMs, OpenVPN, Backup, PXE booting other machines.

Box is powering up and POSTing! Yay! Pics to come later, after adequate cable management and aesthetics.

Any hardware comments/critiques appreciated.

I am dumbfounded at all the BIOS options on this board, and have downloaded the 100+ page MB manual. (Suggestions on settings appreciated)

I intend to do link aggregation. I also have a 4TB Samsung SSD; I may purchase a PCIe adapter for it and use it for something (what?)
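On the link aggregation: TrueNAS sets this up in the web UI (and your switch ports have to be configured for LACP too), but underneath it's a FreeBSD lagg interface. A rough CLI sketch, with hypothetical NIC names and address:

```shell
# LACP bond of two NICs (igb0/igb1 are hypothetical - check `ifconfig`
# for your actual interface names).
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.10/24   # hypothetical address
```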

I know many may say I overdid it (especially on the PSU), but I wanted good quality and a machine I can hope to get long use out of.

I believe TrueNAS is the way to go here. I am a little concerned about the VMs, and wonder if I should run Proxmox first, then TrueNAS as a VM?

All comments and suggestions much appreciated. I’m a lot better at building than at setup. I’ve watched numerous YouTube vids.

Thanks, and I appreciate all constructive input. I do realize this all depends on preference and intended use: it’s me and a few others here at home with a lot of archival data, pics, docs, etc. There’s a small home biz in place as well. I would love to use a bunch of cheap machines and PXE boot them.
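On the PXE-booting idea: one common pattern is to leave DHCP on the router and run dnsmasq in proxy-DHCP mode to supply only the boot information, with the NAS serving TFTP. A minimal sketch, assuming a hypothetical subnet and TFTP path:

```
# /etc/dnsmasq.conf - proxy-DHCP PXE sketch (subnet and path are hypothetical)
dhcp-range=192.168.1.0,proxy    # answer PXE requests only; hand out no leases
dhcp-boot=pxelinux.0            # BIOS clients; UEFI clients need a different loader
enable-tftp
tftp-root=/mnt/tank/tftpboot
```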

Structuring the NAS will be important as well. Obviously ZFS; I’m thinking two separate pools of 4x8TB (32TB raw each). I have offsite storage via a reputable provider to get my 3-2-1 in place.
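If the offsite provider accepts ZFS streams (some do), the offsite leg of the 3-2-1 can be a snapshot-and-send pipeline; a rough sketch with hypothetical pool, dataset, snapshot, and host names:

```shell
# Take a snapshot, then send only the changes since the previous one
# to the offsite target over SSH.
zfs snapshot tank/archive@week52
zfs send -i tank/archive@week51 tank/archive@week52 | \
  ssh backup@offsite.example.com zfs receive -F backup/archive
```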

Just wondering about any opinions on the viability of the hardware setup, as well as how to proceed with a TrueNAS install. Never mind the nitpicky particulars: how would you configure this if it were yours?

Thanks in advance!

I personally do not really enjoy running VMs / Jails on FreeNAS/TrueNAS. TrueNAS is a fantastic NAS OS, however it isn’t my preference for virtualization.

My preference is storage on TrueNAS and a separate host for Proxmox to run VMs. I am by no means saying this is a better method, it is simply my preference.

I also prefer not to virtualize TrueNAS, but in this case I think I would consider running Proxmox with virtualized TrueNAS.

If I were in your shoes (I know you probably don’t want to hear this… but you did ask how I would configure it!) I would build a second server: one for TrueNAS, one for Proxmox. TrueNAS operates very well with consumer CPUs, so there is no need to get another beefy Xeon. I also don’t think ECC RAM is necessary, but it’s a controversial topic.

Also - NVMe is totally overkill for a TrueNAS boot device! I am running my TrueNAS server on 2x SATA 120GB PNY SSDs. They are nothing special!

Sorry to contribute what might not have been the answer you were looking for… but you did ask how we would do it, so I answered honestly.

GL with the new server!

I don’t care much for running VMs in TrueNAS, and I have talked many times on my channel about issues that may happen when you virtualize TrueNAS. It might all work; part of the fun is building it all and finding out. :slight_smile:


Thanks for the feedback, and I think you’re right. My original intention was to run a separate server for Proxmox, VMs, etc., and maybe do VLANs. I have pfSense running on a small appliance called a “Protectli”, and I actually have another box I ordered: a tiny Dell OptiPlex 5080 (Core i5, 32GB RAM, 250GB M.2 drive, and a 4TB SSD). I think I could do Proxmox on the Dell machine and TrueNAS bare metal on the NAS monster for best results. Oh, and as far as booting TrueNAS from NVMe: the 2 slots were available on the Supermicro board, and I got the 2 new WD Black M.2 NVMe drives for $50 each - cheap at the price, and fast.

That would be a solid solution: use the hardware you purchased for the TrueNAS server and the OptiPlex for Proxmox. Depending on what you run on it, an i5 OptiPlex can be a really fine Proxmox host.

In terms of your TrueNAS config - I’m no expert here, but I think I would do 4x vdevs, each a mirror of 2x of your 8TB drives. This would result in roughly 30TB of usable space. Is that sufficient for your needs? If (when!) your needs grow, you could add a DAS and expand the pool in small 2-disk vdevs.

Mirrored pairs are also optimal for IOPS performance.
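For reference, the CLI equivalent of that layout would be something like the following (device names are hypothetical; on TrueNAS you would build this in the web UI, and it uses gptid labels rather than raw da* devices):

```shell
# One pool made of four 2-way mirror vdevs.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Growing later is one mirrored pair at a time:
zpool add tank mirror da8 da9
```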

Thanks Charles - yeah, I am coming from NAS4Free (cough), so I’m going to have to learn what you mean by vdevs. And yes, 30TB is perfect. So the vdevs, I’m guessing, would be pairs of 8TB drives set up as mirrors, and then a single large pool across the 4 vdevs? This is stuff I should know. I have Dec 23-Jan 4 off to dig into documentation and setup. But does that sound about right?

I would do things differently:

Use both NVMe drives for boot, or buy two more slower SATA drives and a small controller card and use those for boot.

I would most likely put all 8 drives into a single RAIDZ vdev. You may lose a little speed this way, but I’m not seeing it on a similar mainboard with 10TB drives. This should still give you somewhere around 48TB (RAIDZ2) to 56TB (RAIDZ1) of raw space; I have around 64TB from my eight 10TB drives.
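For comparison with the mirrored-pairs suggestion above, the single-RAIDZ layout would look roughly like this on the CLI (hypothetical device names; RAIDZ2 shown, which survives any two drive failures):

```shell
# All eight 8TB drives in one RAIDZ2 vdev - two drives' worth of parity,
# so roughly 6 x 8TB of raw space. Device names are hypothetical.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
```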

There is a lot in the Supermicro BIOS, most of which you will probably never need to touch. I have had X7 up through very similar X11 boards; now only X9, X10, and X11 are in regular service, but the old X7 boards come out once in a while when needed for a no-money, short-term function. I also have a few Supermicro Intel Atom-powered servers; one of them goes back to an old D525 with a whopping 4GB of RAM. Good solid machines with nice open features, and drivers that have never been locked behind a paywall. The few times I’ve needed it, support has been really good.

@vikingred If you are interested in some good reading on how FreeNAS (ZFS pools) works, and the understanding behind vdevs and such - check out the forum post from Tom, as well as the docs he links to. I read this stuff through when I began with FreeNAS and it helped clear up a lot of confusion.

While I agree that RAIDZ (or RAIDZ2) would provide more space than a mirrored-pairs configuration, my reason for avoiding it is that it makes expanding the pool far more difficult. In a business environment (or maybe someone else’s home environment!) purchasing 8x 8TB drives at once is not a problem. I prefer not to spend that kind of money on storage all at once if I do not need it all right now. If I were in a different financial situation where 8x 8TB drives were no problem, then I would be inclined to agree: RAIDZ2 would be my preference, as it results in a higher percentage of usable space.
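For what it’s worth, the space tradeoff is easy to sanity-check with quick arithmetic (raw decimal TB, before ZFS overhead and the TB-to-TiB conversion):

```shell
# Usable raw capacity of 8 x 8TB drives under each layout.
DISKS=8
SIZE_TB=8
MIRRORS_USABLE=$(( DISKS / 2 * SIZE_TB ))    # four 2-way mirrors
RAIDZ1_USABLE=$(( (DISKS - 1) * SIZE_TB ))   # one disk of parity
RAIDZ2_USABLE=$(( (DISKS - 2) * SIZE_TB ))   # two disks of parity
echo "mirrors=${MIRRORS_USABLE}TB raidz1=${RAIDZ1_USABLE}TB raidz2=${RAIDZ2_USABLE}TB"
# -> mirrors=32TB raidz1=56TB raidz2=48TB
```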