Optimizing my NAS

Hi all,

I’m looking for some expert advice on how to optimize and get the best performance out of my NAS. I’ve been running my current setup for over 8 years now with only two HDD failures during that time—rock solid overall.

Current Setup:

Software: TrueNAS Core (Community Edition) running on a USB drive (plugged into the internal motherboard USB port)
Motherboard: Supermicro X9SCM-F-O
CPU: Xeon E3-1240 V2 @ 3.4 GHz
RAM: 32GB ECC Unbuffered
Storage: (5) 2TB WD Red (LZ4 compression) in RAIDZ2

Hardware I’m Considering for Optimization:

  • Mass Storage: (2) 10 TB HGST Ultrastar He10 HUH721010AL4200 (rough capacity math in the sketch after this list)
  • (1) SATA SSD for boot drive
  • (1) SATA SSD for apps and plugins (will consider an NVMe via PCIe adapter instead)
  • (1) SATA SSD for cache (or could this share the same SSD as the apps and plugins?)
  • (1) LSI 9211-8i HBA in IT mode (to support future drive expansion)
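For reference, here is a quick back-of-the-envelope comparison of usable space, assuming the two 10 TB drives would end up in a plain two-way mirror (just an assumption about the layout) and ignoring ZFS metadata/slop overhead and the usual advice to keep pools under ~80% full:

```python
# Rough usable-space math only; real numbers will be lower once ZFS metadata,
# slop space, and the ~80% fill guideline are taken into account.

def raidz_usable_tb(disks, disk_tb, parity):
    """Approximate usable space of one RAIDZ vdev: (disks - parity) * disk size."""
    return (disks - parity) * disk_tb

def mirror_usable_tb(disk_tb):
    """A two-way mirror vdev only yields one disk's worth of space."""
    return disk_tb

print(f"Current 5x 2 TB RAIDZ2 pool: ~{raidz_usable_tb(5, 2, 2)} TB usable")  # ~6 TB
print(f"Proposed 2x 10 TB mirror:    ~{mirror_usable_tb(10)} TB usable")      # ~10 TB
```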

I’d love to hear your thoughts—am I heading in the right direction with these upgrades? Any suggestions for better performance, reliability, or future-proofing would be much appreciated.

It comes down to what you are trying to achieve. What goal or workload are you expecting? It’s hard to advise on optimizations without knowing what you are trying to do.

+1 to what Maximus said. But in addition, I would probably look at CPU and mobo upgrades before I got too wrapped up in new drives. You are running a 13-year-old processor that runs on DDR3 memory, with a 69 watt TDP. You could upgrade to a much newer processor that would have better performance and a 15 watt TDP, in addition to DDR4/DDR5 memory. One example (and it’s just an example, not a recommendation) is the Pentium Gold 8505, with roughly 50% more single-thread performance. All of which is to say I think you would get a lot more performance bang for your buck with more modern hardware than with newer drives. I really think 64GB of DDR5 memory would speed you up a lot more than an SSD for cache, for instance.
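To put a rough number on the power side (a sketch only: TDP is a thermal ceiling, not average draw, a NAS sits near idle most of the day, and the electricity price here is just an assumed placeholder):

```python
# Upper-bound estimate of the annual power-cost difference from the TDP figures
# above. Actual savings depend on idle draw, which is what a NAS mostly sees.

OLD_TDP_W = 69        # Xeon E3-1240 V2
NEW_TDP_W = 15        # a modern low-power part, e.g. a Pentium Gold 8505 class CPU
PRICE_PER_KWH = 0.15  # assumed electricity price in $/kWh -- adjust for your area

kwh_per_year = (OLD_TDP_W - NEW_TDP_W) * 24 * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * PRICE_PER_KWH:.0f}/year "
      f"at ${PRICE_PER_KWH}/kWh")
# -> ~473 kWh/year, roughly $71/year
```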

Regarding your drive choices, just some observations:

  1. I think I would prefer more than two main storage drives.
  2. I personally would prefer mirrored boot drives. If one drive fails you are still online.
  3. SATA SSD for cache? I would rather see you go with NVMe and get some real speed.
  4. Whether you go with SATA or NVMe drives, you really want enterprise drives, with power-loss protection and a higher overall TBW. ZFS write amplification is a real thing (rough math in the sketch after this list).
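On point 4, here is the rough endurance math. The daily write rate and the 3x write-amplification factor are made-up illustration numbers, and the TBW ratings are only ballpark consumer vs. enterprise figures, not specs of any particular drive:

```python
# Years until a drive's rated TBW is used up, given host writes per day and an
# effective write-amplification factor. All inputs are illustrative only.

def years_of_life(tbw_rating_tb, host_writes_gb_per_day, write_amp):
    nand_writes_tb_per_year = host_writes_gb_per_day * write_amp * 365 / 1000
    return tbw_rating_tb / nand_writes_tb_per_year

for label, tbw in [("~600 TBW (typical consumer)", 600),
                   ("~3500 TBW (typical enterprise)", 3500)]:
    print(f"{label}: ~{years_of_life(tbw, host_writes_gb_per_day=200, write_amp=3):.0f} years")
# -> ~3 years vs ~16 years at 200 GB/day of host writes and 3x amplification
```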

xMAXIMUSx thanks for your reply. Thanks for pointing that out, I should have included that in my original post. I have been using this NAS setup for 8+ years for streaming movies, home movies, and pictures, and for storing files (Word documents, PDFs, etc.).

With my proposed upgrades I intend to keep using it as before, but this time I would like to have two pools of drives. One pool will be dedicated solely to storing files from Channels DVR and Nextcloud. I would like to be able to access these files remotely and securely without compromising my other pool, so I will be looking into setting up Tailscale. The other pool I will use for personal files and for streaming movies and home movies locally.

Hope this gives you a good idea of what I intend to use this NAS for. I am not sure if I am going about this the right way, but I will appreciate any advice on how to better approach it. I am definitely not an expert on this, so I come here for advice and also enjoy watching Tom’s tutorials to help me understand a little better what I am doing. I also appreciate Louie1961’s advice on newer hardware with more efficient power consumption; I will definitely consider it when putting together a totally new system in the future. Thanks again for taking the time to read and reply to my post. I really appreciate it. Have a good week.

Hi Louie1961, thanks for your reply and advice on newer hardware for more efficient power consumption and performance. I will definitely be looking into putting together a new system, probably in a year or two. For now I would like to breathe a little life into my old system for just a little longer. What are you running now? I think I would still like a system where I could use ECC RAM. I know that is not necessary these days, but I would like to still go that route. Hope this makes sense to you. Have a good week!

I have three NAS setups currently. The first is a Synology DS1621+ with 32GB of ECC memory, spinning disks, and a 10Gb NIC. The second is a virtualized instance of TrueNAS Scale with all SATA SSDs; that runs on my Proxmox server, which has a Ryzen 5 Pro 5650GE CPU and 64GB of ECC memory. The third is a virtualized instance of OpenMediaVault on another Proxmox node (an N100 machine) that serves as the backup destination for the first two NAS boxes. The Synology is where all the important family photos and documents are stored, and where my wife does video editing. My TrueNAS instance is mainly for Proxmox backups, Docker volumes, NFS shares for my VMs, and Kubernetes storage.

That is some serious setup you got going there! Definitely impressive. I have so many questions now based on your reply, but I think those questions might be a topic for a different post. In the meantime, I think I am going to research a little more on your original reply and xMAXIMUSx’s recommendations to see where that takes me. Most likely I will come back to this post to give an update on my system’s updates. Thanks for your help! Have a good week.


As others already replied: maybe using more modern hardware is the way to go.

However, I wanted to mention another option when using a Supermicro 1 RU server:
You can sometimes add two more SFF drive sleds for 2.5" SSDs above the 4x LFF 3.5" drives. Maybe one of those bays is already occupied by a CD-ROM drive.

I ordered mine from here:

I’m sure you can find those parts by their part numbers from other places.

Additionally, you can use SATA DOMs from Supermicro as boot disks. Some may argue that those fail frequently, but I have had no issues in the past 3 years or so… (though I’m not using swap on those drives, so maybe that’s the trick).

With these addons and an additional PCIe card I run my Proxmox cluster on 3x Supermicro 1 RU X10 servers with 2x 18 TB HDD, 3x 4 TB SSD, 1x 2 TB SSD, 2x 2 TB NVMe and 2x 128 GB SATA DOMs in each node. So, I think I’ve maxed out those servers pretty well…

But coming back to the recommendation of a new mainboard/CPU: at the very least, a newer mainboard would also provide you with an HTML5-based IPMI, which is a real benefit over the old crappy Java-based IPMI console of the X9 servers.

Funny, I don’t think of it as being all that impressive. I don’t have more than a couple of TB of data. It’s mostly what I put together just experimenting in my home lab and buying some used enterprise SSDs off of eBay. I see some of these people with 45Drives HL15 machines (or other rack-mounted monsters) and think that is super impressive.