TrueNAS Scale Pool Configuration

I have a second TrueNAS Scale machine I’m setting up. It has 10 4TB drives and two NVMe drives for the OS.
I’ve read all the material on pool layouts and find it rather confusing, at least for me. FYI: this is in a home lab.

I don’t mind giving up some storage space in exchange for some performance gains.

The two TrueNAS machines will be backing up locally to each other.

Can someone give me a suggested layout for a balance of storage and performance?

Thanks

I set up my NAS with one share (//nas/share) and my second NAS (//nas2/share2).

/usr/bin/rsync -a --delete //nas/share/ //nas2/share2
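
A note on that command: as written, both //nas/share/ and //nas2/share2 are treated as local paths, so it only works if both shares are mounted on the machine running rsync. If the second NAS is reachable over SSH, the remote-path form skips the mounts entirely; a minimal sketch, where the nas2 hostname, the user, and the /mnt/… dataset paths are assumptions for illustration:

# push the local dataset to nas2 over SSH (hostname and paths are hypothetical)
/usr/bin/rsync -a --delete /mnt/tank/share/ root@nas2:/mnt/tank2/share2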

Up to you if you want many pools and datasets. TrueNAS has pretty good documentation.

Thanks pavlos,

I know how to share between the two NASes.

What I’m looking for is: which pool layout would give a balanced approach to storage size and performance?

The age-old question of performance vs. size/redundancy. If you want the best performance, it’s best to go with RAID 10 (striped mirrors). If you want the best redundancy, go with RAIDZ2. Here is a calculator that will give you a rough estimate for the different configurations you might use.
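
To make the trade-off concrete, here is a minimal sketch of both layouts for the ten 4TB disks; the pool name tank and the sda..sdj device names are assumptions (on TrueNAS you would normally build this in the web UI rather than at the CLI):

# RAID 10 style: five 2-way mirrors striped together; ~20TB raw usable,
# survives one failed disk per mirror, fastest for random I/O
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj

# RAIDZ2: one 10-wide vdev; ~32TB raw usable, survives any two failed disks
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj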

EDIT:
This was the calculator I was trying to post.

Thanks xMAXIMUSx,

I’ll take a look at that.

I’m a scientist and I absolutely overthink EVERYTHING.

I’m an infrastructure engineer, so I understand completely.


If you want the machines to back up to each other, I highly recommend using ZFS Replication.
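
For anyone new to it: TrueNAS exposes this as Replication Tasks in the web UI, and under the hood it is snapshot-based zfs send/receive. A minimal CLI sketch of one full cycle plus an incremental follow-up, where the tank/share dataset and the nas2 hostname are assumptions:

# first run: snapshot, then send the whole dataset to the second machine
zfs snapshot tank/share@backup-1
zfs send tank/share@backup-1 | ssh root@nas2 zfs receive -F tank2/share-backup

# later runs only send blocks changed since the previous snapshot
zfs snapshot tank/share@backup-2
zfs send -i tank/share@backup-1 tank/share@backup-2 | ssh root@nas2 zfs receive tank2/share-backup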

Tom,

I was thinking of using SyncThing.

The thought would be to create two machines that would effectively mirror each other.

If I understand it right, in the case that NAS1 went down I could continue using NAS2 without having to jump through too many hoops while I was repairing NAS1?

Not sure if this applies to Scale, but the old recommendation was 8 drives per group, then stripe those groups together. For your system, I would think 5 drives per group (vdev) and then stripe them to get more speed back; see the sketch below. RAIDZ (similar to RAID 5) or RAIDZ2 (similar to RAID 6) is what I would use, otherwise you are going to lose a lot of storage. For a lab system, maybe not a big deal, and it is worth playing with different configurations to find the one that works best for what you are doing.
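
A minimal sketch of that suggestion for the ten-disk box, two 5-wide vdevs striped into one pool; the pool and device names are again placeholders:

# two 5-wide RAIDZ2 vdevs: ~24TB raw usable, two failures tolerated per vdev,
# and writes stripe across both vdevs for extra throughput
zpool create tank raidz2 sda sdb sdc sdd sde raidz2 sdf sdg sdh sdi sdj
# the RAIDZ1 version of the same shape gives ~32TB raw but only one failure per vdev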

And if you are testing for production, think about what you are going to need or have available to put this into production when you get some/most of the bugs worked out. Budget always seems to get in the way right now, at least where I work.

Thanks Greg_E

I decided to use 12 drives and make two 6-drive vdevs. Do you think that would be the best (best is a very subjective word) layout?

The OS is on the NVMe drives and the system has 32GB of memory.

Do you think I should add any of these (SLOG/ZIL/L2ARC)?

Thanks

Tom has a good video to help you make those decisions :)


I have never had the money for good cache drives, so for me, no, they are not useful. You can get a lot of performance upgrade by installing more RAM; it will be used as cache (ARC) and may be cheaper than the really fast cache drives. A log device (SLOG) would be safer if the power goes out suddenly because (I might be wrong here) it doesn’t get cleared until the system knows the data is on the main drives. Part of that whole copy-on-write that we like so much. I think Tom might have done a video on resurrecting the cache drive data on a crashed system; if not him, then someone else, because I vaguely remember it.
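
If you do want to experiment later, both device types can be attached to an existing pool. A minimal sketch, where tank and the NVMe device names are hypothetical; these would need to be dedicated devices, not the OS NVMe drives, and an L2ARC only starts to help once RAM/ARC is already well used:

# add a dedicated log device (SLOG) for sync writes, and an L2ARC read cache
zpool add tank log nvme0n1
zpool add tank cache nvme1n1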

Yes, from what I’ve read, the biggest performance gain I could get is by adding more RAM before I do anything else. Unfortunately it has four 8GB RAM sticks in it, so I can’t just add more memory; I’ll have to remove and replace, which will cost more. I may have to wait a bit for that.

If it is old enough to take DDR3 RDIMMs, I was surprised at how cheap I could get RAM; I grabbed 16x8GB for $45 shipped very recently, decent Samsung-branded stuff. 16GB modules were not a lot more in some of the listings on eBay.