FreeNAS / TrueNAS ZFS Pools: RAIDZ, RAIDZ2, RAIDZ3 Capacity, Integrity, and Performance

When setting up ZFS pools, performance, capacity, and data integrity all have to be balanced based on your needs and budget. It’s not an easy decision to make, so I wanted to post some references here to help you make a more informed one.

                       capacity
                          /\
                         /  \
                        /    \
                       /      \ 
          performance /________\ integrity

These are the metrics to consider when deciding how to lay out your drives in FreeNAS:

  • Read I/O operations per second (IOPS)
  • Write IOPS
  • Streaming read speed
  • Streaming write speed
  • Storage space efficiency (usable capacity after parity versus total raw capacity)
  • Fault tolerance (maximum number of drives that can fail before data loss)

Do you need more storage? More speed? More fault tolerance? How you lay out your drives makes a dramatic difference in these numbers, which is why deciding on a layout is the first step in your plan.

In ZFS, drives are logically grouped together into one or more vdevs. Each vdev can combine physical drives in a number of different RAIDZ configurations. If you have multiple vdevs, the pool data is striped across all the vdevs.

Here are the basics for calculating RAIDZ capacity and fault tolerance. The term parity disks refers to the parity level (1 for Z1, 2 for Z2, and 3 for Z3; we’ll call the parity level p), and data disks refers to the vdev width (the number of disks in the vdev, which we’ll call N) minus p, i.e. N − p. The effective storage space in a RAIDZ vdev is equal to the capacity of a single disk times the number of data disks in the vdev. If you’re using mismatched disk sizes, it’s the size of the smallest disk times the number of data disks. Fault tolerance per vdev is equal to the parity level of that vdev.
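
To make that arithmetic concrete, here is a minimal Python sketch of the formula above; the function name and example numbers are just for illustration, and it ignores ZFS metadata and formatting overhead:

```python
# Minimal sketch of the RAIDZ capacity / fault-tolerance math above.

def raidz_vdev(disk_sizes_tb, parity_level):
    """Return (usable_tb, fault_tolerance) for one RAIDZ vdev.

    disk_sizes_tb: capacities of the member disks in TB
    parity_level:  p = 1 for RAIDZ1, 2 for RAIDZ2, 3 for RAIDZ3
    """
    n = len(disk_sizes_tb)              # vdev width N
    data_disks = n - parity_level       # N - p
    smallest = min(disk_sizes_tb)       # mismatched sizes: smallest disk wins
    return smallest * data_disks, parity_level

# Example: a 6-wide RAIDZ2 vdev of 8TB disks
usable, tolerance = raidz_vdev([8] * 6, parity_level=2)
print(f"{usable} TB usable, survives {tolerance} disk failures")
# -> 32 TB usable, survives 2 disk failures
```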


TL;DR: Choose a RAID-Z type based on your IOPS needs and the amount of space you are willing to devote to parity information. If you need more IOPS, use fewer disks per stripe. If you need more usable space, use more disks per stripe.
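
To illustrate that tradeoff, here is a rough sketch. It assumes the common rule of thumb that a pool’s random IOPS scales with the number of vdevs (roughly one disk’s worth of IOPS per vdev), and the 250 IOPS per disk figure is just a placeholder:

```python
# Tradeoff sketch: the same 24 x 8TB disks split into RAIDZ2 vdevs of
# different widths. Assumes pool random IOPS ~ one disk's IOPS per vdev;
# 250 IOPS/disk is an illustrative guess, not a measured number.

DISKS, DISK_TB, PARITY, DISK_IOPS = 24, 8, 2, 250

for width in (4, 6, 8, 12):                 # disks per RAIDZ2 vdev
    vdevs = DISKS // width
    usable_tb = vdevs * (width - PARITY) * DISK_TB
    iops = vdevs * DISK_IOPS
    print(f"{vdevs} x {width}-wide RAIDZ2: {usable_tb} TB usable, ~{iops} IOPS")

# 6 x 4-wide RAIDZ2:   96 TB usable, ~1500 IOPS
# 4 x 6-wide RAIDZ2:  128 TB usable, ~1000 IOPS
# 3 x 8-wide RAIDZ2:  144 TB usable,  ~750 IOPS
# 2 x 12-wide RAIDZ2: 160 TB usable,  ~500 IOPS
```

Narrower vdevs buy IOPS at the cost of parity overhead; wider vdevs do the opposite.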

For those of you who want to dive deeper into this topic, here are some great write-ups that really helped me understand it much better.

Six Metrics for Measuring ZFS Pool Performance Part 1

Six Metrics for Measuring ZFS Pool Performance Part 2

ZFS Storage Pool Layout white paper (PDF)
https://static.ixsystems.co/uploads/2018/10/ZFS_Storage_Pool_Layout_White_Paper_WEB.pdf

The ZFS ZIL and SLOG Demystified

ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ

ZFS Raidz Performance, Capacity and Integrity
https://calomel.org/zfs_raid_speed_capacity.html

ZFS Record Sizes for Different Workloads
https://jrs-s.net/2019/04/03/on-zfs-recordsize/

Excellent article about the cache vdev or L2ARC

Synchronous vs Asynchronous Writes and the ZIL

Choosing The Right ZFS Pool Layout

Nice article from Ars Technica comparing ZFS to the more traditional RAID6 & RAID10 setups


Hi, thanks for your videos, love them!

Hopefully you can advise me on something vdev-related 🙂

I run a small film production studio, and currently we’ve been using a couple of Synology NAS systems (5-, 8-, and 12-bay units, all running RAID 5, plus backups). We currently have roughly 244TB (x2) of video material.

The main function of the Synology units at the moment is file sharing / backups, which works completely fine. But I want to switch to something more powerful like unRaid / FreeNAS for these reasons:

  • Expansion options
  • More power: CPU and graphics passthrough (that’s why I think unRaid is a good option)
  • Virtualization / render system (thinking about a Ryzen or even an Epyc processor + a beefy GPU)
  • Faster network speeds (I currently have 10GbE, which for now is good)
  • More spinning disks / more data / more bandwidth (right?!)

My question: let’s say I invest in a 24-bay chassis (like a NetApp DS4224). What is the best file system if you want to gradually expand your storage 1 or 2 disks at a time, given that I can’t afford to buy 24 x 14TB drives (x2, including backup!) at this point?

I would love to take a closer look at FreeNAS or unRaid + ZFS (with 1 or 2 disk redundancy), but I understand you can’t add more disks to a vdev that already exists, so in order to expand your capacity you always need a minimum of 3 disks (right?!)…

Hope this makes sense :slight_smile:

I prefer ZFS, but you are correct about the issue of just wanting to expand as needed. I don’t use unRaid, but as I understand it, it does offer good support for passthrough and virtualization. I also think unRaid will let you expand as needed (https://wiki.unraid.net/Add_One_or_More_New_Data_Drives), so that might be what works best for you.

Thanks for the quick reply!

  • Let’s say I want to add another vdev (in Z1 or Z2): you need a minimum of 3-4 disks, right (like RAID 5/6)?
  • In the triangle (you sort of said this in your video also) one more factor should be added: ‘cost’ 🙂

Basically, data is an expensive ‘hobby’, haha…

Cost is affected by all three factors. I would say draw a circle around the triangle: the closer you get to any of the points on the triangle, the higher the cost. 🙂


If you look at part 2 of the iXsystems blog post above, they mention adding vdevs to a pool and reliability:

So you can indeed increase your pool over time by adding vdevs: a minimum of 2 disks for mirrors (RAID1), 3 disks for RAIDZ1 (RAID5), or 4 disks for RAIDZ2 (RAID6).

I thought this article from iXsystems would be a good addition to the list of articles above:

I have a question about ZFS. I bought a 24-bay server and 24 x 8TB HDDs, and I am using Plex… So the question is: what is the best setup so I can access all the space without having to split up files?

For example, obviously 24 disks in one RAIDZ2 vdev would be slow, right?

Thanks

I think I remember reading somewhere that you don’t want more than 12 disks per vdev. So you could do 2 x 12-disk RAIDZ2 vdevs or 3 x 8-disk RAIDZ2 vdevs; you would lose several disks to redundancy that way, as the quick math below shows.
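
To put numbers on those options with 24 x 8TB drives: 2 x 12-wide RAIDZ2 gives 2 × (12 − 2) × 8TB = 160TB usable with 4 drives spent on parity, while 3 x 8-wide RAIDZ2 gives 3 × (8 − 2) × 8TB = 144TB usable with 6 drives spent on parity, but one extra vdev’s worth of IOPS.
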
Personally, I don’t like backing Plex with ZFS because of all the space lost to redundancy. It would be a pain if I lost my Plex library, but it’s nothing I can’t get back. I’m using DrivePool on a Windows server as an SMB share; all the drives are pooled into a software JBOD.

Capacity: 80TB
RAIDZ2 with 12 x 8TB disks has a total capacity of 80TB. So I’ll lose only 2 drives, which I am losing with 6 x 8TB anyway, right? Is there any performance hit going from a 6-disk to a 12-disk vdev?

Correct, you will lose about 2 disks’ worth of storage with RAIDZ2 regardless of the total number of disks. This is a handy calculator for ZFS pools. It shows that you would have 51.371429 TiB of practical storage with 12 x 8TB disks after formatting and accounting for the 20% free space recommendation.
https://wintelguy.com/zfs-calc.pl
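
As a rough sanity check on that number, here is a sketch; the ~12% formatting/metadata deduction is my assumption, picked to approximate the calculator’s output rather than reproduce its actual formula:

```python
# Rough sanity check of the calculator result above. The overhead factor
# is an assumed ~12% deduction for swap, ZFS metadata, and allocation
# padding; it approximates wintelguy's output, not its exact formula.

TB  = 10**12        # drive vendors sell decimal terabytes
TiB = 2**40         # ZFS tools report binary tebibytes

disks, disk_tb, parity = 12, 8, 2
raw_data_tib = (disks - parity) * disk_tb * TB / TiB   # ~72.76 TiB
overhead_factor   = 0.88    # assumed formatting/metadata loss
free_space_factor = 0.80    # keep 20% free, per the usual ZFS advice

print(f"~{raw_data_tib * overhead_factor * free_space_factor:.1f} TiB practical")
# -> ~51.2 TiB, in the same ballpark as the calculator's 51.371429 TiB
```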

Take a look at this article for relative performance differences of various drive configurations.
https://calomel.org/zfs_raid_speed_capacity.html

Here is some information on the new pool type, and yes, I will have to make some new videos to cover all the new features.

An updated version (2020) of the white paper:


I am looking forward to when they publish more in-depth testing of the fusion pools. These tests take a lot of time to do, and I have not really taken the time to do them yet.


Thank you Lawrence for your very helpful videos!

In my video post-production company I have two NAS units running TrueNAS.

The first is a media NAS for fast multi-user editing (fiber NIC, ECC RAM, etc.), and the second is for twice-daily backups.

The backup NAS has 10 x 6TB WD Red Pro in RAIDZ2, while the media NAS has two pools: one with 3 x 8TB WD Ultrastar in raid-z0 and one with 2 x 2TB SSDs in raid-z0.

Then, when the edit is delivered, one copy of the project is located on the backup NAS and one more copy on an offline HDD in a drawer!

I was thinking of adding one more Ultrastar disk to the first pool of the media NAS and converting it to RAIDZ1, but I’m afraid of losing performance.

What do you think of this configuration?

I am not clear on what you mean by raid-z0, but I think you mean RAID 0, and I don’t think that is a good idea; you should at least use RAIDZ.

If you are worried about losing performance, maybe you could assess the impact of increasing the frequency of your backups. At least if you have any problems with your “RAID0” pool, you’d lose less work with more frequent backups.

ZFS 101: Leveraging Datasets and Zvols for Better Data Management

How Much Memory Does ZFS Need and Does It Have To Be ECC?

How ZFS snapshots really work, and why they perform well (usually), by Matt Ahrens
