TrueNAS SCALE - vdev mirror to RAIDZ2

Hello,

I have just ordered 2 additional 12TB disks to extend the storage capacity of my TrueNAS SCALE system.
Currently my storage pool is a single vdev: a mirror of two 12TB disks (the same model as the ones I just ordered).

root@nas[~]# zpool status -v
pool: POOL-HDD
state: ONLINE
scan: scrub repaired 0B in 13:58:51 with 0 errors on Sun Jan 1 13:58:55 2023
config:

    NAME                                      STATE     READ WRITE CKSUM
    POOL-HDD                                  ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        2db29801-061a-4b7e-be25-21706a443b4c  ONLINE       0     0     0
        31aab9b1-9ad8-472c-aa9d-61465a49fa70  ONLINE       0     0     0

So I was thinking of extending this pool by adding a second vdev: a mirror of the 2 new disks.
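
For what it’s worth, as far as I understand, that extension would be a single command (or the equivalent “add vdev” action in the TrueNAS UI), something like this with placeholder device paths for the two new disks:

    zpool add POOL-HDD mirror /dev/disk/by-id/NEW-DISK-1 /dev/disk/by-id/NEW-DISK-2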

I’ve seen some people create a RAIDZ2 pool by breaking the mirror (detaching one of its disks), then copying the data over, and finally deleting the mirror pool and integrating its disk into the RAIDZ2 pool.

It seems a bit risky to me as a procedure, for a gain that I have trouble quantifying. What do you think?

I’ve found this:

Using 2 fake devices (a rough command sketch follows the list):

  • create RAIDZ2 from the 2 new disks and 2 fake devices
  • offline the 2 fake devices
  • RAIDZ2 is now in degraded state (but still usable)
  • zfs send data from my old mirror-based pool to the new RAIDZ2-based pool
  • detach the first disk from the old mirror and add it to the RAIDZ2 (and resilver)
  • destroy the mirror pool and use its remaining disk to replace the second fake device (resilver the RAIDZ2 again)
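
If I’ve understood the first four bullets correctly, in raw ZFS commands they would look roughly like the sketch below (POOL-RAIDZ2, the /dev/disk/by-id/NEW-DISK-* paths and the @migrate snapshot name are placeholders, not my actual setup, and TrueNAS normally expects pools to be created from its UI):

    # stand-in devices: two sparse 12T files (they take almost no real space)
    truncate -s 12T /root/fake1.img
    truncate -s 12T /root/fake2.img

    # create the RAIDZ2 pool from the 2 new disks plus the 2 fake devices
    # (add -f if zpool complains about mixing whole disks and files)
    zpool create POOL-RAIDZ2 raidz2 \
        /dev/disk/by-id/NEW-DISK-1 /dev/disk/by-id/NEW-DISK-2 \
        /root/fake1.img /root/fake2.img

    # offline the fake devices before any data is written;
    # the pool drops to DEGRADED but stays usable
    zpool offline POOL-RAIDZ2 /root/fake1.img
    zpool offline POOL-RAIDZ2 /root/fake2.img
    rm /root/fake1.img /root/fake2.img

    # copy everything over with a recursive snapshot and send/receive
    # (-u avoids mounting the received datasets on top of the live ones)
    zfs snapshot -r POOL-HDD@migrate
    zfs send -R POOL-HDD@migrate | zfs receive -Fu POOL-RAIDZ2

The last two bullets would then just be zpool detach and zpool replace operations.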

If you have any opinions or experiences with this specific case of extending a mirror, I would be happy to hear from you.

Thanks!

I wouldn’t do this unless I had a backup of all the data, which you should have anyway, because as we all know: RAID is not a backup. :wink:

If you do have a backup, maybe it’s worth a shot…

Advantage of this procedure: If it works, it’s probably faster than restoring the data, and you don’t have to recreate your datasets and shares.

Disadvantage: if it doesn’t work, or if you make a mistake in one step, all the data is gone and you have to restore from backup, which means it would probably have been faster to destroy the pool and create a new RAIDZ2 from scratch in the first place.

If you don’t have a backup, the safest option is to simply add the two new disks as a second mirror vdev. Usable capacity works out about the same as a 4-disk RAIDZ2 (roughly 24 TB either way); the difference is redundancy, since RAIDZ2 tolerates any two disk failures while each mirror only tolerates one.


Thanks for your response.

A small update for today:

So I inserted the 2 new physical disks into my server and attached 2 virtual disks of the same capacity to the TrueNAS SCALE VM.
I then created a RAIDZ2 pool with the 4 disks (the 2 new physical disks and the 2 virtual ones).
Once the pool was created, I removed the 2 virtual disks from Proxmox.
The pool is now degraded, but it is functional.
I am replicating the data (via replication tasks).
Then all that remains is to destroy the old pool and replace the 2 virtual disks removed from the RAIDZ2 pool with the disks from the old mirror pool.
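
For reference, the state of the degraded pool and the datasets received so far can be checked from the shell with something like this (the new pool name is a placeholder, output omitted):

    zpool status -v POOL-RAIDZ2   # should report DEGRADED, with the two removed virtual disks missing
    zfs list -r POOL-RAIDZ2       # datasets received so far by the replication tasks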

The rest tomorrow! (There is still a 7TB dataset to replicate before I can destroy the old pool and reintegrate its disks.)

Regards

And that’s it! Operation completed.
Everything went as planned.
After the replication tasks finished, I removed one disk from the mirror pool and integrated it into the RAIDZ2 pool.
And once the resilver was finished, I destroyed the mirror pool and integrated the last disk.
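
For reference, the raw ZFS equivalent of that last part would be roughly the following (the new pool name and the device identifiers are placeholders, and I’m leaving out the waiting between steps):

    # take one disk out of the old mirror; the mirror keeps running on its remaining disk
    zpool detach POOL-HDD <one-of-the-old-mirror-member-guids>

    # use the freed disk to replace one of the missing RAIDZ2 members, then wait for the resilver
    zpool replace POOL-RAIDZ2 <first-missing-member> /dev/disk/by-id/FREED-DISK-1
    zpool status POOL-RAIDZ2      # watch until the resilver completes

    # then destroy the old, now single-disk pool and bring in the last disk the same way
    zpool destroy POOL-HDD
    zpool replace POOL-RAIDZ2 <second-missing-member> /dev/disk/by-id/FREED-DISK-2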

So I now have a 4x12TB RAIDZ2 pool without having to go through a restore phase, in which I would have lost my snapshots, for example.

Performance-wise, I don’t see any difference from the mirror, since I’m already saturating the gigabit Ethernet link on my network.
