TrueNAS SCALE improper expansion after adding disks

TrueNAS SCALE 25.04.2.4

We initially started the system with 4 × 20 TB disks in a raidz2 configuration.

Only a single pool was created.

As the pool was about to get full, we added 5 additional disks to the pool to increase its size.

They were added using the vdev extend (RAIDZ expansion) method.

Once one disk was added and the expansion had completed, the next disk was added.
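
For reference, each GUI extend corresponds to a single-disk RAIDZ expansion in OpenZFS; on the command line the equivalent should be roughly the following (with /dev/sdX standing in for the new disk, while the GUI actually passes a partition UUID):

    zpool attach Data raidz2-0 /dev/sdX
    zpool status Data    # the expand/scan line shows progress until the expansion completes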

We are not able to get the full usable space after expansion.

Instead of the expected ~120 TiB, the GUI shows only 79.42 TiB.

When we try to copy data, it does not allow us to copy beyond that limit.

All 9 disks are online.
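
The limit we hit while copying appears to match the dataset-level free space rather than the raw pool free space; for anyone comparing, that view can be checked with the read-only commands below:

    zfs list -o space Data
    zfs get available,used Data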

zpool list gives the following:

NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Data         164T   128T  35.4T        -         -     7%    78%  1.00x    ONLINE  /mnt
boot-pool    232G  3.80G   228G        -         -     9%     1%  1.00x    ONLINE  -

zpool status gives the following output:

pool: Data
state: ONLINE
scan: scrub canceled on Thu Oct 23 10:35:34 2025
expand: expanded raidz2-0 copied 55.3T in 23:40:16, on Wed Oct 8 21:31:21 2025
config:


    NAME                                      STATE     READ WRITE CKSUM
    Data                                      ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        c4d7e7d6-8c89-4031-82d3-d334bf0ef9b1  ONLINE       0     0     0
        71e3fdbf-9af8-42f5-9e37-6d38a72d3ff6  ONLINE       0     0     0
        55fe3a3e-77f3-444f-b001-988ced60efe6  ONLINE       0     0     0
        a0f1cab2-1048-4acc-bc2e-b7c48ef3db45  ONLINE       0     0     0
        e2107252-6f8f-4dbd-8658-13145ee06228  ONLINE       0     0     0
        f3844f44-8592-4258-b841-2b3b1b653a3a  ONLINE       0     0     0
        ea868532-805b-4f4f-86bc-5ce6441af5cf  ONLINE       0     0     0
        fe5b5998-5a81-4fea-aaec-2b555e1e1765  ONLINE       0     0     0
        c649c21a-4e2d-4e91-ab0a-e106c116852b  ONLINE       0     0     0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:10 with 0 errors on Wed Oct 29 03:45:12 2025
config:


    NAME        STATE     READ WRITE CKSUM
    boot-pool   ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdk3    ONLINE       0     0     0
        sdi3    ONLINE       0     0     0

errors: No known data errors

The Storage dashboard gives 79.42 TiB as usable space.

Requesting someone to provide a resolution for this.

What does the output show when you run
zpool list -v Data

NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Data                                        164T   134T  29.5T        -         -     8%    81%  1.00x    ONLINE  /mnt
  raidz2-0                                  164T   134T  29.5T        -         -     8%  82.0%      -    ONLINE
    c4d7e7d6-8c89-4031-82d3-d334bf0ef9b1   18.2T      -      -        -         -      -      -      -    ONLINE
    71e3fdbf-9af8-42f5-9e37-6d38a72d3ff6   18.2T      -      -        -         -      -      -      -    ONLINE
    55fe3a3e-77f3-444f-b001-988ced60efe6   18.2T      -      -        -         -      -      -      -    ONLINE
    a0f1cab2-1048-4acc-bc2e-b7c48ef3db45   18.2T      -      -        -         -      -      -      -    ONLINE
    e2107252-6f8f-4dbd-8658-13145ee06228   18.2T      -      -        -         -      -      -      -    ONLINE
    f3844f44-8592-4258-b841-2b3b1b653a3a   18.2T      -      -        -         -      -      -      -    ONLINE
    ea868532-805b-4f4f-86bc-5ce6441af5cf   18.2T      -      -        -         -      -      -      -    ONLINE
    fe5b5998-5a81-4fea-aaec-2b555e1e1765   18.2T      -      -        -         -      -      -      -    ONLINE
    c649c21a-4e2d-4e91-ab0a-e106c116852b   18.2T      -      -        -         -      -      -      -    ONLINE

It appears to be showing the full amount from the command line; I am not sure why the GUI does not show that. I would try rebooting, and if that does not fix it, I would open a ticket.

Where do I open the ticket?

The behaviour is also strange.

If we delete 1 TB of data, the free space shown in the GUI increases by 1 TB, but on the command line it increases by 2 TB.
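
That 2:1 factor would be consistent with the old blocks still carrying the original 4-wide raidz2 overhead (2 data + 2 parity per stripe): zpool list counts raw space including parity, while the GUI and zfs report usable data space. A rough worked example:

    data written on the original 4-wide raidz2:   2 data + 2 parity  ->  raw/usable ratio = 2.0
    deleting 1 TB of that old data frees:         ~1 TB at the dataset level (GUI, zfs list)
                                                  ~2 TB at the raw pool level (zpool list)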

You might first start with a post in their forums

Already did that

To further investigate this issue, I have prepared a test setup with 11 × 16 TB (14.55 TiB) drives.

Following are the observations:

  1. Pool created directly with 11 drives in RAIDZ2 configuration. Usable space as per GUI: 99.62 TiB.

  2. Pool created with 4 drives in RAIDZ2 configuration. Usable space as per GUI: 28.06 TiB. No data was copied onto the pool.

  3. Adding 1 drive (total 5 drives) gives 35.11 TiB usable space as per GUI.

  4. Adding 1 drive (total 6 drives) gives 42.16 TiB usable space as per GUI.

  5. Adding 1 drive (total 7 drives) gives 49.21 TiB usable space as per GUI.

  6. Adding 1 drive (total 8 drives) gives 56.25 TiB usable space as per GUI.

  7. Adding 1 drive (total 9 drives) gives 63.31 TiB usable space as per GUI.

2nd trial:

  1. Pool created with 6 drives in RAIDZ2 configuration. Usable space as per GUI: 58.02 TiB. No data was copied onto the pool.

  2. Adding 1 drive (total 7 drives) gives 67.7 TiB usable space as per GUI.

  3. Adding 1 drive (total 8 drives) gives 77.39 TiB usable space as per GUI.

  4. Adding 1 drive (total 9 drives) gives 87.09 TiB usable space as per GUI.

It seems the system is remembering the initial parity ratio and applying the same ratio to the added drives.
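
The increments do seem to line up with that idea: if the original vdev's data-to-total ratio is applied to every drive, old and new, the expected gain per added drive is roughly the drive size multiplied by that ratio (the GUI also appears to subtract a small reserve, so the observed figures come out slightly lower):

    1st trial (started 4-wide RAIDZ2, data:total = 2/4):
        14.55 TiB x 2/4     = 7.28 TiB per added drive   (observed: ~7.05 TiB per drive)

    2nd trial (started 6-wide RAIDZ2, data:total = 4/6):
        14.55 TiB x 4/6     = 9.70 TiB per added drive   (observed: 9.68 to 9.70 TiB per drive)

    Original pool (started 4-wide RAIDZ2, now 9 x 18.2 TiB drives):
        9 x 18.2 TiB x 2/4  = 81.9 TiB                   (GUI shows 79.42 TiB)
        9 x 18.2 TiB x 7/9  = 127.4 TiB                  (roughly the ~120 TiB we expected)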

I just extended my pool as well and have the exact same issue. I am also using 16TB (14.55TiB) drives in raidz2.

Originally I had a 5-drive setup, which I believe gave me around 42 TB usable. I added one more drive and that gave me 51.54 TiB, so around 9 TiB more. zpool list -v shows the full amount, so I think it is a GUI thing.

I also had to do a full zfs rewrite and delete old snapshots to rebalance and reclaim old blocks (very time-consuming and painful; I reached 93.2% used space during the rewrite).
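
For anyone else in the same situation, the rebalance I ran was roughly the following; only a sketch, since zfs rewrite is only present in newer OpenZFS builds and the dataset path here is just an example:

    zfs rewrite -r /mnt/Data/your-dataset    # rewrite existing files in place so their blocks use the new stripe width
    zfs list -t snapshot -r Data             # then review and destroy old snapshots, which still pin the old blocks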