TrueNAS SCALE 25.04.2.4
We initially set up the system with four 20 TB disks in a RAIDZ2 configuration.
Only a single pool was created.
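For context, the pool was created with the equivalent of the following command (the device names here are placeholders, not our actual disk IDs):

    # Original 4-wide raidz2 pool (hypothetical device names)
    zpool create Data raidz2 sda sdb sdc sdd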
As the pool was nearly full, we added five more disks to increase its capacity.
They were added using the extend-vdev method (RAIDZ expansion).
The disks were added one at a time: once a disk was attached and its expansion completed, the next one was added.
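As far as we understand, the GUI's extend operation maps to the OpenZFS RAIDZ-expansion command, roughly like this (the partition UUID is a placeholder):

    # Attach one new disk to the existing raidz2 vdev; repeated once per disk,
    # waiting for each expansion to finish before attaching the next
    zpool attach Data raidz2-0 /dev/disk/by-partuuid/<new-disk-uuid>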
We are not getting the full usable space after the expansion.
Instead of the expected ~120 TiB, the GUI shows only 79.42 TiB.
When we try to copy data onto the pool, writes fail once that limit is reached.
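Our ~120 TiB expectation comes from simple arithmetic, assuming each 20 TB disk is roughly 18.2 TiB:

    # Data capacity of a 9-wide raidz2 (7 data disks out of 9)
    echo "scale=1; (9 - 2) * 18.2" | bc   # ~127.4 TiB before ZFS overhead, ~120 TiB usable
    # Capacity implied by the original 4-wide ratio (2 data disks out of 4)
    echo "scale=1; 164 * 2 / 4" | bc      # 82.0 TiB, close to the 79.42 TiB the GUI shows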
All nine disks are online.
zpool list gives the following:
    NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    Data        164T   128T  35.4T        -         -    7%  78%  1.00x  ONLINE  /mnt
    boot-pool   232G  3.80G   228G        -         -    9%   1%  1.00x  ONLINE  -
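So the raw pool size (164T) does account for all nine disks; the shortfall appears in the usable-space figure. For completeness, the dataset-level view can be checked with:

    # Dataset-level space accounting (what the GUI's usable-space figure reflects)
    zfs list -o name,used,avail,refer Data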
zpool status gives the following output:
      pool: Data
     state: ONLINE
      scan: scrub canceled on Thu Oct 23 10:35:34 2025
    expand: expanded raidz2-0 copied 55.3T in 23:40:16, on Wed Oct 8 21:31:21 2025
    config:

            NAME                                      STATE     READ WRITE CKSUM
            Data                                      ONLINE       0     0     0
              raidz2-0                                ONLINE       0     0     0
                c4d7e7d6-8c89-4031-82d3-d334bf0ef9b1  ONLINE       0     0     0
                71e3fdbf-9af8-42f5-9e37-6d38a72d3ff6  ONLINE       0     0     0
                55fe3a3e-77f3-444f-b001-988ced60efe6  ONLINE       0     0     0
                a0f1cab2-1048-4acc-bc2e-b7c48ef3db45  ONLINE       0     0     0
                e2107252-6f8f-4dbd-8658-13145ee06228  ONLINE       0     0     0
                f3844f44-8592-4258-b841-2b3b1b653a3a  ONLINE       0     0     0
                ea868532-805b-4f4f-86bc-5ce6441af5cf  ONLINE       0     0     0
                fe5b5998-5a81-4fea-aaec-2b555e1e1765  ONLINE       0     0     0
                c649c21a-4e2d-4e91-ab0a-e106c116852b  ONLINE       0     0     0

    errors: No known data errors

      pool: boot-pool
     state: ONLINE
      scan: scrub repaired 0B in 00:00:10 with 0 errors on Wed Oct 29 03:45:12 2025
    config:

            NAME          STATE     READ WRITE CKSUM
            boot-pool     ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                sdk3      ONLINE       0     0     0
                sdi3      ONLINE       0     0     0

    errors: No known data errors
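In case it helps with diagnosis, the expansion feature state and per-vdev layout can be confirmed with:

    # Confirm the raidz_expansion feature is active on the pool
    zpool get feature@raidz_expansion Data
    # Show per-vdev sizes and layout
    zpool list -v Data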
The Storage dashboard reports 79.42 TiB of usable space.
Could someone please advise how we can recover the full expected capacity?