TrueNAS blip reporting drive label

Admitted noob here. I had a drive in my TrueNAS box that needed to be replaced, so I swapped it for a like-for-like WD 4TB drive. I've since resilvered and scrubbed the pool. When running the command 'zpool status' I'm getting the following report:

root@truenas[~]# zpool status data
pool: data
state: ONLINE
scan: scrub repaired 0B in 03:02:02 with 0 errors on Tue Jul 30 09:17:09 2024
config:

    NAME                                      STATE     READ WRITE CKSUM
    data                                      ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        sdd2                                  ONLINE       0     0     0
        sdg2                                  ONLINE       0     0     0
        sdi2                                  ONLINE       0     0     0
        1e876aba-c845-48bc-8b09-811ee04507b8  ONLINE       0     0     0
        sdj2                                  ONLINE       0     0     0
        sdk2                                  ONLINE       0     0     0
        sdc2                                  ONLINE       0     0     0
        sdb2                                  ONLINE       0     0     0
    logs
      sdf1                                    ONLINE       0     0     0

errors: No known data errors

I rebooted the machine after the last scrub, and the zpool output above is from after that reboot.

Should the drive that is reporting a system ID rather than an 'sd??' label concern me? All other UI reports show the drive as sdg.

TIA for any clues.

Besides its unsightliness, it’s no concern at all. ZFS internally uses a globally unique identifier, so what shows here is just for human consumption.

Thanks for the quick response. Appreciated.

Using persistent identifiers, such as those from /dev/disk/by-id/, instead of “sd*” names is preferred, because if the drives ever get enumerated in a different order (especially if one of your data disks becomes sdf), mounting the pool could fail.
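If you want to see what that long identifier and the sd* names currently map to, one way to check (assuming a Linux-based TrueNAS SCALE box; the grep string is simply the identifier from your output above) is to list the udev symlinks:

root@truenas[~]# ls -l /dev/disk/by-partuuid/ | grep 1e876aba
root@truenas[~]# ls -l /dev/disk/by-id/

The by-partuuid entry should point at whichever sdX2 partition that vdev member currently lives on.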

When ZFS starts and finds the disks for the pool, it uses the label header written to each drive when the pool was built, not the drive's physical path, to determine where it belongs in the pool.
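If you're curious, you can peek at that on-disk label directly. zdb -l dumps the label stored on a pool member, including the pool name, the vdev GUID, and the path recorded when the device was added (the device name below is just one of the members from the output above):

root@truenas[~]# zdb -l /dev/sdd2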

This is correct, and I actually go as far as creating the pool using GPT labels that tell me which bay each disk is in. To follow up, though, it's important to note that even if the sd* naming does change, the pool will usually still import, because behind the scenes ZFS uses a GUID regardless of how you designate the path to the individual devices. (You can see these GUIDs with zpool status -g <poolname>; that's also why you're able to import a zpool created on FreeBSD into Linux, even though the two use different device naming conventions.)
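As a rough sketch of the GPT-label approach (the partition number, bay name, and device node are made-up examples; don't rename partitions on a disk that's already in a pool):

root@truenas[~]# sgdisk --change-name=1:bay03 /dev/sdX
root@truenas[~]# ls -l /dev/disk/by-partlabel/

A pool built against /dev/disk/by-partlabel/ paths will then show the bay names in zpool status instead of sd* devices.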

If you ever find yourself in a position where the pool shows up with a missing vdev member because a device path has changed names, it doesn't mean the pool is toast. Running zpool export <poolname> and then zpool import <poolname> should fix the issue.
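In command form that recovery looks roughly like this (pool name from this thread used as the example; on TrueNAS you'd normally export and import through the web UI, but these are the underlying commands). The -d flag tells zpool import which directory of device nodes to scan, which is handy if you want the re-import to pick up persistent names:

root@truenas[~]# zpool export data
root@truenas[~]# zpool import -d /dev/disk/by-id data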

I’m a visual learner… I have a spreadsheet with the serial numbers arranged to match the location of the drive bays. The trick is to store the spreadsheet somewhere that isn’t on the same pool it’s documenting :wink:
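For anyone building a similar bay map, the serial numbers can be pulled straight from the shell; the columns below are standard lsblk output fields, and -d just hides the partitions:

root@truenas[~]# lsblk -d -o NAME,MODEL,SERIAL,SIZE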