I have an Ubuntu server running 20.04.x, primarily used for Plex and some other background workloads. It has a bunch of internal NAS drives in a ZFS pool, and everything had been working just fine until a drive had an issue, which I sorted. But interestingly, when I issue the 'zpool status' command, the disk labels in the output are not what I expected or have seen previously. Historically the drives have had labels like sda, sdb, sdc etc., but now it looks like below.
scan: resilvered 4.13T in 0 days 13:12:43 with 0 errors on Sun Mar 20 07:35:17 2022
NAME                              STATE     READ WRITE CKSUM
Data_Pool_One                     ONLINE       0     0     0
  raidz1-0                        ONLINE       0     0     0
    scsi-35000c500c5fc3258        ONLINE       0     0     0
    wwn-0x5000c500c5a6d285-part2  ONLINE       0     0     0
    wwn-0x5000c500c5fca44b-part2  ONLINE       0     0     0
    wwn-0x5000c500c5fc9fc4-part2  ONLINE       0     0     0
    wwn-0x5000c500c5a72751-part2  ONLINE       0     0     0
logs
  wwn-0x5002538e3036211b-part1    ONLINE       0     0     0
cache
  sdb1                            ONLINE       0     0     0
I have to say it was a chore adding a new drive and trying to find out the correct name in order to replace the faulty one… I see that mainly the WWN is used to identify these now - what happened to the usual names???
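In case it's useful context, here is a rough sketch of how the persistent names can be mapped back to the kernel's sdX nodes - the wwn-* / by-id names are just symlinks under /dev/disk/by-id on any udev-based system. The pool name and the commented-out zpool commands are only examples based on my pool above, so treat this as a sketch rather than exact steps:

```shell
# /dev/disk/by-id holds stable symlinks (wwn-*, scsi-*, ata-*) that
# point at whatever sdX node the kernel assigned on this boot.
for link in /dev/disk/by-id/wwn-*; do
  [ -e "$link" ] || continue            # skip if no wwn-* entries exist
  printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
done

# To get the whole pool onto one consistent naming scheme, it can be
# re-imported by id (commented out - only run this against a real pool):
#   zpool export Data_Pool_One
#   zpool import -d /dev/disk/by-id Data_Pool_One
```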