A bit lost on how to proceed…
I got home from a brief holiday and power to my XCP-NG server was off. Once I turned things back on, all of the VHDs hosted on a ZFS storage repository were disconnected from their VMs. As a result, I have 14 VHDs on the ZFS storage repository that are all named “(No Name)”, and I need to re-map them to their respective VMs (8 different ones).
Is there a configuration file I could look up to see if the mapping still exists? Perhaps this association is provided in some log files - which one?
Other than creating a new Ubuntu VM and manually connecting/mounting each VHD to see what’s inside, is there another way I could peek into the VHDs and identify which VM each disk belongs to? I have both Ubuntu and Windows VMs.
If it’s a manual process, is there a way to mount each VHD as a read-only device so that I don’t lose anything by connecting it to a different VM?
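In case it helps, here’s roughly what I imagine a read-only attach would look like with the xe CLI (the UUIDs are placeholders and I haven’t actually tried this yet):

```shell
# List the orphaned VDIs on the ZFS SR to get their UUIDs and sizes
xe vdi-list sr-uuid=<zfs-sr-uuid> params=uuid,name-label,virtual-size

# Create a read-only VBD attaching one VDI to an inspection VM
xe vbd-create vm-uuid=<inspect-vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RO type=Disk

# Hot-plug it into the running VM, poke around, then unplug and remove the VBD
xe vbd-plug uuid=<vbd-uuid>
xe vbd-unplug uuid=<vbd-uuid>
xe vbd-destroy uuid=<vbd-uuid>
```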
It happened to me with a ZFS Storage too.
I managed to attach the VHDs to the VMs by guessing each one by its size.
Then I lost power again and lost the VHDs again. I restored the VMs from backup to local EXT storage.
I’m not using ZFS now; this never happened to me when using EXT before.
If the pool metadata gets corrupted and you don’t have a backup, then it is a manual process.
Is this an issue with unclean shutdown? I’ve had my test system up and down several times (cleanly) and no issues, but never dumped the power on the whole thing to see what would happen. Haven’t been able to get back to testing for many months now so not a lot of experience to go on.
Unclean shutdown: yes, definitely: power to the whole building was turned off for some time without much consideration to any computers or clean shutdowns… There was no UPS on the server (something that will change soon enough)
How I proceeded (still unsure if I recovered fully…):
- Take a random VM with enough CPU/RAM
- For each VHD:
- Attach the VHD to the selected VM (must be read/write to be able to boot)
- Boot machine
- If it boots fine, identify which machine this disk is related to
- Once booted, identify which snapshot (if applicable) this disk corresponds to (check install dates, update dates, sizes, data, etc. to figure it out)
- Shut down the VM
- Name the disk with something relevant to identify what/when it is…
- Detach disk from Test VM
- Attach disk to correct VM
- Once done with all VHDs, figure something out for those that were not bootable (I have 1)
- Do a backup of the pool metadata…
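The attach/identify/rename/re-attach loop above boils down to a handful of xe commands. Roughly this (all UUIDs are placeholders, and the name-label is just an example):

```shell
# Attach a mystery VHD to the test VM (read/write so it can boot) and start it
xe vbd-create vm-uuid=<test-vm-uuid> vdi-uuid=<vdi-uuid> device=0 mode=RW type=Disk
xe vm-start uuid=<test-vm-uuid>

# ...boot, identify which VM/snapshot the disk belongs to, then shut down...
xe vm-shutdown uuid=<test-vm-uuid>

# Give the VDI a meaningful name, detach it from the test VM,
# and attach it to the VM it actually belongs to
xe vdi-param-set uuid=<vdi-uuid> name-label="web01-root-disk"
xe vbd-destroy uuid=<vbd-uuid>
xe vbd-create vm-uuid=<correct-vm-uuid> vdi-uuid=<vdi-uuid> device=0 mode=RW type=Disk
```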
Using this method, I was able to match 13 of the 14 disks to the 8 VMs (one VM had 5 snapshots).
I still have to figure out that remaining disk and how to back up the pool metadata. Tom’s videos probably cover a good portion of the backup piece…
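For the metadata backup piece, from what I can tell the pool database (which holds the VM/VDI/VBD mappings that got lost here) can be dumped with xe; something like this, run on the pool master:

```shell
# Dump the pool database to a file (the VM/VDI/VBD mappings live here)
xe pool-dump-database file-name=/root/pool-db-$(date +%F).dump

# It can later be restored with:
#   xe pool-restore-database file-name=/root/pool-db-<date>.dump
```

XCP-ng hosts also ship a `xe-backup-metadata` script, though I haven’t dug into its options yet.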