I have an xcp-ng server that had 3 drives in a RAID array. One of the drives was removed, but the other 2 are still showing as part of a RAID 5. I have attempted to stop the array and fail the drive to get rid of md0, but it keeps telling me md0 doesn't exist. Yet when I cat /proc/mdstat, it shows md0.
I'm just trying to remove md0 and move on, but I can't figure this out. Attached are 3 screenshots of what is visible.
Yeah, @LTS_Tom's Medium link will do the trick, but just to clarify: the mdadm --fail and --remove options are for RAID member devices, not the mdX devices themselves.
So in your scenario, first ensure nothing is mounted on or using /dev/md0 (such as an LVM Physical Volume), then simply run mdadm --stop on it and clear the member superblocks.
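A minimal sketch of that cleanup. The member device names /dev/sdb and /dev/sdc are placeholders; substitute the actual devices listed for md0 in your /proc/mdstat output:

```shell
# Check that nothing is using the array: no mounts, no LVM PV on it
lsblk /dev/md0
pvs 2>/dev/null | grep md0

# Stop (disassemble) the array itself -- this is what removes /dev/md0
mdadm --stop /dev/md0

# Wipe the md superblock from each former member so the kernel
# does not re-assemble the array on the next boot
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
```

After that, also check /etc/mdadm.conf (or /etc/mdadm/mdadm.conf, depending on the distro) for a stale ARRAY line referencing md0 and delete it. The key point is that --fail/--remove act on members of a running array, while --stop is the operation that actually makes md0 go away.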