I’m trying to figure out why, when I have a VM built on shared NFS storage, every once in a while the VM will throw errors about not being able to write to disk, or even kernel timeout errors.
My homelab has been through many configurations. In the past, I had TrueNAS Core hosting the share on a 10 Gb network, with a MikroTik 4-port SFP+ switch between my hypervisor host and TrueNAS.
Stability became enough of an issue that I just started storing my VM disks on the hypervisor itself.
I currently have two XCP-NG hosts; at the moment most of my VMs are on one of them. On the other, I have TrueNAS Scale installed as a VM with the HBA passed through. There is a dedicated network connection between the two hosts to try to eliminate any outside variables causing the issues.
So far, just the one VM that has its disk on the TrueNAS Scale NFS share has locked up because of this.
I apologize that I do not have screenshots of the errors.
I was hoping someone could shed some light on why I am seeing such poor stability, and what I should look at to make sure I don’t have any further issues with VMs configured this way.
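In case it helps, here’s a rough sketch of a write-latency probe I could run inside the affected VM to try to catch the stalls as they happen. The file path, interval, and threshold are just placeholders, not anything from my actual setup:

```python
#!/usr/bin/env python3
"""Minimal write-latency probe for a disk backed by NFS storage.

Appends a small block to a test file, fsyncs it, and logs any write
that takes longer than a threshold, so slow or hung writes can be
lined up with the kernel timeout messages in the guest.
"""
import os
import time
from datetime import datetime

TEST_FILE = "/var/tmp/nfs_probe.dat"   # placeholder: any path on the NFS-backed disk
INTERVAL_S = 5                          # seconds between probes
SLOW_THRESHOLD_S = 1.0                  # log anything slower than this
BLOCK = os.urandom(64 * 1024)           # 64 KiB payload per probe


def probe_once(path: str) -> float:
    """Append one block, fsync it, and return the elapsed time in seconds."""
    start = time.monotonic()
    with open(path, "ab") as f:
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())
    return time.monotonic() - start


if __name__ == "__main__":
    while True:
        elapsed = probe_once(TEST_FILE)
        if elapsed > SLOW_THRESHOLD_S:
            print(f"{datetime.now().isoformat()} slow write: {elapsed:.2f}s")
        time.sleep(INTERVAL_S)
```

The idea would be to correlate any logged slow writes with the disk errors and with whatever the dedicated NFS link is doing at that moment.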
Any time you virtualize an appliance that wasn’t meant to be virtualized you’re bound to run into issues. Can you do it? Yeah, sure. Should you though? Nah.
I’m sure lots of homelab users would disagree with me, but it’s hard to argue that adding extra complexity and coloring outside the lines of the software’s intended use won’t cause issues.
That’s my 2 cents.