How is iSCSI access coordinated from multiple XCP-ng hosts?

This thread, "Best practice for iscsi storage to multiple xcp-ng hosts (not pooled)?", made me curious.

The following will expose my ignorance concerning xcp-ng and FreeNAS.

My understanding is that iSCSI is logically equivalent to a shared SCSI bus with multiple initiators, and that it provides block-level access to a device.

I don’t understand how it would be possible to have multiple hosts with r/w access to the same device without a cluster-aware filesystem. The defunct Digital Equipment Corporation developed a distributed lock manager for VAX/VMS Clusters, which allowed coordinated access to shared devices. Here’s a link to a high-level overview of VMS Cluster communications from one of the VMS developers, Andy Goldstein; he specifically describes what was done to make sure that cluster membership was always consistent, so that uncoordinated access to the directly attached storage was prevented.
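To make the question concrete, here is the kind of lost-update problem I mean, as a toy Python sketch (the names are made up, and this has nothing to do with how XCP-ng, FreeNAS, or any real DLM is implemented): two "hosts" doing read-modify-write on the same allocation bitmap, with and without a lock standing in for a distributed lock manager.

```python
import threading
import time

disk_bitmap = 0              # pretend: a block-allocation bitmap stored on the shared LUN
dlm_lock = threading.Lock()  # stand-in for a cluster-wide distributed lock manager

def host_allocates_blocks(bits, use_dlm):
    global disk_bitmap
    for bit in bits:
        if use_dlm:
            with dlm_lock:                       # coordinated: exclusive read-modify-write
                disk_bitmap |= (1 << bit)
        else:
            snapshot = disk_bitmap               # uncoordinated read ...
            time.sleep(0.0001)                   # ... the other host writes in the meantime ...
            disk_bitmap = snapshot | (1 << bit)  # ... and this write clobbers its update

for use_dlm in (False, True):
    disk_bitmap = 0
    hosts = [threading.Thread(target=host_allocates_blocks,
                              args=(range(start, 64, 2), use_dlm)) for start in (0, 1)]
    for h in hosts: h.start()
    for h in hosts: h.join()
    label = "with lock manager" if use_dlm else "uncoordinated    "
    print(label, "-> blocks recorded:", bin(disk_bitmap).count("1"), "of 64")
```

Without the lock, each host can overwrite bits the other just set; with it, every update survives. That serialization of access is what I understand a DLM or cluster-aware filesystem to provide.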

Does XCP-ng have a cluster-aware file system? If not, how can multiple systems safely access the same device with read/write access over iSCSI?

Tom has posted many YouTube videos involving iSCSI, and I haven’t watched them all, but I did watch “FreeNAS Virtual Machine Storage Performance Comparison using, SLOG/ ZIL Sync Writes with NFS & iSCSI” and scanned the three articles it references. That video, though, appears to be about whether writes are “considered complete” after the write command has been acknowledged (async) or only after the data has been written to persistent storage (sync).
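For reference (and to check my own understanding), that async vs. sync distinction in plain POSIX terms looks like this; a throwaway local-file sketch, not anything specific to FreeNAS, ZFS, NFS, or iSCSI:

```python
# Sketch of "acknowledged" vs. "persistent" writes using ordinary POSIX calls.
# The file name is arbitrary; this is just the general idea, not FreeNAS/ZFS code.
import os

fd = os.open("scratch.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

# Async-style: write() returns once the OS has buffered the data in cache;
# if power is lost right now, the data may never reach persistent storage.
os.write(fd, b"acknowledged, but possibly only in RAM\n")

# Sync-style: fsync() does not return until the device reports the data
# as durably stored (on ZFS, sync writes like this are what the SLOG/ZIL accelerates).
os.write(fd, b"durable once fsync() returns\n")
os.fsync(fd)

os.close(fd)
```

My question, though, is about coordination between hosts, not about when a single host’s writes become durable.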

I am not looking for a tutorial, that is well beyond the scope of a forum, but I would appreciate pointers to where I could find this info. I have used Google, and from what I found, it seems that you can’t have multiple hosts with read/write access to iSCSI storage without using a cluster-aware file system. Wikipedia has this: Clustered file system. It does not have anything I saw about XCP-ng, but it does mention VMware VMFS and Red Hat GFS2. Windows clusters take the easy way out and limit access to a “shared” volume to one host at a time.

If the XCP-ng hosts are in the same pool, they can share NFS or iSCSI storage, but they cannot both write to the same VM’s disk at the same time. Because in this configuration they share a storage repository, when using XenMotion to transfer a running machine only the RAM of the running VM needs to be moved to the other host; once that completes, one host stops writing to that VM and the other starts.
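Roughly, the idea is that exactly one host has write access to a given VM’s virtual disk at any moment, and the migration just hands that role over. Here is a made-up sketch of the concept (illustration only, not XCP-ng’s actual code):

```python
# Toy model of the single-writer handover during a XenMotion-style live migration.
# Names and structure are invented; the shared SR holds the disk, so only the
# writer role moves between hosts, never the data.
class VirtualDisk:
    def __init__(self, name):
        self.name = name
        self.writer = None                     # host that currently owns write access

    def attach_writer(self, host):
        if self.writer is not None and self.writer != host:
            raise RuntimeError(f"{self.name} is already writable by {self.writer}")
        self.writer = host

    def detach_writer(self, host):
        if self.writer == host:
            self.writer = None

def live_migrate(disk, src, dst):
    # RAM is copied to the destination while the VM keeps running on the source;
    # at the switchover the source releases the disk and the destination takes it.
    disk.detach_writer(src)
    disk.attach_writer(dst)

disk = VirtualDisk("vm1-root")
disk.attach_writer("host-a")
live_migrate(disk, "host-a", "host-b")
print("writer is now:", disk.writer)           # host-b; host-a no longer writes
```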
I cover this in my XCP-NG HA Video, hope that helps.

Ok, thanks, I will watch your video.