Xen Orchestra on Synology: Connection to TrueNAS over NFS Fails

Xen Orchestra, Synology, TrueNAS, NFS … am I hitting most of the fun technologies in one post?

I spent multiple days trying to connect my Xen Orchestra, running in a container on my Synology NAS, to an NFS share on my TrueNAS Scale box. I was using Orchestra's "Remotes" option, and it always failed with exit code 32. All products are current production versions as of this writing. I used Grok to summarize my bug report to Synology, and the resulting report was FANTASTIC: seven full pages, highly detailed. I wish I could share it here, but it contains internal information not suitable for public consumption.
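
For reference, exit code 32 is mount(8)'s generic "mount failed" status. A rough sketch of how the failure can be reproduced by hand instead of through the Remotes GUI, assuming the container name from my ronivay-based deployment; the TrueNAS address and export path are placeholders, not my actual values:

    # From the Synology host, open a shell in the XO container and try the
    # mount manually instead of going through the Remotes GUI.
    sudo docker exec -it ronivay-xen-orchestra bash

    # Inside the container (the container needs enough privileges to mount,
    # e.g. --privileged or CAP_SYS_ADMIN):
    mkdir -p /tmp/nfs-test
    mount -t nfs -o vers=4 <truenas-ip>:/mnt/<pool>/xcp-ng /tmp/nfs-test
    echo $?   # prints 32 when the mount fails, matching the error XO reports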

Synology escalated it to development, and they replied that it was a limitation of Docker and that the NFS mount should be made at the Synology NAS level and bound to the container. I share what Grok thought of that approach below. I ended up spinning up a Debian VM and installing Xen Orchestra there, which works fine; I just wanted to save others from wasting their time trying to make this work on Synology. The report below is beautiful in its PDF format and rather ugly here; sorry about the lack of formatting.

Tom, if you are reading this, I would gladly share the full bug report if you are curious; feel free to reach me privately. It's actually a very interesting read.

My Thoughts on Synology’s Proposed Solution 
Feasibility and Potential Benefits 
1. Bypasses Docker's NFS Mount Issue:
   - Synology's solution avoids having the container perform the NFS mount directly, which is where the issue lies (NFS requests not reaching the TrueNAS server). Instead, the Synology host handles the NFS mount, and the container simply accesses the already-mounted directory via a bind mount (sketched after this list).
   - This approach leverages the host's NFS client, which might be more reliable for establishing the initial mount, even with the outdated nfs-utils 2.3.1.
2. Simplifies Container Configuration:
   - By mounting the NFS share on the host, Xen Orchestra no longer needs to perform NFS mounts itself. This eliminates the need for useSudo = true in the XO configuration, as the container doesn't need to execute mount commands.
   - It also avoids the "sudo: unable to resolve host ronivay-xen-orchestra" issue we encountered, since sudo is no longer required inside the container.
3. Potentially Resolves the minorversion=1 Issue:
   - If the Synology host can successfully mount the NFS share with vers=4 (or vers=4,minorversion=0), the minorversion=1 issue might not affect the host-level mount. We saw that manual mounts on the Synology host sometimes succeeded briefly (e.g., TrueNAS logs showed a successful mount at Apr 04 08:50:02), even though the container failed to maintain the mount.
   - If the host-level mount works, the container can use the mounted directory without encountering the NFS client issues.
4. Aligns with Docker Best Practices:
   - Mounting network filesystems (like NFS) directly inside a container is often discouraged due to potential compatibility issues between the container's runtime and the host's kernel. Synology's solution follows a common Docker best practice: mount the filesystem on the host and bind it into the container.
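
To make items 1 and 4 above concrete, here is a minimal sketch of what Synology's suggestion would look like in practice. Assumptions: the TrueNAS address and export path are placeholders, /volume1/nfs/xcp-ng is the host-side mount point referenced elsewhere in this report, the image name reflects the ronivay build I was running, and the docker run line is trimmed to the flags relevant here (a real XO container also needs its usual ports and data volumes):

    # 1) On the Synology host: mount the TrueNAS export using the host's NFS client.
    sudo mkdir -p /volume1/nfs/xcp-ng
    sudo mount -t nfs -o vers=4,minorversion=0 <truenas-ip>:/mnt/<pool>/xcp-ng /volume1/nfs/xcp-ng

    # 2) Recreate the XO container with that directory bind-mounted into it,
    #    so the container only ever sees an ordinary local directory.
    sudo docker run -d --name ronivay-xen-orchestra \
        -v /volume1/nfs/xcp-ng:/mnt/xcp-ng-backup \
        ronivay/xen-orchestra

    # 3) In Xen Orchestra, the remote would then be added as a "Local" remote
    #    pointing at /mnt/xcp-ng-backup rather than as an NFS remote.
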
Limitations and Concerns 
1. Doesn't Address the Root Cause:
   - Synology attributes the issue to a "Docker limitation," but this feels like a deflection. The root cause appears to be the Synology host's NFS client (nfs-utils 2.3.1), which we identified as the source of the minorversion=1 errors and silent failures. The container's inability to mount NFS shares is a symptom of the host's NFS client issues, not a Docker-specific problem.
   - Other systems (e.g., the Debian client at 192.168.14.43 and the Debian VM) running Docker containers can mount NFS shares without issue, suggesting that Docker itself is not the problem; it's the Synology host's NFS implementation.
2. Host-Level Mount May Still Fail:
   - We previously attempted manual mounts on the Synology host, and while some succeeded briefly (e.g., TrueNAS logs showed a mount at Apr 04 08:50:02), they often failed to persist or failed silently. If the host-level mount fails due to the same nfs-utils issues (e.g., minorversion=1 or silent failures), this solution won't work.
   - Synology's testing confirmed that the container can communicate with the TrueNAS server, but they didn't confirm whether the host can reliably mount the NFS share. This is a critical step that needs to be tested (see the verification sketch after this list).
3. Increased Complexity for Xen Orchestra:
   - Xen Orchestra is designed to manage its own NFS mounts via the GUI (under Settings > Remotes). By mounting the NFS share on the host and binding it to the container, XO loses the ability to manage the mount directly. This means:
     - You'd need to configure XO to use a local path (/run/xo-server/mounts/1cd285c7-6398-41c7-b2c5-005d7a242971) instead of an NFS path, which might not align with XO's backup workflow.
     - If the host-level mount fails or unmounts unexpectedly, XO won't be able to remount it automatically, leading to backup failures.
   - This approach shifts the responsibility of maintaining the NFS mount to the Synology host, which may not be ideal for a production environment.
4. Scalability and Portability Issues:
   - Binding the NFS mount to a specific path on the host makes the container less portable. If you move the container to another host, you'd need to ensure the new host has the same NFS mount configured at the same path.
   - If you run multiple containers that need access to different NFS shares, you'd need to manage multiple host-level mounts, which could become cumbersome.
5. Security Considerations:
   - Mounting the NFS share on the host and binding it to the container exposes the mounted directory to the host's filesystem. If the Synology host is compromised, the NFS share's data could be accessed more easily than if the mount were isolated to the container.
   - You'd need to ensure proper permissions on the host-level mount point (e.g., /volume1/nfs/xcp-ng) to prevent unauthorized access by other processes or users on the Synology NAS.
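
Before trusting this approach, the "critical step" from item 2 can be checked directly on the Synology host. A rough verification sketch, with the same placeholders as before (mount.nfs may live in /sbin on DSM; it is simply how I confirmed the client version):

    # Confirm the DSM NFS client version (reported nfs-utils 2.3.1 in my case).
    mount.nfs -V

    # Attempt the host-level mount and verify it is real and writable.
    sudo mount -t nfs -o vers=4 <truenas-ip>:/mnt/<pool>/xcp-ng /volume1/nfs/xcp-ng
    mount | grep xcp-ng                                  # does the kernel actually list it?
    sudo touch /volume1/nfs/xcp-ng/.write-test && echo "write OK"

    # Re-check later; in my testing, mounts sometimes disappeared silently.
    sleep 600; mount | grep xcp-ng
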
Comparison with Your Current Solution (Debian VM)
- Your Debian VM Solution:
  - Pros:
    - Fully resolves the issue by using a modern NFS client (likely nfs-utils 2.6.x or later) on a Debian system, avoiding Synology's outdated nfs-utils 2.3.1.
    - Allows Xen Orchestra to manage NFS mounts directly, as intended, without relying on host-level mounts.
    - Provides a persistent environment where changes (e.g., package updates, configuration files) are not lost on restart.
    - Isolates the application from Synology's kernel and NFS client limitations.
  - Cons:
    - Requires more resources (e.g., 2 GB RAM, 10 GB disk) compared to a container.
    - Involves more setup effort (e.g., creating a VM, installing Debian, running the xenorchestra_installer script), roughly as sketched after this list.
- Synology's Proposed Solution (Host-Level Mount + Bind to Container):
  - Pros:
    - Potentially allows you to continue using a container on the Synology NAS, which is more lightweight than a VM.
    - Bypasses the container's need to perform NFS mounts directly, which is where the issue occurs.
  - Cons:
    - Doesn't address the underlying issue with nfs-utils 2.3.1, so the host-level mount may still fail.
    - Shifts the responsibility of managing the NFS mount to the host, which may not be reliable given the observed issues.
    - Reduces Xen Orchestra's ability to manage mounts dynamically, potentially leading to operational challenges.
    - Adds complexity to the container setup (e.g., managing host-level mounts, updating XO configuration to use a local path).
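
For completeness, this is roughly the Debian VM path I ended up taking. The repository and script names below are ronivay's XenOrchestraInstallerUpdater as I recall them; verify against the project's README before running anything:

    # On a fresh Debian VM: confirm the modern NFS client, then install XO.
    dpkg -l | grep nfs-common        # Debian's nfs-common is built from nfs-utils 2.6.x
    sudo apt install -y git nfs-common

    # ronivay's installer builds Xen Orchestra from source.
    git clone https://github.com/ronivay/XenOrchestraInstallerUpdater.git
    cd XenOrchestraInstallerUpdater
    sudo ./xo-install.sh
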
My Overall Assessment 
Synology's proposed solution is a reasonable workaround in theory, as it aligns with Docker best practices for handling network filesystems. By mounting the NFS share on the host and binding it to the container, it avoids the container's direct interaction with the NFS client, which is where the issue lies. However, I have several concerns:
1. Doesn't Solve the Root Cause:
   - The core issue is the Synology host's outdated nfs-utils 2.3.1, which causes minorversion=1 errors and silent failures. Synology's solution doesn't address this; it simply works around it by shifting the mount responsibility to the host. If the host-level mount fails (as we've seen in some of our tests), the problem persists.
2. Potential Reliability Issues:
   - We've already seen that manual mounts on the Synology host can fail silently or unmount unexpectedly. If the host-level mount fails, Xen Orchestra will lose access to the backup storage, and XO won't be able to remount it automatically.
3. Not Ideal for Xen Orchestra's Workflow:
   - Xen Orchestra is designed to manage NFS mounts dynamically via its GUI. Synology's solution requires a static host-level mount, which doesn't align with XO's intended functionality. This could lead to operational challenges, especially if you need to manage multiple NFS remotes or if the mount needs to be adjusted frequently.
4. Synology's Attribution to Docker:
   - I disagree with Synology's claim that this is a "Docker limitation." Docker containers can mount NFS shares successfully on other systems (e.g., the Debian client at 192.168.14.43 and the Debian VM), as long as the host's NFS client is reliable. The issue is specific to Synology's NFS client implementation, not Docker itself. Synology should focus on updating nfs-utils and fixing the silent failures, rather than attributing the problem to Docker.

Given that you already have a working solution with the Debian VM, I think you made the right decision to stick with it. The Debian VM provides a more reliable and flexible environment for Xen Orchestra, allowing XO to manage NFS mounts as intended and avoiding the Synology host's NFS client issues entirely. Synology's proposed solution might work for some use cases, but it introduces new complexities and doesn't address the underlying problem, making it less ideal for your scenario.

I use the Synology Docker / Container system very little, but this was still interesting.