XOA: Trouble Creating a new SR (NFS ISO)

Evening, I'm having trouble creating an NFS ISO SR.

  1. My Pop!_OS machine is where I download ISOs; I have KVM working there to create dev VMs.

exports:


/media/OnStorage/Public/Iso/ 10.60.70.0/24(rw,no_root_squash,sync,no_subtree_check,no_all_squash)
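
To rule out the export itself, this is roughly how I confirm it is actually published on the Pop!_OS box (assuming the stock nfs-kernel-server package; nothing here is specific to XCP-ng):

# re-read /etc/exports and apply any changes
sudo exportfs -ra

# show what is currently exported and with which options
sudo exportfs -v
showmount -e localhost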

fstab:


/dev/disk/by-uuid/bf2c758f-18bd-4064-932f-00e92d114c8e /media/OnStorage auto nosuid,nodev,nofail,x-gvfs-show 0 0

When I test mounting it on a VM (Pop!_OS), it works.

  2. I logged into the XOA and XCP-ng CLIs and was able to mount it there as well.
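
For reference, the manual test is just a plain NFS mount, something along these lines (the 10.60.70.10 address is a placeholder for the Pop!_OS box, not my real IP):

# from a test VM or from the XCP-ng host console
mkdir -p /mnt/iso-test
mount -t nfs 10.60.70.10:/media/OnStorage/Public/Iso /mnt/iso-test
ls /mnt/iso-test
umount /mnt/iso-test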

But I'm unable to create any type of SR via the web UI; I keep getting these kinds of error messages:

admin	Cannot read property 'Export' of undefined
admin	SR_DEVICE_IN_USE() This is a XenServer/XCP-ng error
admin	Cannot read property 'Export' of undefined
admin	SR_BACKEND_FAILURE_74(, NFS unmount error [opterr=error is 16], ) This is a XenServer/XCP-ng error
admin	Cannot read property 'Export' of undefined							
admin	SR_BACKEND_FAILURE_74(, NFS unmount error [opterr=error is 16], ) This is a XenServer/XCP-ng error
admin	SR_BACKEND_FAILURE_74(, NFS unmount error [opterr=error is 16], ) This is a XenServer/XCP-ng error
admin	SR_BACKEND_FAILURE_74(, NFS unmount error [opterr=error is 16], ) This is a XenServer/XCP-ng error
admin	Cannot read property 'Export' of undefined	
admin	Cannot read property 'Export' of undefined

I just installed it the other day and I'm using the default XOA Deploy. What do I need to check to get this working?
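
For context, the [opterr=error is 16] part is errno 16 (EBUSY), so my guess is that something on the host is still holding a mount from an earlier attempt. This is roughly what I can check from the dom0 console; am I looking in the right place? (the location value in the sr-create line is a placeholder):

# look for leftover SR mounts from earlier attempts
mount | grep -i sr-mount
ls /run/sr-mount /var/run/sr-mount 2>/dev/null

# list existing SRs in case a half-created one is lingering
xe sr-list params=uuid,name-label,type,content-type

# try creating the NFS ISO SR from the CLI instead of the web UI
xe sr-create name-label="NFS ISO" type=iso content-type=iso \
  device-config:location=10.60.70.10:/media/OnStorage/Public/Iso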

I usually use SMB for ISOs, but I have had NFS working via TrueNAS; I've never tried it with any other Linux distro.

Okay, I successfully created an SMB server on a VM (Pop!_OS) for testing and it's working. XOA has picked up the ISO folder. :sweat_smile:
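
For what it's worth, the test share is nothing fancy; the smb.conf stanza is roughly this (share name and options are just my throwaway test values, so take it as a sketch):

[iso]
   path = /media/OnStorage/Public/Iso
   browseable = yes
   read only = yes
   guest ok = yes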

P.S. Is there a way to list the subfolders recursively, or do I just have to create a share per subfolder?

For ISO SRs you don't use subfolders.

I was having too many problems getting this running, so I decided to drop this project and go with RHEL 9 using Cockpit, and I was able to accomplish my goal very efficiently, mounting my NFS ISO folder among other things.
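
By mounting the NFS ISO folder I just mean an fstab entry on the RHEL 9 box along these lines (the server address and mount point are placeholders; nfs-utils provides the mount helper):

10.60.70.10:/media/OnStorage/Public/Iso  /mnt/iso  nfs  defaults,_netdev  0 0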

XOA was too confusing to navigate to find the information I wanted about what I was doing and what I had done. I've been using Cockpit for a while, and its UI is very clear about what you're doing and how to find information.

On another note, I believe I'm having problems with XOA because of the server build I'm using: I converted a desktop into a development server. This project is to help me understand server infrastructure before investing in a real server build.

Dev server:

Hardware details:

  • MB - Z390-A PRO
  • CPU - 8x Intel(R) Core™ i7-9700 CPU @ 3.00GHz
  • 4x Port Intel NIC 82580 Gigabit Network Connection (Ethernet Server Adapter I340-T4)
  • 4x DDR4 32 GB, Dual rank, 2667 MT/s

Drives:

  • SanDisk Ultra 3D NVMe (21283F801549) 932 GiB /dev/nvme0n1 (RHEL9)
  • ST2000DM008-2FR102 (ZFL2KJV6) 1.82 TiB /dev/sda (Free)
  • ST2000DM008-2FR102 (ZFL2KL41) 1.82 TiB /dev/sdb (Free)
  • WDC WDBNCE0010PNC (214263804857) 932 GiB /dev/sdc (Free)
  • WDC WDBNCE0010PNC (214263805204) 932 GiB /dev/sdd (Free)

NFS mounts

My current goal: I need to create a minimum of four (4) VMs (3 web servers, 3 forum servers); they will be serving users (local / internet), limited for now. Therefore, with the four extra HDDs, I would like to use the two ST2000DM008s for the web servers and the WDCs for the forums.

Using Cockpit or XOA (whose feature set is far better; I will learn how to use it later on the server build), is it best to create a RAID, an LVM2 volume group, or iSCSI targets to suit this build configuration? Any suggestions?
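
To make the question concrete, the LVM route for the two STs would look roughly like this (vg_web / lv_web and the size are placeholders; I haven't run this yet, it's just the shape of it):

# turn the two ST2000DM008 drives into LVM physical volumes
pvcreate /dev/sda /dev/sdb

# one volume group spanning both disks
vgcreate vg_web /dev/sda /dev/sdb

# a mirrored (raid1) logical volume for VM storage
lvcreate --type raid1 -m 1 -L 500G -n lv_web vg_web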

Well, after more reading of the RHEL docs on virtualization and other docs, I’ve decided to use this approach for VMs.

  • VMs will have their own drives
  • all data files will use the NFS drive(s)

because I've noticed a common thread of performance issues when all VMs use the same pool/drive. This is what I've been doing for a while now, so I will stay with this type of setup. I will use the STs for data files, the WDCs for two other web servers, and the RHEL 9 drive for the main web server, among other things.
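
As a concrete example of "VMs will have their own drives", handing a whole disk to a VM under libvirt/KVM on RHEL 9 looks roughly like this (the VM name, sizes, disk and ISO path are placeholders):

virt-install \
  --name web1 \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/dev/sdc \
  --cdrom /mnt/iso/rhel-9.iso \
  --os-variant rhel9.0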

This setup is just for development and testing real use cases until I put together the production server(s).

Any feedback on this?

Mirrored drives are not any faster than a single drive. We generally use 3 or more drives in our RAID setups and don't assign single VMs to a drive, but you can do that.
