Stupid question - xcp-ng host shared folder to be accessed between VMs

My apologies for my stupid question:

Here is what I am trying to do: I am looking to install xcp-ng on a system, set up a folder on the host that holds my files, and have my subsequent VMs access that shared folder WITHOUT needing to go through the network interface.

Is there a way to do that?

My understanding is that for Windows VMs, in order to access a folder shared from the host, I would need to go over the network via SMB to get to the files (vs. being able to access them locally).

Local access would be a lot faster for me than going through SMB, since I am planning on setting up an NVMe SSD RAID array.

If I go through SMB via the NIC, then I would only be able to access the data on that shared folder at 1 Gbps, vs. the roughly 32 Gbps (or more) that a PCIe 3.0 x4 NVMe SSD interface can deliver.

If you can please point me in the right direction as to where I should be looking to set something like this up, that would be greatly appreciated.

Thank you.

XCP-ng isn’t really designed in a way where you could serve a file share from the host itself. You could load some NAS server on the same XCP-ng host and share the files that way, but SMB sharing tends to be limited more by CPU cycles than by NVMe speed.

Thank you.

Yeah, so the plan for this, and the reason why I am asking, is that I am trying to consolidate from 5 NAS systems (basically) down to a single system.

Part of the reason is that some of my NAS systems are older and therefore can’t take higher-speed networking (e.g. 10 GbE). One option would have been to just replace those NAS systems, but a decent one that can actually handle 10 GbE runs around $2000 on average. (The cheapest 4-bay barebones NAS I’ve found, which uses an Intel Pentium processor, is $1399 on Amazon, and the price goes up from there if you want a better processor, more RAM, and/or more drive bays.)

And then on top of that, I would also need to get a 10 GbE switch (which I don’t have yet), and those also run around $2000 on Newegg (link).

(I currently have a 48-port Netgear gigabit switch, and about half of the ports are populated, which is why I would need to move to at least a 24-port 10 GbE switch.)

So, combined, just replacing ONE NAS system and adding the 10 GbE switch would be an initial capex of anywhere between $3400 and $4000.

And that’s just to move to 10 Gbps.

So the thought behind doing everything on a single system, directly on the host, was: “what if I took the network out of the equation altogether?”

A single PCIe 3.0 x4 NVMe SSD has roughly 32 Gbps of bandwidth available through the interface. Even if the drive isn’t particularly good at saturating that interface and, conservatively, only hits around 3 GB/s (24 Gbps), 1) that’s still a lot better than 10 Gbps, and 2) the cost of the host system can be as low as ~$1500 (a dual-Xeon server).
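
For reference, here’s the back-of-the-envelope math behind those numbers (a rough sketch; the 3 GB/s figure is just an assumed conservative drive throughput, and the PCIe encoding overhead is approximate):

```python
# Back-of-the-envelope bandwidth comparison (rough numbers, not a benchmark).

PCIE3_GT_PER_LANE = 8.0           # PCIe 3.0 raw rate per lane, GT/s
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding
LANES = 4

pcie_x4_gbps = PCIE3_GT_PER_LANE * LANES * ENCODING_EFFICIENCY  # ~31.5 Gbps usable
drive_gbps = 3.0 * 8              # assumed conservative drive throughput: 3 GB/s -> 24 Gbps

print(f"PCIe 3.0 x4 link:   ~{pcie_x4_gbps:.1f} Gbps")
print(f"Conservative drive: ~{drive_gbps:.0f} Gbps")
print(f"10 GbE:              10 Gbps")
print(f"Gigabit Ethernet:     1 Gbps")
```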

At that point, it really raises the question: do I really NEED the network, if I can just do everything locally on the host system?

I understand that normally, if you are a business, and you’re “renting out” VMs (basically), you’d want to keep the VMs totally and completely separate from each other.

But in this case, I am actually looking for the opposite: the host has all of the storage and deals with managing it, and the VMs just pull data as they need it from the host-pooled storage.

By doing it this way, I can power down the switch (or go back to just using an 8-port gigabit switch to connect my thin client to the server, and let the server deal with everything else), and I can, in theory, cut my total power consumption across all of my systems and the networking infrastructure that supports them (which runs 24/7) from ~1.2 kW down to about 600 W.

It’d be faster, and it’d be cheaper, and yes, there is a bit of capex involved, but the payback period on the power savings alone is expected to be about 30 months (2.5 years, and I know I’ll run the systems longer than that).
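
Roughly, the payback math looks like this (a sketch only; the electricity rate and capex figures below are placeholders to swap for your own, not numbers from my actual bill):

```python
# Rough payback estimate for the consolidation (illustrative numbers only).

power_saved_kw = 1.2 - 0.6        # ~1.2 kW total today, ~0.6 kW after, running 24/7
hours_per_month = 24 * 365 / 12   # ~730 hours
rate_per_kwh = 0.10               # ASSUMED electricity rate ($/kWh) -- substitute your own
capex = 1500.0                    # ASSUMED up-front cost, e.g. the ~$1500 host

monthly_savings = power_saved_kw * hours_per_month * rate_per_kwh
print(f"Monthly savings: ~${monthly_savings:.0f}")
print(f"Payback period:  ~{capex / monthly_savings:.0f} months")
```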

Also, it looks like Proxmox VE can do something like this via virtio-9p, so I am looking to test that out later tonight with a 64 GB RAM drive (tmpfs) to see if I can share it between VMs, but I wasn’t sure if xcp-ng had a similar capability.
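
If that test goes ahead, here’s a rough sketch of the quick-and-dirty throughput check I’d run from inside a guest (the /mnt/hostshare path is just a placeholder for wherever the share ends up mounted; the same script works against an SMB mount or a local disk for comparison):

```python
# Quick-and-dirty sequential write/read throughput check against a mounted share.
# Point TARGET at whatever mount point you're testing (9p, SMB, local disk, ...).
import os
import time

TARGET = "/mnt/hostshare"          # hypothetical mount point for the shared folder
TEST_FILE = os.path.join(TARGET, "throughput_test.bin")
BLOCK = b"\0" * (4 * 1024 * 1024)  # 4 MiB blocks
TOTAL_BYTES = 4 * 1024**3          # write/read 4 GiB

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_BYTES // len(BLOCK)):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())
write_secs = time.time() - start

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(len(BLOCK)):
        pass
read_secs = time.time() - start

gib = TOTAL_BYTES / 1024**3
print(f"Write: {gib / write_secs:.2f} GiB/s")
print(f"Read:  {gib / read_secs:.2f} GiB/s")
os.remove(TEST_FILE)
```

It’s crude (the read pass will mostly come out of the page cache), but it should be enough to show whether the shared-folder path gets anywhere near tmpfs/NVMe speeds or falls back toward network-share territory.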

(It’s too bad you can’t take something like a 12 Gbps SAS device driver and turn it into a loopback driver, creating a “network interface” on that loopback device, so that data would never actually have to traverse a physical network interface only to loop back on itself.)