My test environment is an i5-4590 with 32 GB of RAM and a SATA3 SSD.
All VM storage is local, on the SSD. What transfer speed should I expect between VMs over the virtual network when copying a single large zip file?
I'm getting around 180 MB/s. The virtual adapter drivers are the appropriate XenServer PV drivers etc., with a 100 Gbps virtual link speed, so I'm assuming my installation is OK (Windows 10 Pro).
Is this limit coming from the system itself, i.e. the SSD's read/write speeds?
Would a PCIe NVMe drive help?
Network speed between VMs on the same host varies with the drivers in the guest OS and the backplane of the server running it. My Dell R720 lab server gets 14.4 Gbits/sec between Debian Linux VMs when testing with iperf3.
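For reference, a VM-to-VM iperf3 run looks something like the sketch below. It runs both ends over loopback so you can verify the tool works; between two VMs you would point `-c` at the other VM's IP address instead of 127.0.0.1.

```shell
#!/bin/sh
# Skip gracefully if iperf3 is not installed on this machine.
command -v iperf3 >/dev/null || { echo "iperf3 not installed"; exit 0; }

# On the "server" VM: start a one-shot iperf3 server
# (-1 = exit after serving a single client).
iperf3 -s -1 >/dev/null 2>&1 &
sleep 1

# On the "client" VM: run a 2-second test. Replace 127.0.0.1 with the
# server VM's IP address when testing between two real VMs.
iperf3 -c 127.0.0.1 -t 2
```

The throughput line at the end (sender/receiver) is the figure to compare against your file-copy speeds; a zip file copy also involves disk I/O and SMB overhead, so it will always come in below the raw iperf3 number.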
Test the VM-to-VM transfer speed with iperf3, as Tom said, as this will give you the maximum network transfer speed.
Then test the read/write speeds on each VM,
then the read/write across VMs (I use dd for read/write testing, but many consider it a fairly blunt tool for the job).
This will give you a set of figures from which you can start to work out what is performing well and what isn't.
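The dd step above can be sketched as follows. The file size and path are placeholders; point the test file at the storage you actually want to measure. `conv=fdatasync` forces the write to hit the disk before dd reports a speed, so the write figure isn't just the page cache (on a real block device you can also use `oflag=direct`/`iflag=direct` to bypass the cache entirely).

```shell
#!/bin/sh
# Placeholder path -- put this on the storage repository you want to measure.
TESTFILE=/tmp/ddtest.bin

# Write test: 256 MB of zeros, flushed to disk before dd reports the rate.
dd if=/dev/zero of=$TESTFILE bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1

# Read test: read the file back. Note this may be served from the page
# cache; iflag=direct avoids that on filesystems that support O_DIRECT.
dd if=$TESTFILE of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f $TESTFILE
```

Comparing the local read/write number on each VM against the cross-VM copy speed and the iperf3 number tells you whether the disk or the virtual network is the bottleneck.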
Are there any other limiting factors for speeds between VMs? I also have an R720 with dual Xeon E5-2670s and 192 GB of DDR3 running at 1333 MHz, with a fresh install of XCP-ng 8.2.1.
However, I can only get 5.5 Gbits/sec from a Debian bullseye VM to XCP-ng, as shown below. The VM is on local NVMe storage.
The speed limit between VMs is some function of how fast the CPU, memory, and motherboard can move data. My newer Ryzen systems beat my older Dell R720 on inter-VM speed tests.