We have decided to create a new lab setup using XCP-ng. I have watched the many videos detailing the installation and setup process. It is all very exciting!
I have a question about migrating our existing lab VMs, which currently run on ESXi 5 with a Nexenta server providing the VMDKs and associated VM files via NFS datastores. What is the best way to migrate our existing VMs?
(We also have a number of VMDKs within the NFS datastore that were previously mounted inside running virtual machines. They are not OS disks, just storage; we only need to access their contents so we can consolidate those files into new locations within the new VMs.)
We can access the VMs either via ESXi or directly from the Nexenta NFS datastores.
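For those data-only disks, one route I'm considering is attaching the VMDKs straight from the NFS mount with qemu-nbd and copying the files out, no hypervisor involved. A rough sketch, with all paths, mount points, and device names as placeholders:

```shell
# Load the network block device module, then attach a VMDK read-only.
# All paths below are hypothetical examples, not our real layout.
modprobe nbd max_part=8
qemu-nbd --read-only --connect=/dev/nbd0 /mnt/esxi-store/data/storage1.vmdk

# Mount the first partition and copy the contents to the new location.
mount -o ro /dev/nbd0p1 /mnt/vmdk-contents
rsync -a /mnt/vmdk-contents/ /mnt/new-store/consolidated/

# Clean up.
umount /mnt/vmdk-contents
qemu-nbd --disconnect /dev/nbd0
```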
I have read about Xen conversion tools, but they no longer appear to be available.
The VMs do not have to be running, and we are able to accommodate downtime.
We have not been running the fully licensed VMware suite because we chose not to keep up with their increasingly costly licensing.
Our new network/server setup will be:
FreeNAS running on server hardware with ZFS, exposing NFS storage
2 physical hypervisor servers running XCP-ng, each with local storage for VMs
Can anyone suggest the best path to accomplish the ESXi migration?
It should also be possible to use Microsoft's VM converter to convert the VMDK to VHD(X). I did this several years ago when migrating from VMware to Hyper-V, and it worked very well. Most recently I converted two of those VMs from VHDX to VHD to migrate them to XCP-ng.
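If you'd rather skip the Windows tooling, qemu-img should be able to do the same VHDX-to-VHD conversion on Linux. A sketch, assuming qemu-utils is installed; the file names are placeholders (note that XCP-ng expects dynamic VHDs):

```shell
# Convert a Hyper-V VHDX to a dynamic VHD that XCP-ng can import.
# "source.vhdx" and "converted.vhd" are placeholder file names.
qemu-img convert -p -f vhdx -O vpc -o subformat=dynamic source.vhdx converted.vhd

# Sanity-check the result before importing it.
qemu-img info converted.vhd
```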
I must be doing something wrong; I couldn't get a VHD or VHDX created with Disk2vhd to import into XCP-ng. I haven't tried in almost a year, because the one I really need is an XP image, and XP is no longer supported on current XCP-ng. For now I'm running it in VirtualBox on a Windows 10 client (it exists to drive some old software).
I should probably give Disk2vhd another try and see if I can figure out what I did wrong in XCP-ng.
Thanks for all the input so far, what a great and responsive forum!
The Clonezilla source/target boot-VM option is worth trying. I was hoping to find a solution that accessed the NFS stores directly, and thereby avoid depending on two hypervisors as well as the VMs they contain. Perhaps conversion depends not only on accessing the virtual disks but also on the underlying VM configuration files. Ideally I would find software that can work directly with the NFS stores, mounting the virtual disks the same way the hypervisor does, so that I could do an NFS-to-NFS transfer. (Being able to do this as an ongoing process for all my managed VMs would mean more efficiency and less effort, since the NFS stores already expose the virtual disks.)
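Along those lines, qemu-img can read a VMDK straight off an NFS mount, so a direct NFS-to-NFS conversion might look roughly like this. A sketch only; host names, export paths, and file names are all placeholders:

```shell
# Mount the old Nexenta datastore read-only and the new TrueNAS share.
# Host names and export paths below are hypothetical.
mount -o ro nexenta:/volumes/datastore1 /mnt/esxi-store
mount truenas:/mnt/tank/vms /mnt/new-store

# Convert the ESXi VMDK (point at the descriptor file, not the -flat file)
# to a dynamic VHD directly across the two mounts; no hypervisor involved.
qemu-img convert -p -f vmdk -O vpc -o subformat=dynamic \
    /mnt/esxi-store/myvm/myvm.vmdk /mnt/new-store/myvm.vhd
```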
In the meantime I will complete the Clonezilla approach and report back with results.
Reporting back with results - there were several steps involved:
Built the XCP-ng server
Built the TrueNAS server
These went fairly well - Tom's videos helped a lot! TrueNAS was fairly straightforward. XCP-ng took an extra step: after building it I wanted to administer it with the full-featured XO compiled from source, but since I wasn't yet familiar with the setup, I needed a VM to host XO. So I downloaded and installed the Windows-based XCP-ng Center, then deployed XO compiled from source using the automated script from another of Tom's videos. Everything worked well, and I have XCP-ng running with XO on top to manage things. The only issue I ran into was that the XO process installed that VM onto the default local LVM storage. I can live with this for now, although I am tempted to move it later.
I then added 2 drives for local VM storage and mirrored them using the approach outlined in my other thread:
XCP-ng is running well, with XO managing it.
I then set up the clone process with Clonezilla. It should have been simple, but I ran into a problem with templates: the source ESXi VM runs Windows 7 (which is fine, because it is isolated and can be upgraded to 10 later), but there is no Windows 7 template built into XO. I chose Windows 8.1, and Clonezilla booted, but after I selected the default Clonezilla boot entry the console screen displayed all garbled. The problem seemed to be the template defaulting to UEFI boot, or perhaps some other Windows template setting interfering with Clonezilla. The solution for me was to create the VM in XO using an Ubuntu 20 template and then boot Clonezilla. I was able to clone the ESXi system over the network with no problems.
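In hindsight, an alternative to borrowing a Linux template might have been forcing the new VM to boot in BIOS mode from the host CLI. A sketch against an XCP-ng 8.x host; the name label and UUID are placeholders:

```shell
# Find the VM's UUID by name (the name label here is hypothetical).
xe vm-list name-label="win7-clone-target" params=uuid

# Force legacy BIOS boot instead of UEFI so Clonezilla's console renders.
xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=bios
```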
I DID NOT boot the cloned VM yet; I was concerned that the Ubuntu template might carry other settings I wasn't familiar with that would break things or cause issues with the VDI. Instead I detached the disk using XO, created a new VM from a Windows 10 template, removed its default disk, and then attached the cloned VDI to the new Win 10 VM.
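The same reattach can also be done from the host CLI instead of XO, if anyone prefers that route. A sketch; both UUIDs are placeholders to look up first:

```shell
# Attach an existing VDI to a halted VM as its boot disk.
# Look up the real UUIDs with `xe vm-list` and `xe vdi-list` first.
xe vbd-create vm-uuid=<new-vm-uuid> vdi-uuid=<cloned-vdi-uuid> \
    device=0 bootable=true mode=RW type=Disk
```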
The machine booted right away! I was expecting some little quirk, but it ran and worked fine. The only issue was some unrecognized hardware references left over from the ESXi machine (2 unknown devices and one SCSI device driver issue; I had ESXi configured to present the disks as SCSI rather than IDE, so this was not unexpected). I am not sure how XCP-ng managed to cope with the changed SCSI drive, but it did, and Windows is running fine on the new hypervisor!
My initial impression, coming from a vSphere/ESXi world, is that XO does not appear to offer the same level of granularity in provisioning virtual hardware as ESXi, but it nonetheless seems to handle everything well!
Even easier is to go to one of the local servers, open a terminal (or SSH in), and follow the instructions in this section. It will download and install the open source XO appliance directly onto XCP-ng:
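For reference, the quick-deploy one-liner in that section looks like this; verify the URL against the current XCP-ng / Xen Orchestra docs before running it:

```shell
# Run on an XCP-ng host: fetches and runs the official XOA quick-deploy
# script, which creates the XO appliance VM for you.
# Double-check this URL against the current official docs first.
bash -c "$(curl -sS https://xoa.io/deploy)"
```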