My preference would be to assess whether it’s worth starting fresh and just migrating the config and data, or going to the trouble of migrating the instances themselves:
Services in the source Linux ESXi VMs would perform better as Incus system containers, but not everything is suitable for that format due to security restrictions (a lot is, though). For those it’s better to migrate configs and data into a system container, because an ESXi-to-Incus migration will only ever be VM to VM, not VM to system container.
You can export OVF, but if I remember correctly the VMDK needs to be converted to RAW (qemu-img convert), which will use the full configured size of the VMDK and therefore consume that much space on the machine doing the conversion. Maybe there are parameters to keep it to used space only; if not, it will put a strain on storage and network if you do the conversion off the Incus host.
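For reference, a rough sketch of that conversion (the filenames here are just placeholders):

qemu-img convert -p -f vmdk -O raw ubuntu-vm.vmdk ubuntu-vm.img
qemu-img info ubuntu-vm.img

Note that the resulting raw file’s apparent size is the full virtual disk size, although on most filesystems qemu-img writes zero blocks sparsely, so the actual space consumed can be less.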
Incus is UEFI based, but there are CSM settings you can use for legacy machine types. You’d have to enable CSM on the migrated instance in Incus, or convert the guest to UEFI before the migration to Incus. Without one of these it will not boot.
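If I remember right, the relevant instance settings look something like this (the instance name is a placeholder):

incus config set migrated-vm security.csm=true
incus config set migrated-vm security.secureboot=false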
You should remove vmware-tools and install the virtio drivers before the migration (so you can use clones as the source for the migration). I think this is just a Windows requirement, but check for your Linux distros.
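On a Debian/Ubuntu guest, the pre-migration cleanup might look roughly like this (package names differ per distro, so treat it as a sketch):

apt-get purge open-vm-tools
apt-get install qemu-guest-agent
update-initramfs -u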
If you have clustered software, the migration is a lot easier if you introduce new cluster members on Incus to sync the configs and data, and then decommission the ones that are on ESXi.
So it depends on your scenario; for us it was good to rebuild on Incus and just migrate the data or start new. I only ended up migrating one Linux VM, because for the rest it wasn’t worth the trouble.
I agree that a fresh install is usually the better way to go, and as I prefer Incus system containers over VMs, that would be my first choice for Linux servers. There are some specific servers I would rather try to convert first. Thanks for the tips.
If you export the virtual disk to a raw image (using qemu-img to convert it from vmdk to raw if necessary), then the tool “incus-migrate” will allow you to import it into incus as a new VM or container.
It’s still up to you to adjust limits like the number of vCPUs, amount of RAM etc. (more important for VMs than containers) and to create any additional resources like extra virtual NICs. You might also need to tweak the network configuration in the guest, as it’s likely the network interface names will have changed.
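For example, after the import you might do something along these lines (the instance name, values and the incusbr0 bridge are just placeholders/defaults):

incus config set migrated-vm limits.cpu=4
incus config set migrated-vm limits.memory=8GiB
incus config device add migrated-vm eth1 nic network=incusbr0 name=eth1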
For some reason I did not see the Incus documentation mentioned when I tried to google this information (probably my search wasn’t exact enough, or somehow my eyes skipped it).
Thank you @candlerb for this information. It was very helpful.
Does this imply that you can now migrate an ESXi Linux VM (RAW) to an Incus system container?
I believe so, as long as it only does things that a container allows. For example, if it tries to mount block devices or load kernel modules, those won’t work. But for regular workloads it should be OK.
I migrated an Ubuntu VM from VMware to Incus.
I used qemu-img for the conversion from VMDK to IMG.
Then I used bin.linux.incus-migrate.x86_64 for the migration.
The migrated VM stops in Incus at “Booting from Hard Disk”.
Does anyone know how to fix it?
UEFI is disabled:
security.csm: “true”
security.secureboot: “false”
I don’t know how this works with Ubuntu, but I remember having similar issues when converting RHEL/CentOS VMs in the past, because the initramfs didn’t have the drivers for the target hypervisor.
I remember having converted a KVM VM to XCP-ng and the Xen PV drivers were missing from the VM’s initramfs. I had to add them manually before moving the VM to XCP-ng.
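The virtio equivalent for an Incus/QEMU target on RHEL/CentOS would be rebuilding the initramfs with the virtio modules, roughly as follows (the module list is an assumption, adjust for your kernel):

dracut --force --add-drivers "virtio_blk virtio_scsi virtio_pci virtio_net"

On Debian/Ubuntu the usual equivalent is update-initramfs -u.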
Maybe the disk’s boot.priority setting would help.
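Something like this, assuming the imported disk device is called root (if the device comes from a profile, you may need incus config device override instead of set):

incus config device set migrated-vm root boot.priority=10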
I wonder if you can get into the BIOS and select which drive to boot from via the boot management options, and see if that works. It will be a bit tricky with the timing to get in, but I remember managing it once to do this. It may enter the BIOS if you bash your forehead repeatedly and very quickly across the keyboard (only joking).
Then we’d know that it’s just the BIOS boot management that we need advice with.
You can attach to the console with incus console <name> or incus console <name> --type=vga (depending on whether you want a serial console or a graphical console). If you press F2 quickly enough you’ll get into the VM’s BIOS, although I don’t think you should need to.
Does it work if you create and run a fresh VM in incus? That is, maybe the problem is with qemu or the BIOS ROM image.
How did you install incus? If you use the Zabbly packages then they are all-in-one, with a working qemu and BIOS included. But if you use the Ubuntu ones then they have interdependencies with other packages.
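A quick sanity check could be (the image alias is just an example):

incus launch images:ubuntu/22.04 testvm --vm
incus console testvm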
For me:
# dpkg-query -l incus
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-================================-============-===========================================
ii incus 1:6.0.1-202406290152-ubuntu22.04 amd64 Incus - Container and virtualization daemon
# dpkg-query -L incus | grep -i bios
/opt/incus/share/qemu/seabios.bin
/opt/incus/share/qemu/vgabios-qxl.bin
/opt/incus/share/qemu/vgabios-virtio.bin
I have Incus installed from Zabbly (6.0.1 LTS).
For a fresh VM created in Incus I am able to access the BIOS via F2.
The problem with BIOS entry is with the VM migrated via the bin.linux.incus-migrate.x86_64 tool (which is not UEFI).
It might be interesting to try “fdisk -l” on incus’s stored copy of the VM disk image file, to see if it’s partitioned as expected.
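For a dir-backed storage pool the VM root disk is typically a plain file on the host, so something along these lines (the exact path depends on your storage pool and driver, so treat it as an assumption):

fdisk -l /var/lib/incus/storage-pools/default/virtual-machines/<vm-name>/root.img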
It is possible to do a rescue boot of a VM, but it’s a bit fiddly. Firstly, you have to download an ISO image and import it. Then you have to attach the ISO image as a device, and set the VM to boot from the ISO.
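From memory, the rough sequence is something like this (pool, volume and instance names are placeholders; check the Incus documentation for the exact syntax):

incus storage volume import default rescue.iso rescue-iso --type=iso
incus config device add migrated-vm rescue-iso disk pool=default source=rescue-iso boot.priority=10
incus start migrated-vm

Once you’re in the rescue environment you can mount the VM’s disk and repair the bootloader or initramfs from there.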