Hi, I have been an avid KVM user for many years and have lots of BIOS-formatted virsh VMs (Windows, Ubuntu and CentOS) which I would like to convert over to using LXD.
Stage 1 is working out methods for converting these to UEFI-booting KVM VMs, which I have successfully done with a Windows VM (Windows Server 2019); it now boots correctly under virsh with UEFI and the Q35 machine type. I then converted the qcow2 image to raw:
sudo qemu-img convert -f qcow2 -O raw winUEFI.qcow2 root.img
I then placed the root.img in a newly created LXD VM, overwriting the default root.img.
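Roughly, the LXD side looked something like the commands below. This is only a sketch of my steps; the VM name, pool name and storage path are examples for a dir-backed default pool on the snap install, so adjust them for your own setup.

# Create an empty VM (no image); security.secureboot=false is optional,
# I only set it to rule Secure Boot out as a variable while testing
lxc init winsrv2019 --empty --vm -c security.secureboot=false

# Overwrite the VM's default root disk with the converted image
# (path shown is for a dir-backed pool under the snap; other storage drivers differ)
sudo cp root.img /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/winsrv2019/root.img

lxc start winsrv2019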
I then boot this Windows VM, which appears to start booting (showing the small spinning circle), but then errors with:
INACCESSIBLE BOOT DEVICE
While in virsh, the VM was using VirtIO disk and network, all working fine, so Windows already has these drivers installed.
What could the issue be please?
I did not know that, thank you for the info. Although I thought VirtIO was the fastest non-emulated driver, why use SCSI? Is it slower? (Apologies for my lack of knowledge.)
OK, I have just added a second SCSI HDD to the virsh VM (with a VirtIO SCSI controller), booted the VM and made sure the SCSI drivers were all good. I shut down the VM, changed the boot drive from VirtIO to SCSI, rebooted the VM, and all was good. I then shut down, converted the qcow2 image, copied it to the LXD VM location (as per above), and now the Windows LXD VM boots fine, no issues.
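For anyone following along, the relevant part of the domain XML ended up looking roughly like this (the controller index, device names and disk path are just examples from my setup):

<!-- virtio-scsi controller that the disks attach to -->
<controller type='scsi' index='0' model='virtio-scsi'/>

<!-- boot disk moved from bus='virtio' to bus='scsi' -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/winUEFI.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>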
Thanks for your help.
virtio-scsi in general is far easier for guests to support properly and offers a more traditional PCI layout (a controller with a bunch of drives attached), which was convenient for us. It also means virtio-scsi can handle up to 16383 drives per target and up to 255 targets, whereas virtio-blk is limited to just 28 drives.
It also handles passthrough of host devices much better than virtio-blk does (no translation needed).
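As an illustration of that layout difference, here is a rough sketch of the two disk definitions in libvirt XML (paths and device names are just examples): with virtio-blk each disk is its own PCI device, while with virtio-scsi the disks all hang off a single controller.

<!-- virtio-blk: every disk is a separate PCI device -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/disk1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- virtio-scsi: one PCI controller, many disks attached to it -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/disk2.img'/>
  <target dev='sda' bus='scsi'/>
</disk>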
On the performance front, looking at some recent benchmarks, it seems like it can go either way, and the two are usually within 10% of each other overall.