Windows Virsh / Libvirt VM - convert to LXD VM - INACCESSIBLE BOOT DEVICE

Hi, I have been an avid KVM user for many years and have lots of BIOS-formatted Virsh VMs (Windows, Ubuntu and CentOS) which I would like to convert over to LXD.
Stage 1 is working out a method for converting these into working UEFI KVM VMs, which I have successfully done with a Windows VM (Windows Server 2019); it now boots correctly under Virsh with UEFI and the Q35 machine type. I then converted this qcow2 image to raw:

sudo qemu-img convert -f qcow2 -O raw winUEFI.qcow2 root.img

I then placed the root.img in a newly created VM, overwriting the default root.img. The VM was created with:

lxc init winserver --empty --vm -c security.secureboot=false -c limits.cpu=8 -c limits.memory=8GB
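
To be specific, by "overwriting" I mean copying the raw image over the instance's root disk. The path below is just a sketch of what I did, assuming the LXD snap package and a dir-backed storage pool named default; it will differ with other storage backends:

# Copy the converted raw image over the empty root disk of the new VM
# (path assumes the LXD snap and a dir storage pool called "default").
sudo cp root.img /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/winserver/root.img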

I then boot this Windows VM, which appears to start booting (showing the small spinning circle), but it then errors with:

INACCESSIBLE BOOT DEVICE

Whilst in Virsh, the VM was using VirtIO disk and network, all working fine, so Windows already has these drivers installed.
What could the issue be please?

Kind regards.

I wonder if it's a difference in how the root disk is attached to the VM in LXD compared to previously. We use SCSI in LXD.

You use SCSI in LXD, not VirtIO?

We use virtio-scsi-pci:

Drives are connected to the qemu_scsi.0 bus as type scsi-hd.
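
You can see the effect of this from inside a Linux guest: with virtio-scsi the disks get SCSI-style sdX names rather than the vdX names virtio-blk would give them. A quick check (nothing LXD-specific about the command):

# Inside a Linux LXD VM the root disk shows up as sda (SCSI naming),
# not vda (virtio-blk naming).
lsblk -o NAME,TYPE,SIZE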

I did not know that, thank you for the info. Although, I thought VirtIO was the fastest non-emulated driver, so why use SCSI? Is it slower? (Apologies for my lack of knowledge.)
OK, I have just added a 2nd SCSI HDD to the Virsh VM (with a VirtIO SCSI controller), booted the VM and made sure the SCSI drivers are all good. I then shut the VM down, changed the boot drive from VirtIO to SCSI and rebooted, all good. After that I shut down again, converted the qcow2 image, copied it to the LXD VM location (as per above), and now the Windows LXD VM boots fine, no issues.
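
For reference, this is roughly what the libvirt side looked like; the domain name, image path and target device below are placeholders for my setup, and it assumes a virtio-scsi controller is already defined in the domain XML:

# Create a small throwaway disk and attach it to the existing virtio-scsi controller.
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/scsi-dummy.qcow2 1G
sudo virsh attach-disk win2019 /var/lib/libvirt/images/scsi-dummy.qcow2 sdb \
  --driver qemu --subdriver qcow2 --targetbus scsi --persistent
# Boot Windows once so it binds the vioscsi driver, then shut down,
# detach the dummy disk and switch the boot disk's <target ... bus='virtio'/>
# to bus='scsi' with `virsh edit win2019` before converting the image.
sudo virsh detach-disk win2019 sdb --persistent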
Thanks for your help.

@stgraber why do we use virtio-scsi-pci? Thanks

virtio-scsi in general is far easier for guests to support properly and offers a more traditional PCI layout (a controller with a bunch of drives attached), which was convenient for us. This also means virtio-scsi can handle up to 16383 drives per target and up to 255 targets, whereas virtio-blk is limited to just 28 drives.
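
As a rough illustration of the layout difference, with generic QEMU flags rather than the exact command line LXD generates:

# Toy comparison only; disk0.img is a scratch image created for the example.
qemu-img create -f raw disk0.img 1G

# virtio-scsi: one PCI controller, with drives attached to it as scsi-hd devices.
qemu-system-x86_64 -m 512 -nographic \
  -drive file=disk0.img,if=none,id=d0,format=raw \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=d0,bus=scsi0.0

# virtio-blk: each disk is its own PCI device.
qemu-system-x86_64 -m 512 -nographic \
  -drive file=disk0.img,if=none,id=d0,format=raw \
  -device virtio-blk-pci,drive=d0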

It also handles passthrough of host devices much better than virtio-blk does (no translation needed).

On the performance front, looking at some recent benchmarks, it seems like it can go either way and overall performance is usually within 10% of each other.

That's great, thanks guys.