Running virtual machines with LXD 4.0

https://github.com/lxc/lxd/pull/7600 is the fix for this; it took us a little while to track down as we’re busy presenting at a conference this week. We’ll cherry-pick it to stable immediately after it gets merged, so the fix should hit users in the next 4-5 hours.

It’s working now on stable… like magic. Thanks for the quick response!

Is there a default user for the CentOS 8 VM?
I can’t log in, and the lxd-agent doesn’t start either, so I can’t get a shell.

CentOS 8 recently broke agent support by removing the 9p driver from their kernel, without any consultation or attempt at making it work again…

Unless we find a way out of this soon, we’ll just delete those images completely as they are indeed a bit useless without a working agent…

If you’re using images:centos/8/cloud, you may be able to provision a user inside it through cloud-init by setting user.user-data to a suitable cloud-init config AND attaching a cloud-init:config disk device to the instance.
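
As a rough sketch (the c8 instance name and the centos/changeme credentials are just placeholders for the example), that could look like:

lxc init images:centos/8/cloud c8 --vm
cat <<EOF | lxc config set c8 user.user-data -
#cloud-config
users:
  - name: centos
    plain_text_passwd: 'changeme'
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
EOF
lxc config device add c8 config disk source=cloud-init:config
lxc start c8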

This may be a noob question as I’m new to qemu and virtio and don’t understand how all these drivers work, but I noticed that in other tutorials (without LXD), people specify the drive on the qemu command line like this:

qemu-system-x86_64 -machine type=q35 -accel kvm -cpu host --bios /snap/lxd/current/share/qemu/OVMF_CODE.fd -m 4096 -smp 2 -drive file=./root.img,index=0,media=disk,format=raw,if=virtio

emphasis on if=virtio

but in the LXD config I found this in /var/snap/lxd/common/lxd/logs/ubuntu-test/qemu.conf:

# root drive
[drive "lxd_root"]
file = "/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ubuntu-test/root.img"
format = "raw"
if = "none"
cache = "none"
aio = "native"
discard = "on"
file.locking = "off"

[device "dev-lxd_root"]
driver = "scsi-hd"
bus = "qemu_scsi.0"
channel = "0"
scsi-id = "0"
lun = "1"
drive = "lxd_root"
bootindex = "0"

if = none, and on Windows, under Device Manager, you can see it uses a different driver under Disks (something QEMU instead of Red Hat VirtIO). Is this the way it’s supposed to be? I see above in the config that there is a virtio controller but I don’t really understand if it is getting used.

This is from the stable channel on Ubuntu 18.04, LXD 4.0.2.

That’s the difference between virtio-blk (what if=virtio gives you) and virtio-scsi; LXD uses the latter as it’s generally more performant.
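
For comparison, the config above is roughly what you’d get by hand with plain qemu using the if=none + scsi-hd pattern (just an illustration, not LXD’s exact invocation):

qemu-system-x86_64 -machine type=q35 -accel kvm -cpu host -m 4096 \
  -device virtio-scsi-pci,id=qemu_scsi \
  -drive file=./root.img,format=raw,if=none,id=lxd_root,cache=none,discard=on \
  -device scsi-hd,drive=lxd_root,bus=qemu_scsi.0,bootindex=0

With if=virtio the guest instead sees a virtio-blk device, which is why Windows shows a different disk driver (viostor for virtio-blk, vioscsi for virtio-scsi).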

Hi,
Is there a description of how to convert an existing KVM virtual machine (running under libvirt) to an LXD VM?

First you need to convert the VM to UEFI + GPT, you can do that while keeping it on libvirt.
Depending on your disk layout this may be difficult/impossible though.
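
Very roughly, and assuming a simple single-disk Debian/Ubuntu guest with its disk at /dev/vda (make a backup first, details vary a lot per distro and layout):

sgdisk --mbrtogpt /dev/vda
# then carve out and format a small EFI System Partition (FAT32, type ef00),
# mount it at /boot/efi and switch the bootloader over to UEFI:
apt install grub-efi-amd64
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub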

Once that’s done, you can create an empty LXD VM with lxc init --empty --vm NAME and replace its raw disk with the one from libvirt, and things should just work. Well, unless your distro’s UEFI bootloader isn’t signed, in which case you need to set security.secureboot=false.
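
Concretely, the LXD side could look something like this (paths assume the snap package and a dir-backed default storage pool; migrated-vm and myvm.qcow2 are placeholder names):

lxc init --empty --vm migrated-vm
# convert the libvirt disk to raw, overwriting LXD's empty root.img
qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/myvm.qcow2 \
  /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/migrated-vm/root.img
# only needed if the guest's bootloader isn't signed for Secure Boot
lxc config set migrated-vm security.secureboot false
lxc start migrated-vm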

Thanks for the info. But why this strange requirement (UEFI + GPT)?

We only emulate modern virtio hardware and want a stable, safe and maintained platform on multiple architectures, so we only support Q35 hardware definition and only OVMF for firmware.

UEFI requires GPT so that’s how that part of the requirement comes about.

For official Ubuntu images, cloud-init must be used along with a config drive to seed a default user into the VM and allow console access.

This is not needed for Ubuntu 20.04?

lxc init ubuntu:20.04 ubuntu --vm
lxc config device add ubuntu config disk source=cloud-init:config

This seems to be working fine without all the cloud-configs. Is that true? I can access the VM directly with lxc exec <VM> bash

Yes indeed, it appears the lxd-agent support has made it into official Focal images now.

Hi all, I tried the Windows machine building steps on Ubuntu Server 20.04 and I hit a wall when launching the machine. Maybe it’s related to the snap LXD installation, but it seems to have serious issues with qemu sandboxing.

root@shadow:~# echo -n '-device virtio-vga -vnc :1 -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -
root@shadow:~# lxc start wintest 
Error: Failed to run: /snap/lxd/current/bin/lxd forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/16100/bin/qemu-system-x86_64 -S -name wintest -uuid 0aec4ad7-45f4-4bdb-a0de-a3268471d55a -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/wintest/qemu.conf -pidfile /var/snap/lxd/common/lxd/logs/wintest/qemu.pid -D /var/snap/lxd/common/lxd/logs/wintest/qemu.log -chroot /var/snap/lxd/common/lxd/virtual-machines/wintest -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd -device virtio-vga -vnc :1 -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide: : exit status 1
Try `lxc info --show-log wintest` for more info

And then

root@shadow:~# ls /data/virt/iso/virtio-win-0.1.173.iso 
/data/virt/iso/virtio-win-0.1.173.iso
root@shadow:~# lxc info --show-log wintest
Name: wintest
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/07/28 16:45 UTC
Status: Stopped
Type: virtual-machine
Profiles: default

Log:

qemu-system-x86_64: -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide: Could not open '/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso': No such file or directory

The CD images exist and are accessible by any user on the system. Is there any way to disable the sandboxing feature of qemu, or is there some other issue preventing this from working?

Thanks!

Hi, I’ve done a quick test, copying the two ISO files into the same folder as the virtual machine and setting the configuration parameters to

root@shadow:~# echo -n '-device virtio-vga -vnc 0.0.0.0:5900 -drive file=/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/wintest/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/wintest/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -

and then it seems to boot correctly, so there is an issue with chroot/sandboxing on Ubuntu Server 20.04 with LXD from the snap.

Yes, the snap uses its own mount namespace, so referencing files that exist outside of that mount namespace is not going to work directly. There is a path inside the mount namespace that gets you back to the host’s file system. However, may I ask why you are not using the built-in LXD disk device type to add your ISO drives? E.g. lxc config device add v1 myiso disk source=/path/to/my/iso

@stgraber or @monstermunchkin might have a tip on why the manual cdrom ide devices are needed and how to break out of the snap sandbox.

If you prefix the path with /var/lib/snapd/hostfs it should work fine.

The example used /home, which happens to always be mapped into the snap, which is why it works fine for those following the exact instructions.

Note that this only affects raw.qemu; paths handled by LXD itself know how to translate them as needed.
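
So in this case, keeping the ISOs where they are and prefixing the paths in raw.qemu should be enough, e.g. something like:

echo -n '-device virtio-vga -vnc :1 -drive file=/var/lib/snapd/hostfs/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/var/lib/snapd/hostfs/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -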

Thanks a lot @stgraber, prefixing the paths with /var/lib/snapd/hostfs worked, and I can also confirm it works across filesystems (my /data folder is a mount from a separate drive).

I also managed to export the image after sysprep and launch another VM instance based on that image, but I had to set the -c security.secureboot=false configuration value for it to start (WinSRV2019).

Thanks!

Yeah, security.secureboot=false is going to be required at least until someone gets a WHQL-certified version of the virtio-scsi driver…
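
For reference, on an existing VM that’s just:

lxc config set wintest security.secureboot false

or at creation time, as above, lxc init <image> <name> --vm -c security.secureboot=false.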

Type exit at this shell, then you can choose the Boot Manager.