Is there a default user for Centos8 VM?
I can’t log in, and the lxd-agent doesn’t start either, so I can’t get a shell.
CentOS 8 recently broke the agent support by removing the 9p kernel driver from their kernel without any consultation or attempt at making it work again…
Unless we find a way out of this soon, we’ll just delete those images completely as they are indeed a bit useless without a working agent…
If you’re using images:centos/8/cloud, you may be able to provision a user inside it through cloud-init by setting user.user-data to a suitable cloud-init config AND attaching a cloud-init:config disk device to the instance.
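A minimal sketch of that cloud-init approach (the instance name, user name and password below are hypothetical placeholders):

```shell
# Create the CentOS 8 cloud VM (name "c8" is an example)
lxc init images:centos/8/cloud c8 --vm

# Seed a default user via cloud-init; user/password are placeholders
lxc config set c8 user.user-data - <<'EOF'
#cloud-config
users:
  - name: demo
    plain_text_passwd: changeme
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
EOF

# Attach the config drive so cloud-init can pick up the seed data
lxc config device add c8 config disk source=cloud-init:config

lxc start c8
```

You should then be able to log in as that user on the VM console (lxc console c8).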
This may be a noob question as I’m new to qemu and virtio and don’t understand how all these drivers work, but I noticed that in other tutorials, without LXD, people specify it on the command line like this:
qemu-system-x86_64 -machine type=q35 -accel kvm -cpu host --bios /snap/lxd/current/share/qemu/OVMF_CODE.fd -m 4096 -smp 2 -drive file=./root.img,index=0,media=disk,format=raw,if=virtio
emphasis on if=virtio
but in lxd configs I found that,
file = "/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/ubuntu-test/root.img"
format = "raw"
if = "none"
cache = "none"
aio = "native"
discard = "on"
file.locking = "off"
driver = "scsi-hd"
bus = "qemu_scsi.0"
channel = "0"
scsi-id = "0"
lun = "1"
drive = "lxd_root"
bootindex = "0"
if = none, and in Windows under Device Manager you can see it uses a different driver under Disks (something QEMU instead of Red Hat VirtIO). Is this the way it’s supposed to be? I see there is a virtio controller in the config above, but I don’t really understand whether it is getting used.
This is from the stable channel on Ubuntu 18, LXD 4.0.2.
That’s the difference between virtio-disk and virtio-scsi, LXD uses the latter as it’s generally more performant.
Is there a description on how to convert an existing kvm virtual machine (running under libvirt) to lxd VM?
First you need to convert the VM to UEFI + GPT, you can do that while keeping it on libvirt.
Depending on your disk layout this may be difficult/impossible though.
Once that’s done, you can create an empty LXD VM with lxc init --empty --vm NAME and replace its raw disk with the one from libvirt, and things should just work. Well, unless your distro’s UEFI bootloader isn’t signed, in which case you need to set security.secureboot=false.
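A rough sketch of that disk-swap step, assuming the source disk is qcow2 under libvirt and the LXD pool uses the dir storage driver (every path and name here is an assumption; adjust for your setup):

```shell
# Create an empty LXD VM to receive the disk
lxc init --empty --vm migrated-vm

# Convert the libvirt disk to raw format
qemu-img convert -f qcow2 -O raw \
    /var/lib/libvirt/images/guest.qcow2 /tmp/guest.raw

# Replace the VM's root disk (dir-backed pool path shown; other
# storage drivers keep the disk elsewhere, e.g. as a ZFS zvol)
cp /tmp/guest.raw \
    /var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/migrated-vm/root.img

# If the distro's UEFI bootloader isn't signed, disable secure boot
lxc config set migrated-vm security.secureboot=false

lxc start migrated-vm
```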
Thanks for the info. But why this strange requirement (UEFI + GPT)?
We only emulate modern virtio hardware and want a stable, safe and maintained platform on multiple architectures, so we only support Q35 hardware definition and only OVMF for firmware.
UEFI requires GPT so that’s how that part of the requirement comes about.
For official Ubuntu images, cloud-init must be used along with a config drive to seed a default user into the VM and allow console access.
This is not needed for Ubuntu 20.04?
lxc init ubuntu:20.04 ubuntu --vm
lxc config device add ubuntu config disk source=cloud-init:config
This seems to be working fine without all the cloud-configs. Is that true? I can access the VM directly with
lxc exec <VM> bash
Yes indeed, it appears the lxd-agent support has made it into official Focal images now.
Hi all, I tried the Windows machine building steps on Ubuntu Server 20.04 and I hit rock bottom when launching the machine. Maybe it’s related to the snap LXD installation, but it seems to have serious issues with qemu sandboxing.
root@shadow:~# echo -n '-device virtio-vga -vnc :1 -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -
root@shadow:~# lxc start wintest
Error: Failed to run: /snap/lxd/current/bin/lxd forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/16100/bin/qemu-system-x86_64 -S -name wintest -uuid 0aec4ad7-45f4-4bdb-a0de-a3268471d55a -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/wintest/qemu.conf -pidfile /var/snap/lxd/common/lxd/logs/wintest/qemu.pid -D /var/snap/lxd/common/lxd/logs/wintest/qemu.log -chroot /var/snap/lxd/common/lxd/virtual-machines/wintest -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd -device virtio-vga -vnc :1 -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide: : exit status 1
Try `lxc info --show-log wintest` for more info
root@shadow:~# ls /data/virt/iso/virtio-win-0.1.173.iso
/data/virt/iso/virtio-win-0.1.173.iso
root@shadow:~# lxc info --show-log wintest
Name: wintest
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/07/28 16:45 UTC
Status: Stopped
Type: virtual-machine
Profiles: default

Log:
qemu-system-x86_64: -drive file=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide: Could not open '/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso': No such file or directory
The CD images exist and are accessible by any user on the system. Is there any way to disable the sandboxing feature of qemu, or is there any other issue preventing this from working?
Hi, I’ve done a quick test copying the two ISO files in the same folder as the virtual machine and setting the configuration parameters to
root@shadow:~# echo -n '-device virtio-vga -vnc 0.0.0.0:5900 -drive file=/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/wintest/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/var/snap/lxd/common/lxd/storage-pools/default/virtual-machines/wintest/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -
and then it seems to be booting correctly, so there is an issue with chroot/sandboxing on Ubuntu Server 20.04 with LXD from snap.
Yes, the snap uses its own mount namespace, so referencing files that exist outside of that mount namespace is not going to work directly. There is a path inside the mount namespace that gets you back to the host’s file system. However, may I ask why you are not using the built-in LXD disk device type to add your ISO drives? E.g.
lxc config device add v1 myiso disk source=/path/to/my/iso?
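For the ISOs from this thread that could look roughly like this (boot.priority is an assumption, available on recent LXD 4.x, to make the installer CD boot first):

```shell
# Attach both ISOs as LXD disk devices; LXD translates host paths
# into the snap's mount namespace itself, so no hostfs prefix is needed
lxc config device add wintest winiso disk \
    source=/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso \
    boot.priority=10
lxc config device add wintest drivers disk \
    source=/data/virt/iso/virtio-win-0.1.173.iso
```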
If you prefix the path with /var/lib/snapd/hostfs it should work fine.
The example used /home, which happens to always be mapped into the snap; that’s why it works fine for those following the exact instructions.
Note that this only affects raw.qemu; paths handled by LXD itself are translated as needed.
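Applied to the failing command from earlier in the thread, the hostfs-prefixed version would look like this (same ISOs, only the paths change):

```shell
echo -n '-device virtio-vga -vnc :1 -drive file=/var/lib/snapd/hostfs/data/virt/iso/WindowsServer2019/17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_en-us_1.iso,index=0,media=cdrom,if=ide -drive file=/var/lib/snapd/hostfs/data/virt/iso/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide' | lxc config set wintest raw.qemu -
```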
Thanks a lot, @stgraber, prefixing the paths with /var/lib/snapd/hostfs worked, and I can also confirm it worked across filesystems (my /data folder is a mount from a separate drive).
I also managed to export the image after sysprep and launch another VM instance based on that image, but I had to set the -c security.secureboot=false configuration value for it to start (WinSRV2019).
security.secureboot=false is going to be required at least until someone gets a WHQL certified version of the virtio-scsi driver…
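For an existing instance this is just a one-liner (instance name is a placeholder; the change takes effect on the next boot):

```shell
lxc config set wintest security.secureboot=false
```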
Type exit at this shell, then you can choose the Boot Manager.
I tried Win10 and it works - only there is no network card.
+-------+---------+------+------+-----------------+-----------+
| NAME  | STATE   | IPV4 | IPV6 | TYPE            | SNAPSHOTS |
+-------+---------+------+------+-----------------+-----------+
| win10 | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+-------+---------+------+------+-----------------+-----------+
config:
  limits.cpu: "8"
  limits.memory: 10GB
  raw.qemu: -device virtio-vga -vnc :1 -drive file=/home/mgaerber/Downloads/Win10_2004_English_x64.iso,index=0,media=cdrom,if=ide -drive file=/home/mgaerber/Downloads/virtio-win-0.1.173.iso,index=1,media=cdrom,if=ide
  security.secureboot: "false"
  volatile.eth0.host_name: tap02192319
  volatile.eth0.hwaddr: 00:16:3e:3a:7e:4a
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: 1fda3fd0-6e71-4575-99ad-0229a4800f76
devices:
  root:
    path: /
    pool: default
    size: 60GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
Normally your default profile will have a network card attached to some kind of bridge.
Can you show lxc config show --expanded win10? This may then show it there.
On the Windows side, the network card will be using virtio-net but that’s supported by the drivers on the ISO. If you haven’t already, make sure that all drivers are installed in the VM, this will make a big difference in the whole experience.
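If the expanded config somehow shows no nic device at all, a bridged NIC can be attached manually; a sketch, assuming LXD’s usual default bridge lxdbr0 (the device and bridge names are assumptions):

```shell
# Attach a NIC bridged onto LXD's default bridge
lxc config device add win10 eth0 nic nictype=bridged parent=lxdbr0
```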