Running virtual machines with LXD 4.0

OK!

distrobuilder is working like magic.

I’ve updated the instructions to reference the work we did on distrobuilder and on LXD booting the resulting ISO, no more need for workarounds.


Are there any pre-built/community images of win10 around which would spare me the effort of creating it as part of a small test?

I was hoping to see if I could use this with gitlab-runner and lxd executors…

No, unfortunately it’s not legal to re-distribute a Windows ISO image (or VM image), which is why we can only provide the tool that generates it for yourself and can’t make the result available as we do for our other images.



Is there, or will there be, a new tutorial for LXD version 4.0, like the one you have for LXD version 2.0?

I’m not sure what you mean. LXD 2.0 didn’t support VMs at all, so there’s no older tutorial to update; this post covers VM support as it exists in LXD 4.0.

Is it possible to get QEMU to start with the qxl VGA device instead of the virtio-gpu?

If I add
raw.qemu: -device qxl
to the config, then two VGA devices are present on the machine.

I have a Windows 10 VM running and am remoting to it from a Windows workstation with
lxc console u1:vm-2 --type=vga

Two windows appear, one for each device.

Adding
raw.qemu: -vga qxl

causes it not to boot, as there is a conflict with the (I assume) default qemu.conf generated by LXD:

qemu-system-x86_64:/var/snap/lxd/common/lxd/logs/vm-2/qemu.conf:79: PCI: slot 1 function 0 not available for pcie-root-port, in use by qxl-vga

Many Thanks

You can’t replace the default VGA adapter (virtio-vga), but you may indeed be able to add another one as QXL. Though you’ll most likely need to specify where on the PCIe bus it needs to sit to avoid getting into conflicts like you’re showing above.

Defining a new PCI bus dedicated for that device is the most likely way to avoid such issues.
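A hedged sketch of what that could look like as a raw.qemu value: hang the extra QXL adapter off its own conventional PCI bridge so it can’t collide with the slots LXD already assigns. The id and addr values here are assumptions; pick ones not already present in the generated qemu.conf (e.g. /var/snap/lxd/common/lxd/logs/vm-2/qemu.conf):

```
# Hypothetical example: dedicated PCI bridge for a secondary QXL adapter.
# addr values are placeholders; avoid slots already used by LXD.
raw.qemu: -device pcie-pci-bridge,id=qxl-bus,addr=0x10 -device qxl,bus=qxl-bus,addr=0x01
```

You’d set this with lxc config set vm-2 raw.qemu '...' and then restart the VM.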

Note that we’ve seen a fair bit of activity around a native virtio-gpu driver for Windows, so our hope is that in the near future this won’t be an issue anymore and we’ll get more than a plain VGA driver on Windows.

Thanks for the reply

I found the virtio-gpu binary driver here:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.204-1/virtio-win-0.1.204.iso

Windows installs the driver; the responsiveness is similar to the default driver. In comparison, a similar Proxmox machine running the QXL driver with remote-viewer is quite usable.

Is the lagginess an issue with virtio-gpu over SPICE, or with the display being routed through the lxc console command?

No complaints, just for information. I’ll use it with rdp.

RDP is always going to be the best experience; the QXL or virtio-gpu console is really mostly meant for installation. I need to test those new virtio-win drivers — last I played with them, you’d only get basic VGA, but maybe that has now changed.

Hi,
I recompiled LXD with qxl-vga instead of virtio-vga.
Then I start the Windows VM without a console and connect to the SPICE socket with:

spicy --uri="spice+unix:///var/log/lxd/win10/qemu.spice"

I have only one display and everything seems to be working fine: clipboard, mouse, etc.

Hex-editing the lxd binary also worked for me in a test :slightly_smiling_face:

Cheers,

massimo

Thanks for the tutorial!

I had two problems when following the instructions:

1) Secure Boot

I needed to set security.secureboot=false to get my VM to boot

When booting the VM, the console shows

BdsDxe: loading Boot0004 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1)
BdsDxe: failed to load Boot0004 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1): Access Denied
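The "Access Denied" above is the firmware rejecting the boot entry under Secure Boot, and the workaround is a single config call. "win10" here is a placeholder instance name:

```
# Disable UEFI Secure Boot on the VM (placeholder name "win10"),
# then start it again.
lxc config set win10 security.secureboot=false
lxc start win10
```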

2) Apt Repos for Arm64

The arm64 apt repos are not available at the http://us.archive.ubuntu.com/ubuntu/ address given in the instructions — I needed to find a mirror that hosts "ubuntu-ports".
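As a hedged example, a sources.list entry pointing at the "ubuntu-ports" tree might look like this (ports.ubuntu.com carries arm64 packages; the release name is a placeholder you’d match to your target release):

```
# Placeholder release name "focal"; substitute the release you're installing.
deb http://ports.ubuntu.com/ubuntu-ports focal main restricted universe multiverse
```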

Details

Host system: RPi 4, 8GB RAM, Ubuntu 21.04
Host LXD: installed via snap
Storage: ZFS with one pool on an external USB block device, 512GB
VM: Ubuntu 18.04 (but other versions I tried showed the same symptoms)

Yeah, those two things are indeed specific to arm64, though I’ve recently seen at least our own newer images starting to behave with Secure Boot on ARM.

How can I connect to the host’s public IP address externally, from Windows or Ubuntu 20.04 at home? And is it possible to specify a port? I can’t open virt-viewer from the terminal with lxc console win10 --type=vga remotely.

The way you’d normally do that is by configuring LXD to listen on the network:

  • lxc config set core.https_address :8443
  • lxc config set core.trust_password some-password

Then on your Windows or Ubuntu 20.04 system, install LXD and remote-viewer.
For Windows that should be: choco install lxc virt-viewer
For Ubuntu: snap install lxd && apt install virt-viewer

Then add your remote server to LXD with:

  • lxc remote add my-server IP-ADDRESS
  • Accept the certificate and enter the trust password
  • lxc remote switch my-server (to have all commands sent to the server rather than local)

At which point you can issue lxc console win10 --type=vga from your Windows or Ubuntu system and that will talk to the remote LXD using its API and get you access to the VGA console.
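Putting those steps together, a minimal session might look like this sketch (the IP address, password, and remote name are placeholders):

```
# On the LXD server: expose the API on port 8443 and set a trust password.
lxc config set core.https_address :8443
lxc config set core.trust_password some-password

# On the client (with lxd and virt-viewer installed):
lxc remote add my-server 192.0.2.10   # accept the certificate, enter the password
lxc remote switch my-server           # send all lxc commands to the remote
lxc console win10 --type=vga          # opens remote-viewer on the VGA console
```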


What is the default limits.memory setting for VMs?

It seems to be 1GB (because that’s what my VM seems to have available to it, and I haven’t configured it explicitly).

But the documentation suggests that the default setting is “-”, which means “all”, right? (Instances | LXD)

“all” seems an unlikely setting for a VM, so I’m guessing the documentation is wrong, but just wanted to check.

Looks like we need to improve the doc a tiny bit to cover the defaults for virtual machines.

For containers, it’s correct that no limit means you get to see whatever the host has to offer. For VMs that’s not something we can do, so they default to 1 vCPU and 1GB of RAM.
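To raise those defaults, a hedged example ("win10" is a placeholder VM name; depending on your LXD version, applying these to a running VM may require a restart):

```
# limits.cpu and limits.memory override the 1 vCPU / 1GB VM defaults.
lxc config set win10 limits.cpu 4
lxc config set win10 limits.memory 8GiB
lxc restart win10
```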