Running virtual machines with LXD 4.0

Ah yes, apologies.

Although the DB changes from 4.4 (which OP is running) to 4.5 mean the downgrade limitation will still exist on edge or 4.6.

Hi there, thanks for all your hard work, this project is looking great! I’ve spent some time setting up a VM and am trying to launch its GUI over VGA with SPICE following the instructions, but I can’t get past the password auth, haha. Apologies if this should be clear, but I didn’t use the official Ubuntu image, as I thought it’d make it easier to skip the init procedure described in this thread. I’m using the Ubuntu 18.04 VM image from the community images: remote, and when running lxc console and pasting the password given above for the username ubuntu, it doesn’t let me in. [LXD 4.5]

Our images never have accounts set up with passwords.
You have two ways to set a password:

  • Use a cloud variant of the image and pass it some cloud-init user data
  • With any image, use lxc exec NAME -- passwd ubuntu to set a password on the ubuntu user (or any other user you may want to give login access to)
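The two options above can be sketched as follows (a minimal sketch; the image alias, instance name, and password are placeholders, not values from this thread):

```shell
# Option 1: cloud variant of the image plus cloud-init user data.
# Assumptions: image alias, instance name and password are hypothetical.
cat > user-data.yaml <<'EOF'
#cloud-config
chpasswd:
  expire: false
  list: |
    ubuntu:MyS3cretPass
ssh_pwauth: true
EOF
lxc launch images:ubuntu/18.04/cloud v1 --vm \
    --config user.user-data="$(cat user-data.yaml)"

# Option 2: with any image, set the password directly on the running VM:
lxc exec v1 -- passwd ubuntu
```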

I am trying to install Zentyal, which needs its ISO as no pre-built images are available. I cannot get the BIOS menu to show; I tried many, many times, my “enter” press followed by very quick “ESC” hammering. You know, like those good old Atari games… but I could get into the BIOS only once. Then I accidentally selected the wrong boot device…

Setting the drive boot order in the config, like libvirt does, is much needed.

I’ll go the libvirt road until something better is made possible. Thanks for your hard work and support.

LXD supports booting VMs from an ISO image, by attaching a custom disk, and you will likely also need to disable secure boot:

By default the root disk has a boot priority of 1, so giving the CD-ROM device a higher priority will cause it to be tried first. However, in the case of an empty VM the root disk boot will fail and the CD will be tried next anyway.

lxc init v1 --vm --empty
lxc config set v1 security.secureboot=false
lxc config device add v1 cdrom disk source=/home/user/os.iso boot.priority=2

Note, you may also need to modify the cdrom’s kernel boot parameters to output to the console as some systems only output to the graphics card initially.

E.g. for Alpine you would use something like this:
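As a hedged, generic sketch (the exact parameters depend on the image and its bootloader, so treat these values as assumptions, not the precise Alpine ones):

```shell
# Hypothetical sketch: at the GRUB/syslinux boot prompt, edit the kernel
# command line and append serial console output alongside the VGA console:
#   console=tty0 console=ttyS0,115200n8
# With that, installer output also reaches the text console shown by
# `lxc console NAME`, not just the emulated graphics card.
```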

Well, you can also use lxc console --type=vga NAME these days to get the VGA console.


That was what I was looking for, thanks!

Well, the server is headless, so it won’t work, right?

LXD automatically uses either spicy or remote-viewer when present.
As neither was found, the raw SPICE socket is available at:
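For a headless server, one approach (a sketch; the remote name, address, and instance name are hypothetical) is to add the server as a remote on a workstation that has a SPICE client installed, then open the console from there:

```shell
# On the workstation (assumption: server address and names are placeholders):
lxc remote add myserver https://server.example.com:8443
lxc console myserver:v1 --type=vga   # launches spicy/remote-viewer locally
```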

As such, I get a completely black screen within the VM console. With UEFI enabled, I get nothing. But Zentyal has a text-based installer; shouldn’t it show something?

The logs are empty.

OK, I had to add the server as a remote on my workstation to get VGA, and disabling UEFI dropped me to a GRUB command line; with UEFI enabled it works. It now complains about the installation CD-ROM not mounting, even though the installer itself launched from the mounted ISO…

Just some feedback: my LXD has been updated to version 4.6 and now the problems with creation and copy of VMs are gone. So the fix seems to work well! :+1:


Do you have the same issue installing Ubuntu 18.04, which Zentyal is apparently based on?

Hi, I created a Windows Server 2019 VM based on this document with a memory limit of 8GB. When the virtual machine is in the running state, it reserves the complete physical memory allocated to the VM from the host, instead of only using what it needs. I installed the Balloon driver and Balloon service inside the VM, but it still reserves the memory. Does anyone have the same situation?
Is there any way to allow the VM to use only the memory it needs?
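For reference, the memory limit itself is set via limits.memory; whether the host commits the full amount up front depends on the guest’s balloon driver (a sketch; the instance name is hypothetical):

```shell
# Set the hard memory limit (assumption: instance named win2019):
lxc config set win2019 limits.memory 8GB

# With the virtio balloon driver working inside the guest, LXD can
# lower the limit on a running VM by inflating the balloon:
lxc config set win2019 limits.memory 6GB
```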

I just started running into what appears to be this same issue.

I’m using SNAP LXD v4.6 on Ubuntu 20.04.1

$ lxd --version

Tom - when you said

Although the DB changes from 4.4 (which OP is running) to 4.5 mean the downgrade limitation will still exist on edge or 4.6.

To make the above clear to me…
does this mean the patch/fix is already in LXD v4.6 or not?

The patch is in v4.6; please create a new thread with the details of the issue you’re seeing, thanks.

The command above left out the container name.

It should be…

lxc config set CN_NAME raw.apparmor "/home/** rwk,"

I am trying to launch a Windows 10 VM with LXD 4.6 (snap package).
I am following the instructions shown above, and made sure to

$ lxc config device override win10 root size=25GB
Device root overridden for win10

When I later perform the installation of Windows 10 and select where to install, I am presented with a screen saying that the total size for Drive 0 Unallocated Space is 9.3 GB.
ZFS reports that there is much more free space (25 GB should have been OK).
Does the 9.3 GB refer to contiguous space on the physical drive?
Is the 9.3 GB some remnant configuration from previous attempts to create a VM?
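One way to rule out a stale override is to check what LXD thinks the root device size is (a sketch using the instance name from the post above):

```shell
# Inspect the effective root disk configuration:
lxc config device get win10 root size    # expected to print 25GB
lxc config show win10 --expanded         # full view, incl. inherited devices
```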

I’ve also noticed something wrong with resizes yesterday, I’ll take a look.

@stgraber I’ve got a fix for this here:


Has anyone tried to run a Windows VM on Ceph?

Windows10 LXD 4.0.3

I tried to run Windows 10 on Ceph, and performance seems inadequately slow; everything becomes sluggish compared to local storage, even simple file manager operations like browsing folders. I tried to google it and it looks like I’m not alone.

Maybe there are some configuration options to boost performance?
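One knob worth checking (an assumption on my part, not advice from this thread) is the librbd client cache on the LXD host, a common tuning point for slow VM disk I/O on Ceph; the values below are purely illustrative:

```shell
# Hypothetical sketch: tuning the RBD client cache in /etc/ceph/ceph.conf
# on the LXD host (values illustrative, not recommendations):
#   [client]
#   rbd cache = true
#   rbd cache size = 67108864        # 64 MiB
#   rbd cache max dirty = 50331648   # 48 MiB
```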

that indicates the LXD VMs utilize KVM. What I do not quite get about the motivation/benefit of this new feature from browsing through this topic is:

  1. Is this an attempt to be a replacement for the ‘regular’ KVM/QEMU tools (like the virsh CLI, Cockpit, virtual-machine manager)? And if so, why do you feel the need to replace those?
  2. What is the difference between an LXD container and an LXD VM (folders on the host vs. an image file on the host, and likewise stricter separation and potentially more security)?