Running virtual machines with LXD 4.0

@tomp I have removed the mount option compress=lzo and indeed the VM from images:ubuntu/focal starts fine now:

$ lxc launch images:ubuntu/focal ubuntu1 --vm --target host1 && lxc console ubuntu1
Creating ubuntu1
Starting ubuntu1                            
To detach from the console, press: <ctrl>+a q
BdsDxe: loading Boot0001 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1)
BdsDxe: starting Boot0001 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1)
error: file `/boot/' not found.
error: no such device: /.disk/info.
error: no such device: /.disk/mini-info.

[    0.516342] Initramfs unpacking failed: Decoding failed

Ubuntu 20.04.1 LTS ubuntu1 ttyS0

ubuntu1 login:  

Who would have thought that could make a difference!

However the problem with the snapshot + copy still occurs (and the console output for ubuntu:focal looks similar, including the root device error). So I am looking forward to your patch!

You should be able to try the patch now if you switch to the snap edge channel:

snap refresh lxd --channel=latest/edge

Note however that due to DB schema changes between v4.4 and v4.5 you will not be able to downgrade back to v4.4 (although v4.5 is due out soon anyway).

Remember to switch the channel back to latest/stable once v4.5 is released though.

Actually 4.5 is current with 4.6 due out soon.

Ah yes, apologies.

Although the DB changes from 4.4 (which the OP is running) to 4.5 mean the downgrade limitation will still exist on edge or 4.6.

Hi there, thanks for all your hard work, this project is looking great! I’ve spent some time setting up a VM and am trying to launch it as a GUI over VGA with SPICE following the instructions, but I can’t get past the password auth, haha. Apologies if this should be clear, but I didn’t use the official Ubuntu image, as I thought it’d make it easier to skip the init procedure described in this thread. I’m using the Ubuntu 18.04 VM image from the community images: remote, and when running lxc console and pasting the password given above for the username ubuntu, it doesn’t let me in. [LXD 4.5]

Our images never have accounts set up with passwords.
You have two ways to set passwords up:

  • Use a cloud variant of the image and pass it some cloud-init user data
  • With any image, use lxc exec NAME -- passwd ubuntu to set a password on the ubuntu user (or any other user you may want to get login access to)
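For the cloud-init route, a minimal sketch might look like this (the instance name `v1`, the `images:ubuntu/18.04/cloud` alias, and the plain-text password are illustrative assumptions; use a hashed password in practice):

```shell
# Launch a cloud-variant image and pass cloud-init user data that sets a
# password on the default "ubuntu" user and enables password login.
lxc launch images:ubuntu/18.04/cloud v1 --vm --config=user.user-data='#cloud-config
ssh_pwauth: true
chpasswd:
  expire: false
  list:
    - ubuntu:MySecretPassword'
```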

I am trying to install Zentyal, which needs its ISO since no pre-built images are available. I cannot get the BIOS menu to show: I tried many, many times, pressing “enter” followed by a very quick “ESC” hammering, like in those good old Atari games, but I could get into the BIOS only once. Then I accidentally selected the wrong boot device.

Setting the drives’ boot order in the config, like libvirt allows, is much needed.

I’ll go the libvirt road until something better is made possible. Thanks for your hard work and support.

LXD supports booting VMs from an ISO image, by attaching a custom disk, and you will likely also need to disable secure boot:

By default the root disk has a boot priority of 1, so giving the CD-ROM device a higher boot.priority will cause it to be tried first. However, in the case of an empty VM the root disk boot will fail and the CD will be tried next anyway.

lxc init v1 --vm --empty
lxc config set v1 security.secureboot=false
lxc config device add v1 cdrom disk source=/home/user/os.iso boot.priority=2
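Once the installer has finished, you would typically detach the ISO so that subsequent boots come straight from the root disk (a sketch, using the same `v1` name as above):

```shell
# Force-stop the VM, detach the installation media, then boot normally
lxc stop -f v1
lxc config device remove v1 cdrom
lxc start v1
```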

Note, you may also need to modify the cdrom’s kernel boot parameters to output to the console as some systems only output to the graphics card initially.

E.g. for Alpine you would use something like this: https://wiki.alpinelinux.org/wiki/Enable_Serial_Console_on_Boot
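On Debian/Ubuntu-based guests the equivalent is adding a serial console to the kernel command line; a sketch (run inside the guest, and assumes GRUB is the bootloader):

```shell
# Inside the guest: send kernel output to the first serial port so it
# appears in `lxc console`, in addition to the graphics console.
# (Appending a second GRUB_CMDLINE_LINUX_DEFAULT line works because the
# last assignment wins; edit the existing line for a cleaner result.)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200"' | sudo tee -a /etc/default/grub
sudo update-grub
```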

Well, you can also use lxc console --type=vga NAME these days to get the VGA console.


That was what I was looking for, thanks!

Well, the server is headless, so it won’t work, right?

LXD automatically uses either spicy or remote-viewer when present.
As neither could be found, the raw SPICE socket can be found at:
  /root/snap/lxd/16922/.config/lxc/sockets/615376158.spice

As such I get a full black screen within the console. With UEFI enabled, I get nothing. But Zentyal has a text-based installer. Shouldn’t it show something?

The logs are empty.

OK, I had to add the server as a remote to my workstation to get VGA. Disabling UEFI dropped me to a GRUB command line; with UEFI enabled it works. The installer now complains that the installation CD-ROM is not mounting, even though the installer itself launched from the mounted ISO.


Just some feedback: my LXD has been updated to version 4.6 and now the problems with creation and copying of VMs are gone. So the fix seems to work well! :+1:


Do you have the same issue with installing ubuntu 18.04, that Zentyal is apparently based on?

Hi, I created a Windows Server 2019 VM based on this document with a memory limit of 8GB. When the virtual machine is running it reserves the complete physical memory allocated to it from the host, rather than only using what it needs. I installed the balloon driver and balloon service inside the VM, but it still reserves all the memory. Does anyone have the same situation?
Is there any way to allow the VM to use only the memory it needs?
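For reference, the 8GB cap described above corresponds to LXD’s limits.memory key (the instance name `win2019` is an assumption here):

```shell
# Cap the VM's memory; for VMs this sizes the RAM visible to the guest
lxc config set win2019 limits.memory=8GB
```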

@tomp
I just started running into what appears to be this same issue.

I’m using the LXD 4.6 snap on Ubuntu 20.04.1

$ lxd --version
4.6

Tom, when you said:

Although the DB changes from 4.4 (which the OP is running) to 4.5 mean the downgrade limitation will still exist on edge or 4.6.

does this mean the patch/fix is already in LXD v4.6 or not?

The patch is in v4.6. Please can you create a new thread with the details of the issue you are seeing? Thanks.

The command above left out the container name.

It should be


lxc config set CN_NAME raw.apparmor "/home/** rwk,"

I am trying to launch a Windows 10 VM with LXD 4.6 (snap package).
I am following the instructions shown above, and made sure to

$ lxc config device override win10 root size=25GB
Device root overridden for win10
$

When I later get to the point in the Windows 10 installation where I select where to install, I am presented with a screen saying the total size for Drive 0 Unallocated Space is 9.3GB.
ZFS reports that there is much more free space (25GB should have been OK).
Does the 9.3GB refer to contiguous space on the physical drive?
Is the 9.3GB some remnant configuration from previous attempts to create a VM?
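As a first step when debugging this, it can help to confirm what size LXD actually applied to the root disk (a sketch; `win10` matches the name used above):

```shell
# Show the fully expanded instance config, including inherited devices
lxc config show win10 --expanded
# Or query just the root device's size key
lxc config device get win10 root size
```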

I’ve also noticed something wrong with resizes yesterday, I’ll take a look.