2nd system upgraded from Ubuntu 20.04 with working LXD to Ubuntu 22.04 - LXD again not working

What is the output of sudo update-grub?

I did an update to 22.04 from 20.04 today and everything is running perfectly. I am not seeing this bug.

Which “bug”? There were a couple discussed in this thread.

I am seeing absolutely no problems.


Hi Brian, I had a very similar situation, and the culprit was, surprisingly, umask. I wonder if this is also the case for you.

FWIW, after upgrading my Ubuntu 20.04 servers to 22.04, they all have the same issue when I try to create a CentOS 7 container:

lxc launch images:centos/7 centtie-7

Gives me error:

Error: The image used by this instance requires a CGroupV1 host system
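Before touching grub, you can confirm whether the host is actually on the unified (v2) hierarchy. A minimal sketch, assuming GNU coreutils `stat` (the filesystem type of /sys/fs/cgroup is `cgroup2fs` on a pure-v2 host):

```shell
#!/bin/sh
# Print which cgroup hierarchy this host has mounted.
# cgroup2fs => unified (v2) hierarchy, which centos/7 images cannot use.
fstype=$(stat -fc %T /sys/fs/cgroup)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroup v2 (unified) - centos/7 will refuse to start"
else
    echo "cgroup v1 or hybrid ($fstype)"
fi
```

If this prints the v2 line, the grub change below is what should flip it back.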

I’ve tried the various suggestions of changing the umask detailed here

I had also tried changing this line in my /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=false"

and then doing:

sudo grub-mkconfig -o /boot/grub/grub.cfg

and then rebooting the host.

None of those helped. Am I missing a step somewhere? I have no issue creating this container on an Ubuntu 20.04 host, and I can create the latest Ubuntu and Debian containers fine. I haven’t tried really old versions besides CentOS 7 yet.

I’m running lxd version 5.8 on all the servers.

On my ARM host, uname -a shows:

5.15.0-1026-aws 2022 aarch64 aarch64 aarch64 GNU/Linux

On one of my x64 hosts with the same issue:

5.15.0-50-generic 2022 x86_64 x86_64 x86_64 GNU/Linux

Forgot to mention, I also tried:

sudo update-grub

and the output is:

Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/40-force-partuuid.cfg'
Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file …
GRUB_FORCE_PARTUUID is set, will attempt initrdless boot
Found linux image: /boot/vmlinuz-5.15.0-1026-aws
Found initrd image: /boot/initrd.img-5.15.0-1026-aws
Found linux image: /boot/vmlinuz-5.15.0-1022-aws
Found initrd image: /boot/initrd.img-5.15.0-1022-aws
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings …

The other hosts having the issue are all physical machines; this is the only one that is a virtual machine. They all return similar output.

What does cat /proc/cmdline show?

Here are the results of

cat /proc/cmdline

One of my x64 physical hosts with the issue has:

BOOT_IMAGE=/vmlinuz-5.15.0-50-generic root=/dev/mapper/lvm-root ro console=ttyS1,115200n8 quiet

Another physical host I recently upgraded, with the same issue, has:

BOOT_IMAGE=/vmlinuz-5.15.0-48-generic root=/dev/mapper/lvm-root ro quiet splash console=ttyS1,115200n8 vt.handoff=7

The Amazon ARM VM with the same issue has:

BOOT_IMAGE=/boot/vmlinuz-5.15.0-1026-aws root=PARTUUID=5141fba5-5f1f-4a10-b95e-5ddeff69ef83 ro console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1

So I do this:

Edit /etc/default/grub and ensure that GRUB_CMDLINE_LINUX contains systemd.unified_cgroup_hierarchy=false:

GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false"
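If you'd rather script the edit, here is a hedged sed sketch run against a scratch copy first (grub.test is a hypothetical file standing in for /etc/default/grub; verify the result before touching the real config):

```shell
#!/bin/sh
# Scratch copy standing in for /etc/default/grub (hypothetical path).
printf 'GRUB_CMDLINE_LINUX=""\n' > grub.test
# Insert the flag right after the opening quote of GRUB_CMDLINE_LINUX;
# '&' in the replacement re-emits the matched text.
sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=false/' grub.test
cat grub.test
# prints: GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false"
rm -f grub.test
```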
Then run:

sudo update-grub

Then reboot, and you should see it in the kernel cmdline:

cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-5.15.0-56-generic root=UUID=3710b4e6-c6e9-4675-a1b2-53f524dce111 ro systemd.unified_cgroup_hierarchy=false quiet splash console=tty1 console=ttyS0 vt.handoff=7

Thanks, that did the trick. I guess my mistake before was putting it in the GRUB_CMDLINE_LINUX_DEFAULT line instead of the GRUB_CMDLINE_LINUX line.
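A likely reason the _DEFAULT line didn’t take effect on the cloud host: grub-mkconfig sources the snippets under /etc/default/grub.d/ after /etc/default/grub, so a later snippet (your output above shows 50-cloudimg-settings.cfg being sourced) may overwrite GRUB_CMDLINE_LINUX_DEFAULT. A self-contained sketch of that shadowing, using hypothetical stand-in files main.cfg and override.cfg:

```shell
#!/bin/sh
# Hypothetical stand-ins for /etc/default/grub and a grub.d snippet.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=false"\n' > main.cfg
printf 'GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0"\n' > override.cfg
. ./main.cfg
. ./override.cfg    # sourced later, so its assignment wins
echo "$GRUB_CMDLINE_LINUX_DEFAULT"
# prints: console=tty1 console=ttyS0
rm -f main.cfg override.cfg
```

GRUB_CMDLINE_LINUX isn’t touched by those stock snippets, which would explain why it worked there.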
