It's also really odd to me that:
- A “clean install” of 22.04 onto a server then installing LXD works fine.
- But “upgrading (do-release-upgrade -d)” an existing 20.04 system to 22.04 is causing these problems even though the upgrade itself succeeds?
I think I found the problem, and it looks like the Ubuntu Xenial & Jammy containers can't start due to a problem with mounting a cgroup at /sys/fs/cgroup/systemd
But the Alpine container starts just fine!
-----------------------------------------------------------------------------------------------------------------
$ lxc console --show-log cn1
Console log:
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!] Failed to mount API filesystems, freezing.
Freezing execution.
-----------------------------------------------------------------------------------------------------------------
$ lxc console --show-log cn2
Console log:
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!] Failed to mount API filesystems.
Exiting PID 1…
-----------------------------------------------------------------------------------------------------------------
$ lxc console --show-log cn3
Console log:
OpenRC 0.44.10 is starting up Linux 5.15.0-27-generic (x86_64) [LXC]
Welcome to Alpine Linux 3.12
Kernel 5.15.0-27-generic on an x86_64 (/dev/console)
I saw others have had this same CGROUP2 problem previously…
example: Containers Fail To Start Ubuntu 21.10 - #9 by RossMadness
Stephane’s proposed workaround is:
Not too sure what's going on in your case; we usually don't see quite that much breakage because of cgroup2. In any case, booting your host system with
systemd.unified_cgroup_hierarchy=false
passed to the kernel should take care of this.
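A quick way to check which mode the host is currently booted into (a sketch using standard coreutils; nothing here is LXD-specific):

```shell
# The filesystem type of /sys/fs/cgroup tells you the mode:
#   cgroup2fs -> unified (cgroup v2 only)
#   tmpfs     -> hybrid or legacy (cgroup v1 hierarchies available)
fstype=$(stat -fc %T /sys/fs/cgroup/)
echo "cgroup fs type: $fstype"
case "$fstype" in
  cgroup2fs) echo "unified (v2-only) - old images will hit the mount error" ;;
  tmpfs)     echo "hybrid/legacy - /sys/fs/cgroup/systemd can exist" ;;
esac
```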
Brian
@tomp @stgraber
Could this be the bug/core problem with cgroup2?
https://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg6024379.html
Either on the host side or with the Ubuntu “images”?
FYI… my “Host” is running both cgroup1 and cgroup2:
$ grep cgroup /proc/filesystems
nodev cgroup
nodev cgroup2
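Note that /proc/filesystems only says the kernel *supports* both versions; to see what is actually mounted, /proc/mounts can be inspected (a sketch):

```shell
# A hybrid host lists v1 controllers (type "cgroup") alongside the
# unified hierarchy (type "cgroup2") in /proc/mounts.
awk '$3 == "cgroup" || $3 == "cgroup2" { print $2, $3 }' /proc/mounts

# The mount the failing containers need is the v1 systemd hierarchy:
if [ -d /sys/fs/cgroup/systemd ]; then
  echo "v1 systemd hierarchy present"
else
  echo "v1 systemd hierarchy missing"
fi
```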
Okay… per all the above clues, I finally fixed this on my Ubuntu 22.04 system (upgraded from 20.04).
It was a cgroup2 bug/problem, so I disabled cgroup2 use on the host, and now everything works and all the containers start up OK.
Add the following string to the GRUB_CMDLINE_LINUX line in /etc/default/grub, and then run sudo update-grub:

systemd.unified_cgroup_hierarchy=0
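Putting those steps into concrete commands (a sketch, assuming the stock Ubuntu /etc/default/grub layout; back up the file first, and adjust if your GRUB_CMDLINE_LINUX line already carries other parameters):

```shell
# Back up, then append the parameter inside the existing quoted value.
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub

# Regenerate /boot/grub/grub.cfg; the change takes effect on the next boot.
sudo update-grub
sudo reboot
```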
Glad you got it working. Yes, running older distros on a cgroup v2-only host doesn't work; see Error: The image used by this instance requires a CGroupV1 host system when using clustering - #2 by tomp
Tom
These weren’t “older” versions.
It was 22.04 and 20.04
Strange; it seems the cgroup setup isn't handled correctly by systemd on upgrade. Might be worth opening an issue with Ubuntu about this.
Yeah that’s what bothered me
A clean install of 22.04 and everything works including LXD
Doing an “upgrade” from 20.04 to 22.04 breaks LXD until I disable cgroup2
Yep sounds like an issue with how systemd is setting up cgroups in the upgrade from Focal to Jammy.
Filed a bug:
[Bug 1971571] [NEW] ubuntu 22.04 cgroup2 works for clean install but upgrade to 22.04 causes cgroup2 problems
Hi all, I've run afoul of this issue on my laptop. I went to add GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0" to my /etc/default/grub; however, the file didn't exist, and even creating it, adding that line, and running update-grub hasn't worked. Is there a way to verify this setting after it is set?
Much Appreciated.
What OS are you running?
Ubuntu 22.04
Same as me
That file exists by default on fresh installations, and is the correct place to put it.
Did you upgrade or do a fresh install of Ubuntu?
Upgrade from 20.04
Sounds like a bug report to Ubuntu upstream could be in order, as there seem to be some issues around upgrading from previous versions. Is there an /etc/default/grub.bak file?
What is the output of sudo update-grub?
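For reference, on the verification question raised above: once the machine has rebooted, the live kernel command line shows whether the parameter took effect (a sketch; paths are the standard Linux ones):

```shell
# After a reboot, confirm the parameter actually reached the kernel:
if grep -q 'systemd.unified_cgroup_hierarchy=0' /proc/cmdline; then
  echo "parameter is active"
else
  echo "parameter missing: check GRUB_CMDLINE_LINUX and re-run update-grub"
fi

# The observable result should be a v1 systemd hierarchy reappearing:
if [ -d /sys/fs/cgroup/systemd ]; then
  echo "hybrid cgroup layout restored"
fi
```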