I think I found the problem, and it looks like the Ubuntu Xenial and Jammy containers can’t start due to a problem with mounting the cgroup filesystem at /sys/fs/cgroup/systemd:
OpenRC 0.44.10 is starting up Linux 5.15.0-27-generic (x86_64) [LXC]
/proc is already mounted
Mounting /run … * /run/openrc: creating directory
/run/lock: creating directory
/run/lock: correcting owner
Caching service dependencies … [ ok ]
Mounting local filesystems … [ ok ]
Creating user login records … [ ok ]
Cleaning /tmp directory … [ ok ]
Starting busybox syslog … [ ok ]
Starting busybox crond … [ ok ]
Starting networking … * lo … [ ok ]
eth0 …udhcpc: started, v1.35.0
udhcpc: broadcasting discover
udhcpc: broadcasting select for 10.245.137.108, server 10.245.137.1
udhcpc: lease of 10.245.137.108 obtained from 10.245.137.1, lease time 3600
[ ok ]
Welcome to Alpine Linux 3.12
Kernel 5.15.0-27-generic on an x86_64 (/dev/console)
Okay… per all the above clues I finally fixed this on my Ubuntu 22.04 system (upgraded from 20.04).
It was a cgroup2 bug/problem, so I disabled cgroup2 on the HOST, and now everything works and all the containers start up OK.
The workaround was:
Add the following parameter to the GRUB_CMDLINE_LINUX line in /etc/default/grub, then run sudo update-grub and reboot.
systemd.unified_cgroup_hierarchy=0
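Concretely, the edited line in /etc/default/grub would end up looking something like the following (the exact set of pre-existing parameters varies per system, so this is just a sketch):

```shell
# /etc/default/grub (excerpt) -- append the parameter to whatever is already there
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"

# Regenerate the GRUB config, then reboot so the kernel picks up the new command line
sudo update-grub
sudo reboot
```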
It still raises the question of why clean installs of Ubuntu 22.04 LTS work correctly, but upgrades from 20.04 LTS to 22.04 LTS break something with cgroup2.
Hi all, I’ve run afoul of this issue on my laptop, so I went to add GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0" to my /etc/default/grub. However, the file didn’t exist, and even after creating it, adding that line, and running update-grub, it hasn’t worked. Is there a way to verify this setting after it is set?
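One way to verify it after a reboot (not specific to this thread, just standard checks) is to inspect the running kernel’s command line and the filesystem type mounted at /sys/fs/cgroup:

```shell
# Did the kernel actually boot with the parameter? Prints the match if present.
grep -o 'systemd.unified_cgroup_hierarchy=0' /proc/cmdline \
  || echo "parameter not on kernel command line"

# Which hierarchy is actually mounted?
# "cgroup2fs" = unified (cgroup v2) only; "tmpfs" = hybrid/legacy (v1 controllers available)
stat -fc %T /sys/fs/cgroup/
```

If the grep finds nothing, GRUB never applied the setting, which would point at the /etc/default/grub file not being read (e.g. a file in /etc/default/grub.d/ overriding it).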
Sounds like a bug report to Ubuntu upstream could be in order, as there seem to be some issues around upgrading from previous versions. Is there an /etc/default/grub.bak file?