Containers do not start. No error message. Nothing in the log

Hi,

I have been using LXD for years on an Ubuntu Budgie 20.04 laptop. I have upgraded to 22.04 and now I cannot start my containers. I don’t know whether the error is related to the upgrade or not. I have also replaced my SSD by cloning the previous one (dd) and then resizing the partition (gparted); I am not sure whether that is relevant.

After the upgrade I had LXD version 4.x (sorry, I cannot remember the exact version), but then I purged it and did a complete reinstall from the snap. Now I have 5.2 (stable). The problem is still the same: containers do not start. There is no error message whatsoever and there is nothing in the log:
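(If I remember correctly, the purge was something like snap remove --purge lxd; I am not certain of the exact command any more. The reinstall is shown below.)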

root@notebook:~# snap install lxd
lxd 5.2-79c3c3b from Canonical✓ installed

root@notebook:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: lxd-pool
Name of the storage backend to use (zfs, ceph, btrfs, dir, lvm) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
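(For reference: answering yes to that last question prints a YAML preseed of the answers above. The same configuration can then be replayed on a fresh install with something like

cat lxd-preseed.yaml | lxd init --preseed

where lxd-preseed.yaml is just an example file name.)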

david@notebook:~$ lxc launch ubuntu:22.04 test
Creating test
Starting test                               

david@notebook:~$ lxc list
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| test | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+

david@notebook:~$ lxc start test --verbose

david@notebook:~$ lxc info --show-log test
Name: test
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/06/27 20:02 AEST
Last Used: 2022/06/27 20:04 AEST

Log:

lxc test 20220627100458.843 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220627100458.843 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc test 20220627100458.844 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220627100458.844 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc test 20220627100459.234 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc test 20220627100459.234 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing

david@notebook:~$

Any help is appreciated.

Thanks,
David

As far as LXD is concerned, your container started.
This can happen if the init process in your container crashes immediately.

You may want to look at lxc console --show-log test.
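As an aside, the newuidmap/newgidmap warnings in your log are harmless with the snap package and are not what is stopping the container. If you want them gone, installing the uidmap package on the host should provide those binaries:

sudo apt install uidmap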

david@notebook:~$ lxc console --show-log test

Console log:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

By the way, I do not have a systemd file or folder in /sys/fs/cgroup/.

I tried to google this new error message but I am stuck. What’s next? Thanks.

What do you have in /sys/fs/cgroup on the host?
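For example:

ls -la /sys/fs/cgroup

and, to see which cgroup filesystem is mounted there:

stat -fc %T /sys/fs/cgroup

(cgroup2fs means the unified v2 hierarchy; tmpfs usually means a v1 or hybrid layout).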

david@notebook:~$ ls -la /sys/fs/cgroup
total 0
dr-xr-xr-x 14 root root 0 jun   28 10:28 .
drwxr-xr-x  8 root root 0 jun   27 12:27 ..
-r--r--r--  1 root root 0 jun   27 12:27 cgroup.controllers
-rw-r--r--  1 root root 0 jun   28 10:24 cgroup.max.depth
-rw-r--r--  1 root root 0 jun   28 10:24 cgroup.max.descendants
-rw-r--r--  1 root root 0 jun   27 19:59 cgroup.procs
-r--r--r--  1 root root 0 jun   28 10:24 cgroup.stat
-rw-r--r--  1 root root 0 jun   27 19:58 cgroup.subtree_control
-rw-r--r--  1 root root 0 jun   27 13:35 cgroup.threads
-rw-r--r--  1 root root 0 jun   28 10:24 cpu.pressure
-r--r--r--  1 root root 0 jun   28 10:24 cpuset.cpus.effective
-r--r--r--  1 root root 0 jun   28 10:24 cpuset.mems.effective
-r--r--r--  1 root root 0 jun   28 10:24 cpu.stat
drwxr-xr-x  2 root root 0 jun   27 19:58 dev-hugepages.mount
drwxr-xr-x  2 root root 0 jun   27 19:58 dev-mqueue.mount
drwxr-xr-x  2 root root 0 jun   27 12:27 init.scope
-rw-r--r--  1 root root 0 jun   28 10:24 io.cost.model
-rw-r--r--  1 root root 0 jun   28 10:24 io.cost.qos
-rw-r--r--  1 root root 0 jun   28 10:24 io.pressure
-rw-r--r--  1 root root 0 jun   28 10:24 io.prio.class
-r--r--r--  1 root root 0 jun   28 10:24 io.stat
drwxr-xr-x  2 root root 0 jun   27 12:32 lxc.pivot
-r--r--r--  1 root root 0 jun   28 10:24 memory.numa_stat
-rw-r--r--  1 root root 0 jun   28 10:24 memory.pressure
-r--r--r--  1 root root 0 jun   28 10:24 memory.stat
-r--r--r--  1 root root 0 jun   28 10:24 misc.capacity
dr-xr-xr-x  3 root root 0 jun   27 12:27 net_cls
drwxr-xr-x  2 root root 0 jun   27 19:58 proc-sys-fs-binfmt_misc.mount
drwxr-xr-x  2 root root 0 jun   27 19:58 sys-fs-fuse-connections.mount
drwxr-xr-x  2 root root 0 jun   27 19:58 sys-kernel-config.mount
drwxr-xr-x  2 root root 0 jun   27 19:58 sys-kernel-debug.mount
drwxr-xr-x  2 root root 0 jun   27 19:58 sys-kernel-tracing.mount
drwxr-xr-x 74 root root 0 jun   28 13:56 system.slice
drwxr-xr-x  3 root root 0 jun   27 19:58 user.slice

Have you tried disabling the unified cgroup hierarchy?
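For example, assuming you boot with GRUB (the usual setup on Ubuntu; adjust if yours differs), add systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=0"

then apply it and reboot:

sudo update-grub
sudo reboot

Incidentally, the net_cls directory in your listing is a v1 controller sitting inside the v2 mount, which is not the stock 22.04 layout, so something on your host appears to be mounting legacy cgroups.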


Hi Thomas,

I have added the argument to the kernel command line and now the container starts.
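(For anyone hitting this later: after the reboot, cat /proc/cmdline should show the new argument, and stat -fc %T /sys/fs/cgroup should report tmpfs instead of cgroup2fs.)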

What puzzles me, though, is that I am not running an older container on a more modern host system. As you can see from the example above, I have just installed LXD fresh on Ubuntu 22.04 and tried to launch a brand-new Ubuntu 22.04 container.

So while it works now, I would still like to understand the root cause of the problem. Also, what disadvantages do I take on by running in cgroup v1 mode?

Thanks,
David

Was the Ubuntu 22.04 host upgraded from 20.04?

Yes, that’s correct.

Sounds like it could be related to this:
