Cannot start containers on fresh install

Hi, I’m trying to use LXD on Alpine Linux. It’s worked in the past, but I decided to reinstall for the 5.15 kernel and its btrfs ID-mapping support.
I can’t launch containers on a btrfs storage pool (or dir/ext4-backed ones). I have set the ID mappings in /etc/subuid and /etc/subgid to root:100000:65536, as per the Alpine LXD wiki, and both lxcfs and dbus are running.
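For reference, this is a minimal sketch of what that mapping looks like; it assumes root is the user running the LXD daemon, per the Alpine wiki:

```
# /etc/subuid and /etc/subgid — one line in each file.
# Grants root a range of 65536 subordinate IDs starting at 100000.
root:100000:65536
```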

lxc info --show-log alpstest
Name: alpstest
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2021/11/15 18:30 AEDT
Last Used: 2021/11/15 21:39 AEDT

Log:

lxc alpstest 20211115103958.453 INFO     lxccontainer - lxccontainer.c:do_lxcapi_start:987 - Set process title to [lxc monitor] /var/lib/lxd/containers alpstest
lxc alpstest 20211115103958.453 INFO     start - start.c:lxc_check_inherited:328 - Closed inherited fd 4
lxc alpstest 20211115103958.453 INFO     start - start.c:lxc_check_inherited:328 - Closed inherited fd 5
lxc alpstest 20211115103958.453 INFO     start - start.c:lxc_check_inherited:328 - Closed inherited fd 6
lxc alpstest 20211115103958.453 INFO     start - start.c:lxc_check_inherited:328 - Closed inherited fd 13
lxc alpstest 20211115103958.453 INFO     lsm - lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver nop
lxc alpstest 20211115103958.453 INFO     conf - conf.c:run_script_argv:340 - Executing script "/proc/2804/exe callhook /var/lib/lxd "default" "alpstest" start" for container "alpstest"
lxc alpstest 20211115103958.484 ERROR    cgfsng - cgroups/cgfsng.c:initialize_cgroups:3359 - Out of memory - Failed to initialize cgroups
lxc alpstest 20211115103958.484 ERROR    cgroup - cgroups/cgroup.c:cgroup_init:33 - Bad file descriptor - Failed to initialize cgroup driver
lxc alpstest 20211115103958.484 ERROR    start - start.c:lxc_init:864 - Failed to initialize cgroup driver
lxc alpstest 20211115103958.484 ERROR    start - start.c:__lxc_start:2002 - Failed to initialize container "alpstest"
lxc alpstest 20211115103958.484 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/sbin/lxd callhook /var/lib/lxd "default" "alpstest" stopns" for container "alpstest"
lxc alpstest 20211115104028.416 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "alpstest"
lxc alpstest 20211115104028.918 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/sbin/lxd callhook /var/lib/lxd "default" "alpstest" stop" for container "alpstest"
lxc alpstest 20211115104028.955 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:859 - No such file or directory - Failed to receive the container state
lxc 20211115104028.956 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20211115104028.956 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors
LXD started with the debug flag:
INFO[11-15|22:04:38] LXD is starting                          version=4.20 mode=normal path=/var/lib/lxd
INFO[11-15|22:04:38] Kernel uid/gid map: 
INFO[11-15|22:04:38]  - u 0 0 4294967295 
INFO[11-15|22:04:38]  - g 0 0 4294967295 
INFO[11-15|22:04:38] Configured LXD uid/gid map: 
INFO[11-15|22:04:38]  - u 0 100000 65536 
INFO[11-15|22:04:38]  - g 0 100000 65536 
WARN[11-15|22:04:38] AppArmor support has been disabled because of lack of kernel support 
INFO[11-15|22:04:38] Kernel features: 
INFO[11-15|22:04:38]  - closing multiple file descriptors efficiently: yes 
INFO[11-15|22:04:38]  - netnsid-based network retrieval: yes 
INFO[11-15|22:04:38]  - pidfds: yes 
INFO[11-15|22:04:38]  - core scheduling: yes 
INFO[11-15|22:04:38]  - uevent injection: yes 
INFO[11-15|22:04:38]  - seccomp listener: yes 
INFO[11-15|22:04:38]  - seccomp listener continue syscalls: yes 
INFO[11-15|22:04:38]  - seccomp listener add file descriptors: yes 
INFO[11-15|22:04:38]  - attach to namespaces via pidfds: yes 
INFO[11-15|22:04:38]  - safe native terminal allocation : yes 
INFO[11-15|22:04:38]  - unprivileged file capabilities: yes 
INFO[11-15|22:04:38]  - cgroup layout: disabled 
WARN[11-15|22:04:38]  - AppArmor support has been disabled, Disabled because of lack of kernel support 
WARN[11-15|22:04:38]  - Couldn't find the CGroup blkio, disk I/O limits will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup blkio.weight, disk priority will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup CPU controller, CPU time limits will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup CPUacct controller, CPU accounting will not be available 
WARN[11-15|22:04:38]  - Couldn't find the CGroup CPU controller, CPU pinning will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup devices controller, device access control won't work 
WARN[11-15|22:04:38]  - Couldn't find the CGroup freezer controller, pausing/resuming containers won't work 
WARN[11-15|22:04:38]  - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup memory controller, memory limits will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup network priority controller, network priority will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup pids controller, process limits will be ignored 
WARN[11-15|22:04:38]  - Couldn't find the CGroup memory swap accounting, swap limits will be ignored 
INFO[11-15|22:04:38]  - shiftfs support: no 
WARN[11-15|22:04:38] Instance type not operational            err="QEMU command not available for architecture" type=virtual-machine driver=qemu
INFO[11-15|22:04:38] Initializing local database 
DBUG[11-15|22:04:38] Refreshing local trusted certificate cache 
INFO[11-15|22:04:38] Set client certificate to server certificate fingerprint=2a400be68206bb35b9c32c493ddd31991edc9d717217de3120d1013ebfc0a237
DBUG[11-15|22:04:38] Initializing database gateway 
INFO[11-15|22:04:38] Starting database node                   id=1 address=1 role=voter
INFO[11-15|22:04:38] Starting /dev/lxd handler: 
INFO[11-15|22:04:38]  - binding devlxd socket                 socket=/var/lib/lxd/devlxd/sock
INFO[11-15|22:04:38] REST API daemon: 
INFO[11-15|22:04:38]  - binding Unix socket                   socket=/var/lib/lxd/unix.socket
INFO[11-15|22:04:38] Initializing global database 
INFO[11-15|22:04:38] Connecting to global database 
DBUG[11-15|22:04:38] Dqlite: attempt 1: server 1: connected 
INFO[11-15|22:04:38] Connected to global database 
INFO[11-15|22:04:38] Initialized global database 
DBUG[11-15|22:04:38] Firewall detected "nftables" incompatibility: Backend command "nft" missing 
DBUG[11-15|22:04:38] Firewall detected "xtables" incompatibility: Backend command "ebtables" is an nftables shim 
WARN[11-15|22:04:38] Firewall failed to detect any compatible driver, falling back to "xtables" (but some features may not work as expected due to: Backend command "ebtables" is an nftables shim) 
INFO[11-15|22:04:38] Firewall loaded driver                   driver=xtables
INFO[11-15|22:04:38] Initializing storage pools 
DBUG[11-15|22:04:38] Initializing and checking storage pool   pool=hostfs
DBUG[11-15|22:04:38] Mount started                            driver=btrfs pool=hostfs
DBUG[11-15|22:04:38] Mount finished                           driver=btrfs pool=hostfs
INFO[11-15|22:04:38] Initializing daemon storage mounts 
INFO[11-15|22:04:38] Loading daemon configuration 
INFO[11-15|22:04:38] Initializing networks 
DBUG[11-15|22:04:38] New task Operation: 95866e4f-6d01-49a2-899d-f403436c7c47 
INFO[11-15|22:04:38] Pruning leftover image files 
DBUG[11-15|22:04:38] Started task operation: 95866e4f-6d01-49a2-899d-f403436c7c47 
INFO[11-15|22:04:38] Done pruning leftover image files 
INFO[11-15|22:04:38] Starting device monitor 
WARN[11-15|22:04:38] Failed to initialize fanotify, falling back on fsnotify err="Failed to initialize fanotify: function not implemented"
DBUG[11-15|22:04:38] Success for task operation: 95866e4f-6d01-49a2-899d-f403436c7c47 
DBUG[11-15|22:04:38] Initialized filesystem monitor           path=/dev
DBUG[11-15|22:04:38] Registering running instances 
INFO[11-15|22:04:38] Started seccomp handler                  path=/var/lib/lxd/seccomp.socket
DBUG[11-15|22:04:38] Refreshing trusted certificate cache 
DBUG[11-15|22:04:38] New task Operation: e51667a1-f8e3-4751-a65a-54c079003466 
INFO[11-15|22:04:38] Pruning expired images 
DBUG[11-15|22:04:38] Started task operation: e51667a1-f8e3-4751-a65a-54c079003466 
INFO[11-15|22:04:38] Done pruning expired images 
DBUG[11-15|22:04:38] New task Operation: 9ced27e8-35ab-4ff4-8e70-750f77d57376 
INFO[11-15|22:04:38] Pruning expired instance backups 
DBUG[11-15|22:04:38] Started task operation: 9ced27e8-35ab-4ff4-8e70-750f77d57376 
DBUG[11-15|22:04:38] Success for task operation: e51667a1-f8e3-4751-a65a-54c079003466 
INFO[11-15|22:04:38] Done pruning expired instance backups 
DBUG[11-15|22:04:38] Success for task operation: 9ced27e8-35ab-4ff4-8e70-750f77d57376 
DBUG[11-15|22:04:38] New task Operation: a224ccbe-53ed-4917-8f9b-20992207439b 
INFO[11-15|22:04:38] Updating images 
DBUG[11-15|22:04:38] Started task operation: a224ccbe-53ed-4917-8f9b-20992207439b 
DBUG[11-15|22:04:38] New task Operation: c01d679d-bf6d-40fd-9379-47a055faf17b 
INFO[11-15|22:04:38] Done updating images 
INFO[11-15|22:04:38] Daemon started 
DBUG[11-15|22:04:38] New task Operation: 7e63de47-d5a3-44a7-8010-15a6684222a9 
DBUG[11-15|22:04:38] New task Operation: c65075da-5029-4208-b630-d35b25b254c9 
INFO[11-15|22:04:38] Pruning resolved warnings 
DBUG[11-15|22:04:38] Started task operation: 7e63de47-d5a3-44a7-8010-15a6684222a9 
INFO[11-15|22:04:38] Expiring log files 
DBUG[11-15|22:04:38] Started task operation: c65075da-5029-4208-b630-d35b25b254c9 
INFO[11-15|22:04:38] Done pruning resolved warnings 
INFO[11-15|22:04:38] Done expiring log files 
INFO[11-15|22:04:38] Updating instance types 
DBUG[11-15|22:04:38] Started task operation: c01d679d-bf6d-40fd-9379-47a055faf17b 
INFO[11-15|22:04:38] Done updating instance types 
DBUG[11-15|22:04:38] Success for task operation: 7e63de47-d5a3-44a7-8010-15a6684222a9 
DBUG[11-15|22:04:38] Success for task operation: c65075da-5029-4208-b630-d35b25b254c9 
DBUG[11-15|22:04:38] Processing image                         protocol=simplestreams alias=alpine/3.14 fingerprint=b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f server=https://images.linuxcontainers.org
DBUG[11-15|22:04:38] Connecting to a remote simplestreams server URL=https://images.linuxcontainers.org
DBUG[11-15|22:04:38] Acquiring lock for image download of "b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f" 
DBUG[11-15|22:04:38] Lock acquired for image download of "b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f" 
DBUG[11-15|22:04:38] Image already exists in the DB           fingerprint=b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f
DBUG[11-15|22:04:38] Image already exists on storage pool     fingerprint=b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f pool=hostfs
DBUG[11-15|22:04:38] Image already up to date                 fingerprint=b4f9d3d2986d97333be5dca5c5ae9999c270d67119109222f1c0f6b6eada246f
DBUG[11-15|22:04:38] Success for task operation: a224ccbe-53ed-4917-8f9b-20992207439b 
DBUG[11-15|22:04:44] Success for task operation: c01d679d-bf6d-40fd-9379-47a055faf17b

Concerningly, cgroups seem to be disabled, and shiftfs isn’t enabled either, which was the whole point of the 5.15 kernel for me.

You won’t get shiftfs on non-Ubuntu kernels, but that’s okay: with 5.15 you won’t need it, so long as you use ext4, xfs, or vfat for your containers.

The complete lack of cgroups is the problem in this case, though. You either need individual cgroup controllers (cgroup1) mounted under /sys/fs/cgroup, or a full cgroup2 unified hierarchy mounted there; it sounds like your system has neither.
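A quick way to check what, if anything, is mounted there, as a diagnostic sketch:

```shell
# Show any cgroup filesystems currently mounted.
# cgroup1 (legacy/hybrid): one "cgroup" mount per controller under /sys/fs/cgroup;
# cgroup2 (unified): a single "cgroup2" mount, typically on /sys/fs/cgroup itself.
grep cgroup /proc/self/mounts

# Inspect the hierarchy directly; empty output here matches the
# "cgroup layout: disabled" line in the LXD log above.
ls /sys/fs/cgroup
```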

Hmm, ok, I’ll have a look into why Alpine isn’t enabling cgroups by default.
I will be able to use the VFS idmap shifting with btrfs in 5.15, right?
Does it matter which cgroup architecture I enable (v1 or v2), or will LXD automatically pick one?
Thanks so much for your help.

Ah yeah, that’s right: 5.15 introduces the btrfs idmap shifting, so you’ll be fine on that filesystem too.

LXD handles both cgroup1 and cgroup2, so it doesn’t really matter.
It does matter if you need to run old distros (CentOS 7, Ubuntu 16.04, …) in containers, as those don’t support cgroup2 and will fail to boot.
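On Alpine the cgroup layout is controlled through OpenRC. A sketch, assuming a stock OpenRC setup (rc_cgroup_mode is OpenRC's setting, not an LXD one):

```shell
# /etc/rc.conf — pick one of "legacy" (v1), "hybrid", or "unified" (v2):
rc_cgroup_mode="hybrid"

# Make sure the cgroups service is enabled at boot, then start it:
rc-update add cgroups boot
rc-service cgroups start
```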

Alright, so I’ve enabled cgroups (hybrid), but I’m still running into issues launching the container on a btrfs storage pool.

lxd --debug
INFO[11-16|20:17:57] LXD is starting                          version=4.20 mode=normal path=/var/lib/lxd
INFO[11-16|20:17:57] Kernel uid/gid map: 
INFO[11-16|20:17:57]  - u 0 0 4294967295 
INFO[11-16|20:17:57]  - g 0 0 4294967295 
INFO[11-16|20:17:57] Configured LXD uid/gid map: 
INFO[11-16|20:17:57]  - u 0 100000 65536 
INFO[11-16|20:17:57]  - g 0 100000 65536 
WARN[11-16|20:17:57] AppArmor support has been disabled because of lack of kernel support 
INFO[11-16|20:17:57] Kernel features: 
INFO[11-16|20:17:57]  - closing multiple file descriptors efficiently: yes 
INFO[11-16|20:17:57]  - netnsid-based network retrieval: yes 
INFO[11-16|20:17:57]  - pidfds: yes 
INFO[11-16|20:17:57]  - core scheduling: yes 
INFO[11-16|20:17:57]  - uevent injection: yes 
INFO[11-16|20:17:57]  - seccomp listener: yes 
INFO[11-16|20:17:57]  - seccomp listener continue syscalls: yes 
INFO[11-16|20:17:57]  - seccomp listener add file descriptors: yes 
INFO[11-16|20:17:57]  - attach to namespaces via pidfds: yes 
INFO[11-16|20:17:57]  - safe native terminal allocation : yes 
INFO[11-16|20:17:57]  - unprivileged file capabilities: yes 
INFO[11-16|20:17:57]  - cgroup layout: hybrid 
WARN[11-16|20:17:57]  - AppArmor support has been disabled, Disabled because of lack of kernel support 
WARN[11-16|20:17:57]  - Couldn't find the CGroup blkio.weight, disk priority will be ignored 
INFO[11-16|20:17:57]  - shiftfs support: no 
WARN[11-16|20:17:57] Instance type not operational            err="QEMU command not available for architecture" type=virtual-machine driver=qemu
INFO[11-16|20:17:57] Initializing local database 
DBUG[11-16|20:17:57] Refreshing local trusted certificate cache 
INFO[11-16|20:17:57] Set client certificate to server certificate fingerprint=8f4f36d54cc081a1ba2c25feceecf89a92b219b224165e932a6673ebbf619d42
DBUG[11-16|20:17:57] Initializing database gateway 
INFO[11-16|20:17:57] Starting database node                   id=1 address=1 role=voter
INFO[11-16|20:17:57] Starting /dev/lxd handler: 
INFO[11-16|20:17:57]  - binding devlxd socket                 socket=/var/lib/lxd/devlxd/sock
INFO[11-16|20:17:57] REST API daemon: 
INFO[11-16|20:17:57]  - binding Unix socket                   socket=/var/lib/lxd/unix.socket
INFO[11-16|20:17:57] Initializing global database 
INFO[11-16|20:17:57] Connecting to global database 
DBUG[11-16|20:17:57] Dqlite: attempt 1: server 1: connected 
INFO[11-16|20:17:57] Connected to global database 
INFO[11-16|20:17:57] Initialized global database 
DBUG[11-16|20:17:57] Firewall detected "nftables" incompatibility: Backend command "nft" missing 
DBUG[11-16|20:17:57] Firewall detected "xtables" incompatibility: Backend command "ebtables" is an nftables shim 
WARN[11-16|20:17:57] Firewall failed to detect any compatible driver, falling back to "xtables" (but some features may not work as expected due to: Backend command "ebtables" is an nftables shim) 
INFO[11-16|20:17:57] Firewall loaded driver                   driver=xtables
INFO[11-16|20:17:57] Initializing storage pools 
DBUG[11-16|20:17:57] Initializing and checking storage pool   pool=hostfs
DBUG[11-16|20:17:57] Mount started                            driver=btrfs pool=hostfs
DBUG[11-16|20:17:57] Mount finished                           driver=btrfs pool=hostfs
INFO[11-16|20:17:57] Initializing daemon storage mounts 
INFO[11-16|20:17:57] Loading daemon configuration 
INFO[11-16|20:17:57] Initializing networks 
DBUG[11-16|20:17:57] New task Operation: 4572961e-f525-4eb9-ad51-85efc67b91a6 
INFO[11-16|20:17:57] Pruning leftover image files 
DBUG[11-16|20:17:57] Started task operation: 4572961e-f525-4eb9-ad51-85efc67b91a6 
INFO[11-16|20:17:57] Done pruning leftover image files 
INFO[11-16|20:17:57] Starting device monitor 
WARN[11-16|20:17:57] Failed to initialize fanotify, falling back on fsnotify err="Failed to initialize fanotify: function not implemented"
DBUG[11-16|20:17:57] Success for task operation: 4572961e-f525-4eb9-ad51-85efc67b91a6 
DBUG[11-16|20:17:57] Initialized filesystem monitor           path=/dev
DBUG[11-16|20:17:57] Registering running instances 
INFO[11-16|20:17:57] Started seccomp handler                  path=/var/lib/lxd/seccomp.socket
DBUG[11-16|20:17:57] Refreshing trusted certificate cache 
DBUG[11-16|20:17:57] New task Operation: 41fd64e1-5d88-4a59-8697-4671103d54f4 
INFO[11-16|20:17:57] Pruning expired images 
DBUG[11-16|20:17:57] Started task operation: 41fd64e1-5d88-4a59-8697-4671103d54f4 
INFO[11-16|20:17:57] Done pruning expired images 
DBUG[11-16|20:17:57] New task Operation: c9771e46-8049-4014-9f77-b9acd065248a 
INFO[11-16|20:17:57] Pruning expired instance backups 
DBUG[11-16|20:17:57] Started task operation: c9771e46-8049-4014-9f77-b9acd065248a 
DBUG[11-16|20:17:57] Success for task operation: 41fd64e1-5d88-4a59-8697-4671103d54f4 
INFO[11-16|20:17:57] Done pruning expired instance backups 
DBUG[11-16|20:17:57] Success for task operation: c9771e46-8049-4014-9f77-b9acd065248a 
DBUG[11-16|20:17:57] New task Operation: 73ce3bdf-f4b2-49fb-b345-6e1196202b49 
DBUG[11-16|20:17:57] New task Operation: 9998a265-51a1-4743-96a1-3df55bd32b5c 
INFO[11-16|20:17:57] Updating images 
DBUG[11-16|20:17:57] Started task operation: 73ce3bdf-f4b2-49fb-b345-6e1196202b49 
DBUG[11-16|20:17:57] New task Operation: 2650ddc6-857d-4641-8a1a-ee4499922d14 
INFO[11-16|20:17:57] Expiring log files 
DBUG[11-16|20:17:57] Started task operation: 9998a265-51a1-4743-96a1-3df55bd32b5c 
INFO[11-16|20:17:57] Daemon started 
DBUG[11-16|20:17:57] New task Operation: fac7c1a6-87a4-4f73-809e-c39042d47459 
INFO[11-16|20:17:57] Updating instance types 
DBUG[11-16|20:17:57] Started task operation: fac7c1a6-87a4-4f73-809e-c39042d47459 
INFO[11-16|20:17:57] Done updating images 
INFO[11-16|20:17:57] Done expiring log files 
DBUG[11-16|20:17:57] Success for task operation: 9998a265-51a1-4743-96a1-3df55bd32b5c 
INFO[11-16|20:17:57] Done updating instance types 
INFO[11-16|20:17:57] Pruning resolved warnings 
DBUG[11-16|20:17:57] Started task operation: 2650ddc6-857d-4641-8a1a-ee4499922d14 
DBUG[11-16|20:17:57] Processing image                         protocol=simplestreams alias=alpine/3.14 fingerprint=839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d server=https://images.linuxcontainers.org
DBUG[11-16|20:17:57] Connecting to a remote simplestreams server URL=https://images.linuxcontainers.org
INFO[11-16|20:17:57] Done pruning resolved warnings 
DBUG[11-16|20:17:57] Success for task operation: 2650ddc6-857d-4641-8a1a-ee4499922d14 
DBUG[11-16|20:17:57] Acquiring lock for image download of "839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d" 
DBUG[11-16|20:17:57] Lock acquired for image download of "839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d" 
DBUG[11-16|20:17:57] Image already exists in the DB           fingerprint=839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d
DBUG[11-16|20:17:57] Image already exists on storage pool     fingerprint=839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d pool=hostfs
DBUG[11-16|20:17:57] Image already up to date                 fingerprint=839527fec3fd32b314f00539e49e6e61ce12e2d186710ad82857a7bed13adf2d
DBUG[11-16|20:17:57] Success for task operation: 73ce3bdf-f4b2-49fb-b345-6e1196202b49 
DBUG[11-16|20:18:03] Success for task operation: fac7c1a6-87a4-4f73-809e-c39042d47459
lxc info --show-log alps
Name: alps
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2021/11/16 20:01 AEDT
Last Used: 2021/11/16 20:16 AEDT

Log:

lxc alps 20211116091608.731 ERROR    conf - conf.c:lxc_map_ids:3654 - newuidmap failed to write mapping "newuidmap: uid range [0-1000000000) -> [1000000-1001000000) not allowed": newuidmap 3735 0 1000000 1000000000
lxc alps 20211116091608.731 ERROR    start - start.c:lxc_spawn:1785 - Failed to set up id mapping.
lxc alps 20211116091608.731 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:867 - Received container state "ABORTING" instead of "RUNNING"
lxc alps 20211116091608.731 ERROR    start - start.c:__lxc_start:2068 - Failed to spawn container "alps"
lxc alps 20211116091608.731 WARN     start - start.c:lxc_abort:1038 - No such process - Failed to send SIGKILL via pidfd 43 for process 3735
lxc 20211116091613.774 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20211116091613.774 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors
lxc 20211116091757.676 TRACE    commands - commands.c:lxc_cmd:509 - Connection refused - Command "get_state" failed to connect command socket
lxc 20211116091757.751 TRACE    commands - commands.c:lxc_cmd:509 - Connection refused - Command "get_state" failed to connect command socket
lxc 20211116091820.921 TRACE    commands - commands.c:lxc_cmd:509 - Connection refused - Command "get_state" failed to connect command socket

I created a dir-backed storage pool and launched a container there, which seems to work.

lxc info --show-log dirtest
Name: dirtest
Status: RUNNING
Type: container
Architecture: x86_64
PID: 4293
Created: 2021/11/16 20:23 AEDT
Last Used: 2021/11/16 20:23 AEDT

Resources:
  Processes: 4
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 1.54MiB
    Memory (peak): 4.68MiB
  Network usage:
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

Log:

lxc dirtest 20211116092349.371 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1251 - No such file or directory - Failed to fchownat(42, memory.oom.group, 65536, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
Is this potentially due to the new VFS idmap shifting with btrfs? Do I need to configure anything for it to work? Thank you so much for your help and the awesome tool.
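The newuidmap error in the btrfs log suggests LXD is now requesting a much larger map (1000000000 IDs starting at 1000000, presumably for the idmapped mounts) than the root:100000:65536 entries allow. A hedged sketch of matching /etc/subuid and /etc/subgid entries; the range here is read straight off the error message, so verify it against the linked solution below:

```
# /etc/subuid and /etc/subgid — widen root's range to cover what the
# error reports: "uid range [0-1000000000) -> [1000000-1001000000) not allowed"
root:1000000:1000000000
```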

Solution here: Cannot start containers on btrfs
