Can't start Containers

I have just initialized LXD 4.6 (from the pacman repositories) on a Manjaro host.

I can launch a container (lxc launch images:debian/10 deb10), but it won’t start:

$ lxc start deb10
Error: Failed to run: /usr/bin/lxd forkstart deb10 /var/lib/lxd/containers /var/log/lxd/deb10/lxc.conf: 
Try `lxc info --show-log deb10` for more info

Unfortunately, the error does not tell me much:

$ lxc info --show-log deb10
Name: deb10
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/10/09 19:37 UTC
Status: Stopped
Type: container
Profiles: default

Log:

lxc deb10 20201010081805.280 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.deb10"
lxc deb10 20201010081805.281 WARN     cgfsng - cgroups/cgfsng.c:cgroup_tree_create:1168 - File exists - The /sys/fs/cgroup/unified//lxc.payload.deb10 cgroup already existed
lxc deb10 20201010081805.282 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.deb10-1"
lxc deb10 20201010081805.289 ERROR    conf - conf.c:lxc_map_ids:2817 - newuidmap failed to write mapping "": newuidmap 7272 0 1000000 1000000000
lxc deb10 20201010081805.289 ERROR    start - start.c:lxc_spawn:1732 - Failed to set up id mapping.
lxc deb10 20201010081805.290 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:849 - Received container state "ABORTING" instead of "RUNNING"
lxc deb10 20201010081805.293 ERROR    start - start.c:__lxc_start:1999 - Failed to spawn container "deb10"
lxc deb10 20201010081805.293 WARN     start - start.c:lxc_abort:1018 - No such process - Failed to send SIGKILL via pidfd 30 for process 7272
lxc deb10 20201010081805.418 ERROR    conf - conf.c:lxc_map_ids:2817 - newuidmap failed to write mapping "": newuidmap 7290 1000000000 0 1 0 1000000 1000000000
lxc deb10 20201010081805.418 ERROR    conf - conf.c:userns_exec_1:4023 - Error setting up {g,u}id mappings for child process "7290"
lxc deb10 20201010081805.419 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:1048 - No such file or directory - Failed to destroy cgroups
lxc 20201010081805.438 WARN     commands - commands.c:lxc_cmd_rsp_recv:122 - Connection reset by peer - Failed to receive response for command "get_state"

I am also running KVM/QEMU VMs on that same machine and have noticed that virt-manager offers to create LXC containers (which I did not try). So I am wondering whether this issue might be due to KVM/QEMU interfering with plain LXD?

Sounds like you need to configure /etc/subuid and /etc/subgid to allow the uid/gid range for LXD containers.

Thanks for the pointer. The fact that those two files don’t exist supports your theory, I guess.
Actually, it’s the first time I’ve become aware of the existence of such files.
I’ll try to figure out how to go about creating them.

On a remote Ubuntu LXD host I see there is an lxd:100000:65536 entry in both /etc/subuid and /etc/subgid. I assume the one I’ll need now should be pretty similar to this.

Looks like usermod --add-subuid should be the tool of choice.
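For what it’s worth, recent shadow-utils spells the long options with a trailing "s" (--add-subuids / --add-subgids), and the argument is an inclusive FIRST-LAST range, not a count. A sketch of the range arithmetic (the usermod calls themselves need root, so they are shown as comments):

```shell
# The range argument is an inclusive FIRST-LAST, not COUNT-FIRST.
# For a 65536-id block starting at 100000:
FIRST=100000
COUNT=65536
LAST=$((FIRST + COUNT - 1))
echo "subordinate range: ${FIRST}-${LAST}"

# The actual calls need root; recent shadow-utils spells the long
# options with a trailing "s":
#   usermod --add-subuids "${FIRST}-${LAST}" root
#   usermod --add-subgids "${FIRST}-${LAST}" root
```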

Yeah, note that the allocation for lxd: is only there for tracking purposes; the one that actually matters is the one for root:.

Make sure you have the same values in both subuid and subgid, then restart LXD so it loads them.
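A quick sanity check that the two files carry identical root entries (sketched here against throwaway copies; on a real host you would grep /etc/subuid and /etc/subgid directly):

```shell
# Create sample copies standing in for /etc/subuid and /etc/subgid.
tmp=$(mktemp -d)
printf 'root:1000000:1000000000\n' > "$tmp/subuid"
printf 'root:1000000:1000000000\n' > "$tmp/subgid"

# The root entries must match exactly, or the uid and gid maps diverge.
if [ "$(grep '^root:' "$tmp/subuid")" = "$(grep '^root:' "$tmp/subgid")" ]; then
  echo "subuid and subgid agree"
fi
rm -rf "$tmp"
```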

:thinking: Should we clarify this in the getting started guide?

Also the documentation (https://linuxcontainers.org/lxd/docs/master/userns-idmap) seems a bit too theoretical, maybe it would be good to add concrete examples for both standard and isolated idmaps.

That’s very distribution dependent.

The vast majority of users will never have to do anything about it, either because they’re using the snap which does not use those files or they’re using a distribution that configures it during installation.

:thinking: Ok.

Maybe we could add it to some kind of FAQ or Troubleshooting section, keyed on the error message:

Failed to set up id mapping.

Based on the Arch wiki I have this entry in both /etc/subuid and /etc/subgid:

root:100000:65536

Then I ran usermod --add-subuid 65536-100000 [myusername] as well as usermod --add-subuid 65536-100000 root.

After restarting the lxd service I still get:

$ lxc start deb10
Error: Failed to run: /usr/bin/lxd forkstart deb10 /var/lib/lxd/containers /var/log/lxd/deb10/lxc.conf: 
Try `lxc info --show-log deb10` for more info
$ lxc info --show-log deb10
Name: deb10
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/10/09 19:37 UTC
Status: Stopped
Type: container
Profiles: default

Log:

lxc deb10 20201027055533.797 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.deb10"
lxc deb10 20201027055533.798 WARN     cgfsng - cgroups/cgfsng.c:cgroup_tree_create:1168 - File exists - The /sys/fs/cgroup/unified//lxc.payload.deb10 cgroup already existed
lxc deb10 20201027055533.798 WARN     cgfsng - cgroups/cgfsng.c:cgroup_tree_create:1168 - File exists - The /sys/fs/cgroup/unified//lxc.payload.deb10-1 cgroup already existed
lxc deb10 20201027055533.799 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.deb10-2"
lxc deb10 20201027055533.806 ERROR    conf - conf.c:lxc_map_ids:2817 - newuidmap failed to write mapping "newuidmap: uid range [0-1000000000) -> [1000000-1001000000) not allowed": newuidmap 13123 0 1000000 1000000000
lxc deb10 20201027055533.806 ERROR    start - start.c:lxc_spawn:1732 - Failed to set up id mapping.
lxc deb10 20201027055533.806 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:849 - Received container state "ABORTING" instead of "RUNNING"
lxc deb10 20201027055533.808 ERROR    start - start.c:__lxc_start:1999 - Failed to spawn container "deb10"
lxc deb10 20201027055533.808 WARN     start - start.c:lxc_abort:1018 - No such process - Failed to send SIGKILL via pidfd 30 for process 13123
lxc deb10 20201027055533.955 ERROR    conf - conf.c:lxc_map_ids:2817 - newuidmap failed to write mapping "newuidmap: uid range [0-1000000000) -> [1000000-1001000000) not allowed": newuidmap 13140 1000000000 0 1 0 1000000 1000000000
lxc deb10 20201027055533.955 ERROR    conf - conf.c:userns_exec_1:4023 - Error setting up {g,u}id mappings for child process "13140"
lxc deb10 20201027055533.956 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:1048 - No such file or directory - Failed to destroy cgroups
lxc 20201027055533.998 WARN     commands - commands.c:lxc_cmd_rsp_recv:122 - Connection reset by peer - Failed to receive response for command "get_state"

So that operation was no cure. Any ideas where to go from here?

This looks very wrong.
I recommend just following the Arch wiki.
Simply edit the files directly, instead of using usermod.
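For reference, the mapping LXD is requesting in your log (newuidmap ... 0 1000000 1000000000, i.e. base 1000000, size 1000000000) corresponds to the entry below; a sketch assuming the standalone-LXD layout from the Arch wiki:

```shell
# /etc/subuid and /etc/subgid should each contain this line, matching
# the "newuidmap ... 0 1000000 1000000000" request in the log:
cat <<'EOF'
root:1000000:1000000000
EOF

# After editing both files, restart the daemon so it re-reads the maps
# (systemd unit name assumed):
#   systemctl restart lxd
```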


I made the mistake of trying to start existing containers while getting this to work; they always returned error messages as shown above.

New containers work as expected. The bad thing is that I cannot pin down the exact step that made it work.

Thanks, everybody, for the support!

Try:

  • lxc config set NAME security.privileged true
  • lxc start NAME
  • lxc stop NAME
  • lxc config set NAME security.privileged false
  • lxc start NAME

This forces a complete remap (two, actually) and should be enough to move the broken containers onto the newly allowed uid/gid range.
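The toggle above works because an unprivileged container’s files are owned by shifted ids on the host; a tiny sketch of the arithmetic, using the 1000000 base from the log (exact base is host-specific):

```shell
# Unprivileged: container uid U appears on the host as BASE + U.
BASE=1000000
for U in 0 33 1000; do
  echo "container uid $U -> host uid $((BASE + U))"
done
# Flipping security.privileged to true shifts every file's ownership
# down to the raw host range; flipping it back shifts it up again,
# this time into whatever range /etc/subuid currently allows.
```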


Thanks, that’s really good to know, even though in my case those were only test containers.