First of all, this is my system:
- Host OS: NixOS 24.05, Kernel 6.6.43
- Incus version: 6.0.1 (both client and server)
The problem
I’m trying to launch a new container, but it apparently gets broken idmaps and fails to start. These are the steps to reproduce the issue:
sh4pe@sh4peux ~> sudo incus launch images:nixos/24.05 filesharing
Launching filesharing
Error: Failed instance creation: Failed to run: /nix/store/n23r8dsnwclxs2llb111r5xrm69pxwsb-incus-lts-6.0.1/bin/incusd forkstart filesharing /var/lib/incus/containers /run/incus/filesharing/lxc.conf: exit status 1
# Note: does not even start after launching
sh4pe@sh4peux ~ [1]> incus start filesharing
Error: Failed to run: /nix/store/n23r8dsnwclxs2llb111r5xrm69pxwsb-incus-lts-6.0.1/bin/incusd forkstart filesharing /var/lib/incus/containers /run/incus/filesharing/lxc.conf: exit status 1
Try `incus info --show-log filesharing` for more info
sh4pe@sh4peux ~ [1]> incus info --show-log filesharing
Name: filesharing
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2024/08/05 21:06 CEST
Last Used: 2024/08/05 21:07 CEST
Log:
lxc filesharing 20240805190713.733 ERROR idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:245 - newuidmap failed to write mapping "newuidmap: write to uid_map failed: Invalid argument": newuidmap 40621 0 1000000 1000000000 0 1000000 65536
lxc filesharing 20240805190713.733 ERROR start - ../src/lxc/start.c:lxc_spawn:1795 - Failed to set up id mapping.
lxc filesharing 20240805190713.734 ERROR lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:837 - Received container state "ABORTING" instead of "RUNNING"
lxc filesharing 20240805190713.738 ERROR start - ../src/lxc/start.c:__lxc_start:2114 - Failed to spawn container "filesharing"
lxc filesharing 20240805190713.739 WARN start - ../src/lxc/start.c:lxc_abort:1037 - No such process - Failed to send SIGKILL via pidfd 45 for process 40621
lxc 20240805190713.121 ERROR af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20240805190713.121 ERROR commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"
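For reference, the failing mapping can be decoded mechanically: the `newuidmap` arguments after the PID are `(nsid, hostid, range)` triples, and the kernel rejects a `uid_map` write whose namespace ranges overlap (see `user_namespaces(7)`). A small Python sketch using the exact values from the log line above:

```python
# Decode the mapping arguments from the failing invocation:
#   newuidmap <pid> 0 1000000 1000000000 0 1000000 65536
# Each mapping is a (nsid, hostid, range) triple. The kernel refuses a
# uid_map whose namespace-side ranges overlap, and both triples here
# start at nsid 0.

args = [0, 1000000, 1000000000, 0, 1000000, 65536]
triples = [tuple(args[i:i + 3]) for i in range(0, len(args), 3)]

def overlaps(a, b):
    """True if the nsid ranges of two (nsid, hostid, range) triples intersect."""
    return a[0] < b[0] + b[2] and b[0] < a[0] + a[2]

for i in range(len(triples)):
    for j in range(i + 1, len(triples)):
        if overlaps(triples[i], triples[j]):
            print(f"overlap: {triples[i]} vs {triples[j]}")
# → overlap: (0, 1000000, 1000000000) vs (0, 1000000, 65536)
```

So the "Invalid argument" from `newuidmap` is the expected kernel response to this argument list, regardless of how Incus produced it.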
The `newuidmap` invocation looks broken: UID 0 gets mapped to two different ranges. Note that I’ve not yet set any configs:
sh4pe@sh4peux ~> incus config show filesharing
architecture: x86_64
config:
image.architecture: amd64
image.description: Nixos 24.05 amd64 (20240803_01:00)
image.os: Nixos
image.release: "24.05"
image.requirements.secureboot: "false"
image.serial: "20240803_01:00"
image.type: squashfs
image.variant: default
volatile.base_image: 0b126e6c6beddf58f26a1ed94452b82014a39a04ba103593bac02a0a3c65dd38
volatile.cloud-init.instance-id: 8317ccd4-8896-4797-af2b-53a88c846482
volatile.eth0.host_name: vethdd40ff2e
volatile.eth0.hwaddr: 00:16:3e:fc:fe:53
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[]'
volatile.last_state.power: STOPPED
volatile.last_state.ready: "false"
volatile.uuid: 778ced25-2f1d-40d2-95c1-22362622f0e6
volatile.uuid.generation: 778ced25-2f1d-40d2-95c1-22362622f0e6
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
But `volatile.idmap.{current,next}` look weird. Maybe it is my `/etc/subuid`?
sh4pe@sh4peux ~> cat /etc/sub{u,g}id
root:1000000:1000000000
root:1000000:1000000000
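Those lines decode as `user:start:count`, so they can be checked mechanically; a trivial sketch with the value shown above:

```python
# Decode the /etc/subuid entry shown above, format: user:start:count.
line = "root:1000000:1000000000"
user, start, count = line.split(":")
start, count = int(start), int(count)

# root is delegated exactly one contiguous range; there is no separate
# 65536-sized allocation anywhere in the file.
print(f"{user}: host ids {start}..{start + count - 1}")
# → root: host ids 1000000..1000999999
```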
No, apparently not. The part that maps 0 to `"Hostid":1000000,"Nsid":0,"Maprange":65536` looks wrong. Where does `incus` get these numbers from?
The workaround
I can patch the faulty idmaps away like this:
sh4pe@sh4peux ~> incus query --request PATCH /1.0/instances/filesharing --data '{"config": {"volatile.idmap.next":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]"}}'
sh4pe@sh4peux ~> incus query --request PATCH /1.0/instances/filesharing --data '{"config": {"volatile.idmap.current":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]"}}'
Now `incus start filesharing` works and I can log into the running instance.
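Hand-escaping that nested JSON is easy to get wrong, so the same two PATCH payloads can also be generated with a short sketch. The filtering rule (drop any entry whose range is fully covered by another entry with the same `Isuid`/`Isgid` flags and the same host offset) is my own guess at the right fix, not anything Incus documents:

```python
import json

# volatile.idmap.current / .next as shown by `incus config show` above.
broken = json.loads(
    '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},'
    '{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},'
    '{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},'
    '{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
)

def covered_by(small, big):
    """True if `small` is redundant: same mapping kind, same host offset,
    and its nsid range lies entirely inside `big`."""
    return (small is not big
            and small["Isuid"] == big["Isuid"]
            and small["Isgid"] == big["Isgid"]
            and small["Hostid"] - small["Nsid"] == big["Hostid"] - big["Nsid"]
            and big["Nsid"] <= small["Nsid"]
            and small["Nsid"] + small["Maprange"] <= big["Nsid"] + big["Maprange"])

# Keep only the maximal entries; the two 65536-range duplicates drop out.
fixed = [e for e in broken if not any(covered_by(e, o) for o in broken)]

# One payload per key, matching the two `incus query --request PATCH` calls.
for key in ("volatile.idmap.next", "volatile.idmap.current"):
    print(json.dumps({"config": {key: json.dumps(fixed, separators=(",", ":"))}}))
```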
But I’d rather not have to fix this by hand. So my question is: where does `incus` get the bad idmaps from?