Failed to set up id mapping when starting a migrated container on a new Incus host

I checked the subuid/subgid files in /etc; both look correct as far as I can tell:

root@incus1:/home/ansible# cat /etc/subuid
ansible:100000:65536
root:1000000:1000000000

root@incus1:/home/ansible# cat /etc/subgid
ansible:100000:65536
root:1000000:1000000000

Usually that fixes mapping errors.
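For anyone comparing: the two things that have to agree are what /etc/subuid and /etc/subgid delegate to root, and what Incus has recorded it wants to map for the container (nextcloud, from the log below). A quick check:

# What is root allowed to map on this host?
grep ^root: /etc/subuid /etc/subgid

# What map does Incus intend to apply on the next start?
incus config get nextcloud volatile.idmap.next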

When I attempt to start the container as root or the ansible user, it fails with the log below.

Log:

lxc nextcloud 20250109182216.584 ERROR    conf - ../src/lxc/conf.c:lxc_map_ids:3704 - newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed": newuidmap 3707 0 1000000 1000 1000 1000 1 1001 1001001 999998999
lxc nextcloud 20250109182216.584 ERROR    start - ../src/lxc/start.c:lxc_spawn:1788 - Failed to set up id mapping.
lxc nextcloud 20250109182216.584 ERROR    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:878 - Received container state "ABORTING" instead of "RUNNING"
lxc nextcloud 20250109182216.584 ERROR    start - ../src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "nextcloud"
lxc nextcloud 20250109182216.584 WARN     start - ../src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 17 for process 3707
lxc 20250109182216.605 ERROR    af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20250109182216.605 ERROR    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

This is the only container with this issue out of about a dozen migrated from another Incus host. The new host is running Incus 6.0.3 on Debian 12; the old host is running 6.0.3 on Ubuntu 22.04. The container runs fine on the Ubuntu host, so I am not sure what step I missed. First time I have seen this error in a while.
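Decoding the first log line: the arguments after the pid come in (container id, host id, range) triplets, so the container's idmap already includes a 1000 -> 1000, range 1 entry (presumably a raw.idmap carried over from the source). My understanding is that shadow's newuidmap validates every requested range against the caller's /etc/subuid even when the caller is root, and host uid 1000 is outside root's 1000000 block, hence the "not allowed". One way to reproduce the check outside Incus, assuming util-linux unshare and shadow's newuidmap:

# Create a bare user namespace and request the same 1000 -> 1000 mapping
unshare --user --kill-child sleep 30 &
newuidmap $! 1000 1000 1   # refused unless root:1000:1 is delegated in /etc/subuid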

I also tried: incus config set nextcloud raw.idmap "both 1000 1000"

With no luck. Anyone else run into this and found a solution?
Thanks
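For reference, raw.idmap takes one mapping per line, and "both" is shorthand for a matching uid and gid pair, so these two forms should be equivalent (either way it only applies on the next container start):

incus config set nextcloud raw.idmap "both 1000 1000"

# or split out per id type (bash ANSI-C quoting for the newline)
incus config set nextcloud raw.idmap $'uid 1000 1000\ngid 1000 1000'

With the uidmap package installed, the host ids named here still need to be delegated to root in /etc/subuid and /etc/subgid, which is where the suggestions below come in.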

The quick fix for now was to remove the uidmap package from the new Debian hosts; the container starts fine after that. I noticed it was not installed on the source hosts and got lucky. I will need to sort out how the UID mapping works at some point. Odd that it only affected the one container.
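For the record, this is roughly what that looked like. Debian's uidmap package ships the setuid newuidmap/newgidmap helpers; my understanding is that when they are absent, liblxc writes uid_map/gid_map directly as root, which skips the /etc/subuid delegation check entirely (service name assuming the Debian packaging):

dpkg -s uidmap        # check whether the helpers are installed
apt remove uidmap     # the quick fix: drop them
systemctl restart incus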

Try adding root:1000:1 to the /etc/subuid and /etc/subgid files.
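Something like this, followed by a daemon restart since Incus parses these files at startup (again assuming the packaged service is named incus):

printf 'root:1000:1\n' | tee -a /etc/subuid /etc/subgid
systemctl restart incus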

Tried that, and no success:

root@incus1:/etc# incus info --show-log ipa01
Name: ipa01
Status: STOPPED
Type: container
Architecture: x86_64
Location: incus2
Created: 2025/01/25 08:32 EST
Last Used: 2025/01/25 08:44 EST

Log:

lxc ipa01 20250125134432.322 ERROR    idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:245 - newuidmap failed to write mapping "newuidmap: write to uid_map failed: Invalid argument": newuidmap 17028 0 1000000 1000000000 0 1000000 65536
lxc ipa01 20250125134432.322 ERROR    start - ../src/lxc/start.c:lxc_spawn:1795 - Failed to set up id mapping.
lxc ipa01 20250125134432.322 ERROR    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:837 - Received container state "ABORTING" instead of "RUNNING"
lxc ipa01 20250125134432.323 ERROR    start - ../src/lxc/start.c:__lxc_start:2114 - Failed to spawn container "ipa01"
lxc ipa01 20250125134432.323 WARN     start - ../src/lxc/start.c:lxc_abort:1037 - No such process - Failed to send SIGKILL via pidfd 17 for process 17028
lxc 20250125134432.342 ERROR    af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20250125134432.342 ERROR    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

Sometimes they start, and sometimes they don’t, it seems. This really has me stumped, since most containers copy over fine. Maybe a user mismatch from host to host.

Here is the idmap for that container for reference:

volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
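For what it's worth, the failing newuidmap call above passes two ranges that both start at container id 0 (0 1000000 1000000000 and 0 1000000 65536, matching the two uid entries in volatile.idmap), and the kernel rejects overlapping uid_map ranges with EINVAL, which would explain the "Invalid argument". A possible recovery, treating it strictly as a hypothetical step, is to clear the recorded maps on the stopped container so Incus regenerates them on the next start; back up the config first:

incus config show ipa01 > ipa01-config-backup.yaml
incus config unset ipa01 volatile.idmap.current   # hypothetical: unsetting volatile keys may be restricted
incus config unset ipa01 volatile.idmap.next
incus start ipa01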

I was finally able to get the troublesome containers started: I needed to check for the uidmap package on the other cluster nodes (removing it where present, as above) and restart Incus. That seemed to eliminate the error and the containers started.
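Roughly the sweep I ran, with placeholder names for my cluster nodes:

for h in incus1 incus2 incus3; do
    echo "== $h =="
    ssh "$h" 'dpkg -s uidmap >/dev/null 2>&1 && echo uidmap installed || echo uidmap absent'
done
# then: apt remove uidmap && systemctl restart incus on each node where it was installed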

Now I am getting another error when trying to get shell access to 2 containers.

ansible@incus3:~$ incus shell ipa-r1
Error while executing alias expansion: incus exec ipa-r1 -- su -l
Error: fork/exec /usr/libexec/incus/incusd: no such file or directory
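A couple of checks to narrow down whether the client on that node is trying to re-execute a daemon path that does not exist there (the path is taken from the error above; packaging may install incusd elsewhere):

ls -l /usr/libexec/incus/incusd    # does the path from the error exist on this node?
command -v incus incusd

# Run the "shell" expansion by hand to rule out the alias machinery
incus exec ipa-r1 -- su -l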

Still tracking down whether it is a pattern on one host or multiple. Still ironing out this new cluster. Attempting to restart containers is not going so well… Will update.