Failed to set up id mapping when starting a migrated container to new incus host

I checked the subuid/subgid files in /etc; both look correct as far as I can tell:

root@incus1:/home/ansible# cat /etc/subuid
ansible:100000:65536
root:1000000:1000000000

root@incus1:/home/ansible# cat /etc/subgid
ansible:100000:65536
root:1000000:1000000000

Usually that fixes mapping errors.

When I attempt to start the container, as root or as the ansible user, it fails with the log below.

Log:

lxc nextcloud 20250109182216.584 ERROR    conf - ../src/lxc/conf.c:lxc_map_ids:3704 - newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed": newuidmap 3707 0 1000000 1000 1000 1000 1 1001 1001001 999998999
lxc nextcloud 20250109182216.584 ERROR    start - ../src/lxc/start.c:lxc_spawn:1788 - Failed to set up id mapping.
lxc nextcloud 20250109182216.584 ERROR    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:878 - Received container state "ABORTING" instead of "RUNNING"
lxc nextcloud 20250109182216.584 ERROR    start - ../src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "nextcloud"
lxc nextcloud 20250109182216.584 WARN     start - ../src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 17 for process 3707
lxc 20250109182216.605 ERROR    af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20250109182216.605 ERROR    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

This is the only container with this issue out of about a dozen migrated from another Incus host. The new host runs Incus 6.0.3 on a Debian 12 host OS; the old host runs 6.0.3 on Ubuntu 22.04. The container runs fine on the Ubuntu host, so I am not sure what step I missed. This is the first time I have seen this error in a while.

I also tried: incus config set nextcloud raw.idmap "both 1000 1000"

No luck there either. Has anyone else run into this and found a solution?
Thanks

My quick fix for now was to remove the uidmap package from the new Debian host; the container then starts fine. I noticed it was not installed on the source host and got lucky. I will need to sort out how the UID mapping works at some point. Odd that it only affected the one container.

Try adding root:1000:1 to the /etc/subuid and /etc/subgid files.
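That matches the log: newuidmap refuses the `1000 -> 1000` entry because uid 1000 is not inside root's allowed ranges in /etc/subuid (which on the new host only cover 1000000 onward). A rough sketch of the commands, assuming they are run as root on the new Debian host (the container name `nextcloud` is taken from the post above; adjust to your setup):

```shell
# Allow root to map host uid/gid 1000 one-to-one, which the
# raw.idmap "both 1000 1000" entry needs once newuidmap is installed.
echo "root:1000:1" >> /etc/subuid
echo "root:1000:1" >> /etc/subgid

# Restart the container so the new mapping takes effect.
incus restart nextcloud
```

You may also need to restart the Incus daemon (e.g. `systemctl restart incus`) if it does not pick up the changed subuid/subgid files on its own.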