I have a container host running incus 6.0.3 (Zabbly packages) under Ubuntu 22.04.5. I am running a bunch of containers, none of which is privileged (AFAIK).
However, when viewed from the host, some of the containers have uid-mapped filesystems and some don’t. For example:
root@nuc3:~# ls -ld /var/lib/incus/storage-pools/default/containers/*/rootfs/etc
drwxr-xr-x 80 1000000 1000000 158 Apr 9 06:22 /var/lib/incus/storage-pools/default/containers/apt-cacher/rootfs/etc
drwxr-xr-x 105 root root 200 Apr 10 06:08 /var/lib/incus/storage-pools/default/containers/builder/rootfs/etc
drwxr-xr-x 87 root root 164 Apr 11 13:25 /var/lib/incus/storage-pools/default/containers/cache1/rootfs/etc
drwxr-xr-x 86 1000000 1000000 163 Apr 9 06:35 /var/lib/incus/storage-pools/default/containers/cache2/rootfs/etc
...
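In case it’s useful, here is a rough way I’ve been enumerating which containers are unshifted, by checking the owner of each rootfs (this assumes the default pool path shown above, and that the rootfs directories are visible on the host):

for d in /var/lib/incus/storage-pools/default/containers/*/rootfs; do
  name=$(basename "$(dirname "$d")")
  uid=$(stat -c %u "$d" 2>/dev/null) || continue
  if [ "$uid" = "0" ]; then
    echo "$name: UNSHIFTED (rootfs owned by uid 0)"    # id mapping not applied on disk
  else
    echo "$name: shifted (rootfs owned by uid $uid)"   # e.g. 1000000 for the usual idmap
  fi
done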
Let’s take cache1 and cache2. Comparing the configs, I see:
root@nuc3:~# incus config show cache1 -e >cache1.conf
root@nuc3:~# incus config show cache2 -e >cache2.conf
root@nuc3:~# diff -u cache1.conf cache2.conf
--- cache1.conf 2025-04-13 08:18:47.645904417 +0000
+++ cache2.conf 2025-04-13 08:18:52.040917019 +0000
@@ -15,8 +15,8 @@
dhcp4: false
accept-ra: false
addresses:
- - 10.12.255.53/24
- - XXXX:XXXX:XXXX:XXff::53/64
+ - 10.12.255.54/24
+ - XXXX:XXXX:XXXX:XXff::54/64
gateway4: 10.12.255.1
gateway6: XXXX:XXXX:XXXX:XXff::1
nameservers:
@@ -56,16 +56,16 @@
##- systemctl restart apparmor
- systemctl restart ssh || true
volatile.base_image: 3ce8ee53605522cf813887ceb36874eeab3e655ea32a285b3820db7f4729b04e
- volatile.cloud-init.instance-id: 066e7edb-ebd6-4ad9-a607-7c0ef8761747
- volatile.eth0.host_name: vetheaf526ca
- volatile.eth0.hwaddr: 00:16:3e:35:72:b9
+ volatile.cloud-init.instance-id: f4c0776e-3912-42fd-a872-c5701fcd3967
+ volatile.eth0.host_name: vetha08f7cbe
+ volatile.eth0.hwaddr: 00:16:3e:4d:b8:e4
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
- volatile.last_state.idmap: '[]'
+ volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.power: RUNNING
- volatile.uuid: c0e49a3f-1396-438a-ae24-a63cbe919f6b
- volatile.uuid.generation: c0e49a3f-1396-438a-ae24-a63cbe919f6b
+ volatile.uuid: 61053f82-c761-4738-bcf3-babc13074da9
+ volatile.uuid.generation: 61053f82-c761-4738-bcf3-babc13074da9
devices:
eth0:
name: eth0
The essential difference seems to be volatile.last_state.idmap, which is an empty array for cache1 (the container with non-mapped IDs). However, volatile.idmap.current and volatile.idmap.next are identical in both containers. Stopping and starting cache1 makes no difference, and incus info cache1 --show-log shows an empty log.
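For comparison, this is roughly how I’m dumping the idmap-related volatile keys for every instance side by side (same key names as in the configs above):

for c in $(incus list -c n -f csv); do
  echo "== $c"
  for key in volatile.idmap.base volatile.idmap.current volatile.idmap.next volatile.last_state.idmap; do
    echo "  $key = $(incus config get "$c" "$key")"
  done
done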
Now, these containers were copied a while back from two different incus hosts which no longer exist (they were called nuc1 and nuc2). It seems that the containers copied from nuc1 ended up with unmapped filesystems.
But how can I determine, from the current container state, what is causing id mapping to be disabled for these containers? And how can I fix the containers that are currently unmapped?