UID/GID is Numeric Inside Containers

I copied a few CTs from host A to host B with lxc copy.

On host B (target), one of the copied CTs worked fine from the start, but the others wouldn’t start, showing:

Permission denied - Failed to open "/var/snap/lxd/common/lxd/storage-pools/default/containers/

I looked into their /1.0/instances/ctXXX/logs/lxc.conf; the only difference was that the ones that wouldn’t start were missing this line at the end, while the one that worked had it:

rootfs.options=idmap=container

So I compared the output of lxc config show and noticed that the ones that wouldn’t start had this value for volatile.last_state.idmap:

[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]

while the CT that was able to start had '[]'.

I issued lxc config set ctXXX volatile.last_state.idmap '[]' for the ones that wouldn’t start, and they finally started. Everything seems fine except for one thing: the UID/GID inside the containers that previously wouldn’t start now shows as 1001000 1001000 (those CTs now also have rootfs.options=idmap=container).
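For the record, this is the workaround as I applied it, looped over the affected CTs (the names here are placeholders):

```shell
# Placeholder names; substitute the CTs that refuse to start.
for ct in ct101 ct102 ct103; do
    lxc config set "$ct" volatile.last_state.idmap '[]'
    lxc start "$ct"
done
```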

The CTs are standard Ubuntu, from the default template. Both hosts run Ubuntu 22.04, and hosts and CTs alike (including the CT that always worked) have:

$ cat /etc/sub{uid,gid}
ubuntu:100000:65536
ubuntu:100000:65536

I already tried reloading the snap; it didn’t help.

Host B (target; everything is using the dir storage driver):

  driver: lxc | qemu
  driver_version: 5.0.1 | 7.1.0
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-53-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: node3
  server_pid: 39148
  server_version: "5.8"
  storage: dir
  storage_version: "1"

Host A (source; also using the dir driver):

  driver: lxc | qemu
  driver_version: 5.0.1 | 7.1.0
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-47-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: node16
  server_pid: 1961418
  server_version: "5.8"
  storage: dir
  storage_version: "1"

How do I fix the mappings inside the CTs that now show numeric IDs?

I really need some help with this :nauseated_face:

$ lxc exec myct -- sh -c "cat /etc/subuid; cat /etc/passwd | egrep 'root|ubuntu:'; cat /etc/group | grep 'ubuntu:'; ls -lha /home/ubuntu"
ubuntu:100000:65536
root:x:0:0:root:/root:/bin/bash
ubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash
ubuntu:x:1000:
total 116K
drwxr-xr-x 13 1001000 1001000 4.0K Oct 17 18:23 .
drwxr-xr-x  3 1000000 1000000 4.0K Sep 22  2021 ..
-rw-r--r--  1 1001000 1001000  956 Jul  7 20:24 .bash_aliases

The mappings are correct, but they don’t translate.

So, because the original issue wasn’t resolved (instead, you manually changed the mapping settings so the containers now use the idmapped mounts feature), the actual files on disk still retain their static ownership shift. When that is dynamically passed through to the container via the idmapped mount, the files are effectively being ID-shifted twice.
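In concrete numbers, using the 1000000 offset from the map above (the intermediate step is just an illustration of the two opposing shifts):

```shell
offset=1000000
ubuntu_uid=1000
disk_id=$((ubuntu_uid + offset))   # 1001000: the old static shift, still on disk
# The idmapped mount and the container's user namespace each apply the
# same 1000000 map, in opposite directions, so they cancel out and the
# container ends up seeing the raw on-disk id:
ct_view=$((disk_id + offset - offset))
echo "$ct_view"   # 1001000, which has no passwd entry, hence the numeric display
```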

To resolve this you need to either restore the original mapping settings, or correct the ownership of the files on disk back to their unshifted values (i.e. 1000000 back to 0, and so on).
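For the second option, a rough sketch of walking the rootfs and subtracting the offset. The path, the fixed 1000000 offset, and the assumption that every file was shifted by exactly that amount are all assumptions for this particular setup; stop the container and take a backup first. (The next post shows the safer route of letting LXD do the unshifting itself.)

```shell
#!/bin/bash
# Sketch only: shift on-disk ownership back down by the static offset.
# ROOTFS and OFFSET are assumptions for this snap-based setup. Run as
# root, with the container stopped, against a backed-up copy first.
OFFSET=1000000
ROOTFS=/var/snap/lxd/common/lxd/storage-pools/default/containers/ctXXX/rootfs

unshift_id() {
    # Map a statically shifted id (e.g. 1001000) back down (to 1000);
    # ids below the offset are left untouched.
    local id=$1
    if [ "$id" -ge "$OFFSET" ]; then
        echo $((id - OFFSET))
    else
        echo "$id"
    fi
}

if [ -d "$ROOTFS" ]; then
    find "$ROOTFS" -print0 | while IFS= read -r -d '' f; do
        uid=$(stat -c %u "$f")
        gid=$(stat -c %g "$f")
        # -h so a symlink's own ownership changes, not its target's
        chown -h "$(unshift_id "$uid"):$(unshift_id "$gid")" "$f"
    done
fi
```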


I was able to shift a previously statically shifted container back to unshifted as follows:

Previously statically shifted container:

lxc config show c1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221204_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221204_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 1015dfba73874bc0813b11bff37dc0ef6d7ef49a2a15721fdf5dfad7170150c8
  volatile.cloud-init.instance-id: d8c45281-efb2-4f90-a723-e50a88319fd7
  volatile.eth0.hwaddr: 00:16:3e:61:88:f6
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: b1b00fac-ef9b-4cbb-bbaa-785fc9a21fbc
devices:
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

sudo ls /var/lib/lxd/storage-pools/default/containers/c1/rootfs -lt
total 60
drwxrwxrwt  7 1000000 1000000 4096 Dec  5 12:25 tmp
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:48 dev
drwxr-xr-x 62 1000000 1000000 4096 Dec  4 07:45 etc
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:45 run
drwxr-xr-x  3 1000000 1000000 4096 Dec  4 07:45 home
drwxr-xr-x 12 1000000 1000000 4096 Dec  4 07:44 var
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:43 media
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:43 mnt
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:43 opt
drwx------  2 1000000 1000000 4096 Dec  4 07:43 root
drwxr-xr-x  2 1000000 1000000 4096 Dec  4 07:43 srv
drwxr-xr-x 14 1000000 1000000 4096 Dec  4 07:43 usr
lrwxrwxrwx  1 1000000 1000000    7 Dec  4 07:43 bin -> usr/bin
lrwxrwxrwx  1 1000000 1000000    7 Dec  4 07:43 lib -> usr/lib
lrwxrwxrwx  1 1000000 1000000    9 Dec  4 07:43 lib32 -> usr/lib32
lrwxrwxrwx  1 1000000 1000000    9 Dec  4 07:43 lib64 -> usr/lib64
lrwxrwxrwx  1 1000000 1000000   10 Dec  4 07:43 libx32 -> usr/libx32
lrwxrwxrwx  1 1000000 1000000    8 Dec  4 07:43 sbin -> usr/sbin
drwxr-xr-x  2 1000000 1000000 4096 Apr 18  2022 boot
drwxr-xr-x  2 1000000 1000000 4096 Apr 18  2022 proc
drwxr-xr-x  2 1000000 1000000 4096 Apr 18  2022 sys

Remove all shifting temporarily (dangerous: it means the container’s processes then run as privileged users on the host, so don’t leave it like this):

lxc config set c1 volatile.idmap.next='[]'

lxc start c1
lxc config show c1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221204_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221204_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 1015dfba73874bc0813b11bff37dc0ef6d7ef49a2a15721fdf5dfad7170150c8
  volatile.cloud-init.instance-id: d8c45281-efb2-4f90-a723-e50a88319fd7
  volatile.eth0.host_name: veth244e50ed
  volatile.eth0.hwaddr: 00:16:3e:61:88:f6
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: b1b00fac-ef9b-4cbb-bbaa-785fc9a21fbc
devices:
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

sudo ls /var/lib/lxd/storage-pools/default/containers/c1/rootfs -lt
total 60
drwxrwxrwt  7 root root 4096 Dec  5 12:26 tmp
drwxr-xr-x  2 root root 4096 Dec  4 07:48 dev
drwxr-xr-x 62 root root 4096 Dec  4 07:45 etc
drwxr-xr-x  2 root root 4096 Dec  4 07:45 run
drwxr-xr-x  3 root root 4096 Dec  4 07:45 home
drwxr-xr-x 12 root root 4096 Dec  4 07:44 var
drwxr-xr-x  2 root root 4096 Dec  4 07:43 media
drwxr-xr-x  2 root root 4096 Dec  4 07:43 mnt
drwxr-xr-x  2 root root 4096 Dec  4 07:43 opt
drwx------  2 root root 4096 Dec  4 07:43 root
drwxr-xr-x  2 root root 4096 Dec  4 07:43 srv
drwxr-xr-x 14 root root 4096 Dec  4 07:43 usr
lrwxrwxrwx  1 root root    7 Dec  4 07:43 bin -> usr/bin
lrwxrwxrwx  1 root root    7 Dec  4 07:43 lib -> usr/lib
lrwxrwxrwx  1 root root    9 Dec  4 07:43 lib32 -> usr/lib32
lrwxrwxrwx  1 root root    9 Dec  4 07:43 lib64 -> usr/lib64
lrwxrwxrwx  1 root root   10 Dec  4 07:43 libx32 -> usr/libx32
lrwxrwxrwx  1 root root    8 Dec  4 07:43 sbin -> usr/sbin
drwxr-xr-x  2 root root 4096 Apr 18  2022 boot
drwxr-xr-x  2 root root 4096 Apr 18  2022 proc
drwxr-xr-x  2 root root 4096 Apr 18  2022 sys

Reinstate shifting using idmapped mounts:

lxc stop c1

lxc config set c1 volatile.idmap.next='[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'

lxc start c1
lxc config show c1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221204_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221204_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 1015dfba73874bc0813b11bff37dc0ef6d7ef49a2a15721fdf5dfad7170150c8
  volatile.cloud-init.instance-id: d8c45281-efb2-4f90-a723-e50a88319fd7
  volatile.eth0.host_name: veth2b464be5
  volatile.eth0.hwaddr: 00:16:3e:61:88:f6
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: b1b00fac-ef9b-4cbb-bbaa-785fc9a21fbc
devices:
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

sudo ls /var/lib/lxd/storage-pools/default/containers/c1/rootfs -lt
total 60
drwxrwxrwt  9 root root 4096 Dec  5 12:28 tmp
drwxr-xr-x  2 root root 4096 Dec  4 07:48 dev
drwxr-xr-x 62 root root 4096 Dec  4 07:45 etc
drwxr-xr-x  2 root root 4096 Dec  4 07:45 run
drwxr-xr-x  3 root root 4096 Dec  4 07:45 home
drwxr-xr-x 12 root root 4096 Dec  4 07:44 var
drwxr-xr-x  2 root root 4096 Dec  4 07:43 media
drwxr-xr-x  2 root root 4096 Dec  4 07:43 mnt
drwxr-xr-x  2 root root 4096 Dec  4 07:43 opt
drwx------  2 root root 4096 Dec  4 07:43 root
drwxr-xr-x  2 root root 4096 Dec  4 07:43 srv
drwxr-xr-x 14 root root 4096 Dec  4 07:43 usr
lrwxrwxrwx  1 root root    7 Dec  4 07:43 bin -> usr/bin
lrwxrwxrwx  1 root root    7 Dec  4 07:43 lib -> usr/lib
lrwxrwxrwx  1 root root    9 Dec  4 07:43 lib32 -> usr/lib32
lrwxrwxrwx  1 root root    9 Dec  4 07:43 lib64 -> usr/lib64
lrwxrwxrwx  1 root root   10 Dec  4 07:43 libx32 -> usr/libx32
lrwxrwxrwx  1 root root    8 Dec  4 07:43 sbin -> usr/sbin
drwxr-xr-x  2 root root 4096 Apr 18  2022 boot
drwxr-xr-x  2 root root 4096 Apr 18  2022 proc
drwxr-xr-x  2 root root 4096 Apr 18  2022 sys

But the container’s processes are now running as unprivileged users rather than as the host’s root.