UID mapping not working on focal container

Hi

I have a shared directory that is mounted in all of my containers. It works well in the bionic containers but is unwritable in the focal containers because the uid:gid mappings aren’t working properly.
User IDs are the same in both containers; I only use the standard www-data user anyway.
The focal container is built from the Ubuntu minimal images if that makes any difference.
Any idea what has changed between releases to cause this issue?

Thanks for reading.

I’ve tested with a non-minimal image; same issue. I thought it might be related to apparmor.

Unlikely to be apparmor.
So you confirmed that the uid/gid of the www-data user is the same in both containers?

Can you show lxc config show --expanded NAME for both of them too?

Hi, yes, the uid and gid of www-data are the same on both.

Here is the config. I can see the difference, but I’m not sure what to do about it. They use the same profile on the same standalone server.

 lxc config show --expanded sandbox1:plugin-dev-master-21
architecture: x86_64
config:
  volatile.base_image: bbdc1c11a4cd3a5ee03e546debf07753fb2d5e3ae5fa5255b224993fab6cc85c
  volatile.eth0.host_name: vethc1f45702
  volatile.eth0.hwaddr: 00:16:3e:33:d0:54
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  connectshared:
    path: /mnt/connectshared/
    source: /mnt/connectshared/
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

 lxc config show --expanded sandbox1:apirouter-wayne
architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Ubuntu 20.04 LTS minimal (20200729)
  image.os: ubuntu
  image.release: focal
  volatile.base_image: 0e73a34ea095b7efeda3408b0e01a0f1f75e55f54e45fce8542284de21b4a120
  volatile.eth0.host_name: vethbac51800
  volatile.eth0.hwaddr: 00:16:3e:b3:81:73
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  connectshared:
    path: /mnt/connectshared/
    source: /mnt/connectshared/
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Can you show stat /mnt/connectshared/ on the host and cat /proc/self/uid_map from inside both containers?

$ lxc exec sandbox1:plugin-dev-master-21 cat /proc/self/uid_map
         0     100000      65536
$ lxc exec sandbox1:apirouter-wayne cat /proc/self/uid_map
         0    1000000 1000000000

What do you mean by show state?
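The two uid_map lines above are the mismatch: the bionic container maps its uids onto the host starting at 100000, the focal one starting at 1000000. Assuming www-data is uid 33 in both (the Ubuntu default), files it creates land on different host uids:

  bionic (plugin-dev-master-21):  100000 + 33 = 100033 on the host
  focal (apirouter-wayne):       1000000 + 33 = 1000033 on the host

Host uids outside a container’s own map show up as nobody/nogroup inside it, which is why the share is unwritable from the focal container.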

Oh, I see, that makes sense now.

So the easiest is probably to do:

  • lxc config set sandbox1:plugin-dev-master-21 security.privileged true
  • lxc restart sandbox1:plugin-dev-master-21
  • lxc config unset sandbox1:plugin-dev-master-21 security.privileged
  • lxc restart sandbox1:plugin-dev-master-21

That will effectively make it a privileged container temporarily, then shift it back to unprivileged. The shift back should pick up the newer, larger range matching that of apirouter-wayne, at which point both containers will use the same uid/gid range and can share data properly.
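If you want to double-check that the second restart picked up the new map, the container’s uid_map should then match the one from apirouter-wayne, something like:

  lxc exec sandbox1:plugin-dev-master-21 cat /proc/self/uid_map
           0    1000000 1000000000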

Note that, assuming sandbox1:plugin-dev-master-21 is your currently working container, you will have to fix the ownership of /mnt/connectshared on the host to match the new owner in the container. In this case, I would expect something like 100033 to become 1000033.
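If www-data is the only owner in that share, the host-side fix could be as simple as this sketch (adjust if other uids own files in there, shifting each of them from the 100000 base to the 1000000 base):

  chown -R 1000033:1000033 /mnt/connectshared/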

Thanks. I’ll have to apply that to the source image I build all my containers from. Any idea why focal is so different from bionic?

The image shouldn’t matter; when you launch from an image, LXD will pick the new map.
That’s unless your "image" is actually a container that you lxc copy, in which case it does indeed need this trick applied to it.

The main difference from bionic to focal is the switch to the snap, which unties LXD from distro-specific policies and, in this case, unlocks a much larger (and slightly offset) uid/gid map for containers to use.

Thanks. Is there any way to go the opposite way and use the smaller range on the new containers? There is a HUGE risk and a lot of downtime associated with changing all my current containers to use the larger, new range.

Yeah, it’s possible, though a bit hackish :)

Can you show lxc config show NAME | grep volatile on both the existing and new container?

LXD will always allocate new containers starting at 1000000 instead of 100000, but we can force a remap to a lower range on a per-container basis.

~$ lxc config show sandbox1:plugin-dev-master-21 | grep volatile
volatile.base_image: bbdc1c11a4cd3a5ee03e546debf07753fb2d5e3ae5fa5255b224993fab6cc85c
volatile.eth0.host_name: vethc1f45702
volatile.eth0.hwaddr: 00:16:3e:33:d0:54
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.last_state.power: RUNNING
~$ lxc config show sandbox1:apirouter-wayne | grep volatile
volatile.base_image: 0e73a34ea095b7efeda3408b0e01a0f1f75e55f54e45fce8542284de21b4a120
volatile.eth0.host_name: vethbac51800
volatile.eth0.hwaddr: 00:16:3e:b3:81:73
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.power: RUNNING

Try:

  • lxc stop sandbox1:apirouter-wayne
  • lxc config set sandbox1:apirouter-wayne volatile.idmap.next '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  • lxc start sandbox1:apirouter-wayne
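If the start goes through cleanly, cat /proc/self/uid_map inside apirouter-wayne should show the smaller range again, matching plugin-dev-master-21:

  lxc exec sandbox1:apirouter-wayne cat /proc/self/uid_map
           0     100000      65536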

Thanks

Starting gives this error.

Error: Common start logic: Failed to change ACLs on /var/snap/lxd/common/lxd/storage-pools/default/containers/apirouter-wayne/rootfs/var/log/journal

I appreciate all the help. We’re going to work with your original advice. After my initial panic, it seems that not that many of our containers use the shared storage so we can probably make the change if we can get the correct mapping where necessary.

Ok, /var/log/journal has been an occasional pain: attributes pile up on it, which makes shifting fail because the filesystem is unable to store the new value…

I’d strongly recommend you delete that path (rm -rf /var/log/journal) inside the container before attempting any remapping, especially if dealing with an older container (which doesn’t seem to be the case here, though).

/var/log/journal holds the historical content of the systemd journal (journalctl).
Deleting it should only make you lose some historical binary log data; journald should recreate the path on next boot and start logging there again.
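For a running container that can be done from the host before the remap, e.g.:

  lxc exec sandbox1:plugin-dev-master-21 -- rm -rf /var/log/journal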

That worked! Thanks :)