Can't mount a volume shared between two containers due to idmap mismatch

Hi

A bit of context: I have a container that's been running for a while and started life on an earlier version of LXD (not sure which one, but the container image was Ubuntu 17.10, and it has since been upgraded to 20.04).

I want to share a custom storage volume (attached as a disk device) between two containers, the one above and a new one created on LXD 4.14:

~$ lxc version
Client version: 4.14
Server version: 4.14
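
For reference, the custom volume lives on the default pool and was created with something along these lines:

$ lxc storage volume create default paperless-ng-syncthing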

The volume is attached to the new container, and then I wanted to add it to the older container:

$ lxc config device add syncthing paperless disk pool=default source=paperless-ng-syncthing path=/srv/paperless-ng
Error: Failed to start device "paperless": Failed shifting storage volume "paperless-ng-syncthing" of type "custom" on storage pool "default": Idmaps of container and storage volume are not identical

So I checked the idmaps of the two containers. First the old one:

  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'

And now the new one:

  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
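
(These values come straight out of the container config, e.g.:

$ lxc config show syncthing | grep volatile.idmap

for the old container.)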

The difference seems to be that the old container maps Nsid 0 to Hostid 100000 with a Maprange of 65536, while the new one maps to Hostid 1000000 with a Maprange of 1000000000.
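
For what it's worth, those ranges appear to come from root's allocations in /etc/subuid and /etc/subgid on the host (when present), which can be inspected with:

$ grep root /etc/subuid /etc/subgid

Newer LXD releases default to a much larger allocation than the old shadow default of 100000:65536.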

Is it possible to update the old one to match the new one, or is there another trick to get the volume attached to the old container?

Thanks.

For cases like this, the following trick usually works:

  • lxc config set NAME security.privileged true
  • lxc start NAME
  • lxc stop NAME
  • lxc config unset NAME security.privileged
  • lxc start NAME

This effectively unshifts everything on disk, then shifts everything back to the idmap that's currently in use.
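
Concretely, for the old container from the error above (syncthing), that would be:

$ lxc config set syncthing security.privileged true
$ lxc start syncthing
$ lxc stop syncthing
$ lxc config unset syncthing security.privileged
$ lxc start syncthing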

That did the trick! Thanks very much. The idmaps now match, so I will try attaching the volume again. But first I have to sort out my old ACL (setfacl) entries that mapped a folder from the host into the container!
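
In case it helps anyone else hitting the same error: I verified the new map before retrying the device add with something like:

$ lxc config get syncthing volatile.idmap.current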