No remapping of container after restore

It does not show the IDs, only the user/group names; the dump I provided earlier did (it was for prosody.cfg.lua; the full name was cut off by hexedit).

Ref: the tar version should be exactly the same, since AFAIK tar is part of the snap package (LXD 4.23 in both cases).

drwxr-xr-x root/root         0 2022-01-18 16:17 backup/container/rootfs/etc/prosody
-rw-r--r-- root/root       292 2020-01-20 13:58 backup/container/rootfs/etc/prosody/README
drwxr-x--- root/lpadmin      0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/auth.reunions.example.com.crt -> /var/lib/prosody/auth.reunions.example.com.crt
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/auth.reunions.example.com.key -> /var/lib/prosody/auth.reunions.example.com.key
lrwxrwxrwx root/root         0 2022-01-12 00:09 backup/container/rootfs/etc/prosody/certs/localhost.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem
lrwxrwxrwx root/root         0 2022-01-12 00:09 backup/container/rootfs/etc/prosody/certs/localhost.key -> /etc/ssl/private/ssl-cert-snakeoil.key
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/reunions.example.com.crt -> /var/lib/prosody/reunions.example.com.crt
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/reunions.example.com.key -> /var/lib/prosody/reunions.example.com.key
drwxr-xr-- root/lpadmin      0 2022-02-02 20:01 backup/container/rootfs/etc/prosody/conf.avail
-rw-r--r-- root/root      1039 2020-01-20 13:58 backup/container/rootfs/etc/prosody/conf.avail/example.com.cfg.lua
-rw-r--r-- root/root       114 2020-01-20 13:58 backup/container/rootfs/etc/prosody/conf.avail/localhost.cfg.lua
-rw-r--r-- root/root      4446 2022-02-02 20:01 backup/container/rootfs/etc/prosody/conf.avail/reunions.example.com.cfg.lua
drwxr-xr-- root/lpadmin      0 2022-01-18 16:27 backup/container/rootfs/etc/prosody/conf.d
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/conf.d/localhost.cfg.lua -> ../conf.avail/localhost.cfg.lua
lrwxrwxrwx root/root         0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/conf.d/reunions.example.com.cfg.lua -> /etc/prosody/conf.avail/reunions.example.com.cfg.lua
-rw-r--r-- root/root       353 2020-01-20 13:58 backup/container/rootfs/etc/prosody/migrator.cfg.lua
-rw-r----- root/lpadmin   9798 2020-01-20 13:58 backup/container/rootfs/etc/prosody/prosody.cfg.lua

What is group ID 122 on your new LXD host (is it lpadmin)?

Thanks. Could you also show the same output, but with the --numeric-owner flag added to tar?

Yes, this is on the new system; /mnt/diskroot is the old system:

cat /etc/group | grep lpadmin
lpadmin:x:122:gerard
(py38) gerard@j5005:~$ cat /mnt/diskroot/etc/group | grep lpadmin
lpadmin:x:120:gerard
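As a side note, getent is a quick way to check which name (if any) a numeric GID maps to on the running system:

```shell
# Look up a GID in the local group database; prints nothing
# (and exits non-zero) if the GID is unmapped on this system.
getent group 0
getent group 122 || echo "GID 122 is unmapped here"
```

For the old root's database, grepping /mnt/diskroot/etc/group as above is the way to go, since getent only reads the live NSS configuration.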

-rwxr-xr-x 0/0            2462 2020-01-20 13:58 backup/container/rootfs/etc/init.d/prosody
-rw-r--r-- 0/0             267 2020-01-20 13:58 backup/container/rootfs/etc/logrotate.d/prosody
drwxr-xr-x 0/0               0 2022-01-18 16:17 backup/container/rootfs/etc/prosody
-rw-r--r-- 0/0             292 2020-01-20 13:58 backup/container/rootfs/etc/prosody/README
drwxr-x--- 0/120             0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/auth.reunions.example.com.crt -> /var/lib/prosody/auth.reunions.example.com.crt
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/auth.reunions.example.com.key -> /var/lib/prosody/auth.reunions.example.com.key
lrwxrwxrwx 0/0               0 2022-01-12 00:09 backup/container/rootfs/etc/prosody/certs/localhost.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem
lrwxrwxrwx 0/0               0 2022-01-12 00:09 backup/container/rootfs/etc/prosody/certs/localhost.key -> /etc/ssl/private/ssl-cert-snakeoil.key
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/reunions.example.com.crt -> /var/lib/prosody/reunions.example.com.crt
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/certs/reunions.example.com.key -> /var/lib/prosody/reunions.example.com.key
drwxr-xr-- 0/120             0 2022-02-02 20:01 backup/container/rootfs/etc/prosody/conf.avail
-rw-r--r-- 0/0            1039 2020-01-20 13:58 backup/container/rootfs/etc/prosody/conf.avail/example.com.cfg.lua
-rw-r--r-- 0/0             114 2020-01-20 13:58 backup/container/rootfs/etc/prosody/conf.avail/localhost.cfg.lua
-rw-r--r-- 0/0            4446 2022-02-02 20:01 backup/container/rootfs/etc/prosody/conf.avail/reunions.example.com.cfg.lua
drwxr-xr-- 0/120             0 2022-01-18 16:27 backup/container/rootfs/etc/prosody/conf.d
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/conf.d/localhost.cfg.lua -> ../conf.avail/localhost.cfg.lua
lrwxrwxrwx 0/0               0 2022-01-18 10:17 backup/container/rootfs/etc/prosody/conf.d/reunions.example.com.cfg.lua -> /etc/prosody/conf.avail/reunions.example.com.cfg.lua
-rw-r--r-- 0/0             353 2020-01-20 13:58 backup/container/rootfs/etc/prosody/migrator.cfg.lua

On the old system I created two containers: one with the original option (shiftfs enabled) and one after disabling shiftfs (as is the case on the new system and on a test system I set up on another 20.04 machine). Importing them leads to no remapping on the new 22.04, while on 20.04 the unmapped one remaps. I am inclined to think that this failure to remap a filesystem that needs it is the only problem. This scenario (origin shifted, target unshifted) works fine with origin=20.04 and target=20.04, but not with origin=20.04 and target=22.04.

Also, I have tried importing into a dir storage pool: no remapping either, and looking inside the container the group IDs are sometimes wrong too; the result is identical to the btrfs storage…

Finally, I tried enabling shiftfs on the new system. I expected it to work fine; there was no remapping phase (of course), but inside the container the group IDs in my test directory were still wrong.
Now, what's more interesting is that I then reset shiftfs to false and launched a new focal container, and it did not remap. As the new system is, well, new, I had not tried that yet.
Oh well, I had the idea to start a new VM in my KVM setup, and of course it's deprecated now. That will be all for today then :-/

Yes, I don't think this is a shiftfs or idmapped mount problem.
I think this is a tarball unpack problem.

I'm looking for a reproducer. However, we are already passing --numeric-owner to the tar unpack command (lxd/backup: Call tar with --numeric-owner · lxc/lxd@0401bc9 · GitHub).

I'm afraid I'm not following what you mean here. Please define "shifted" in this context.

I've not been able to reproduce the issue.
If you could create a container export that exhibits the problem when imported into LXD 4.23 from the snap package running on Ubuntu 22.04, and provide me a link to it, I could try importing it to reproduce the issue.

Thanks for your time, but a question: did you try on a desktop or a server version? Here is the result of my install party/tests with default images from standard Ubuntu (not Kubuntu):

  • VM with 22.04 server: works
  • physical part on same computer as the Kubuntu install, with 22.04 desktop: problem
  • physical part on same computer as the Kubuntu install, with 22.04 server: works

I also upgraded Kubuntu to the same kernel as Ubuntu 22.04 server (and desktop), 5.15.0-22-generic, and the problem stays the same.

So I have regenerated a container exhibiting the problem on a vanilla Ubuntu 20.04 workstation.

Procedure:

snap set lxd shiftfs.enable=true, then reload LXD

lxc launch ubuntu:focal focaltest
lxc exec focaltest bash
→ apt update; apt install prosody
In the container, the files created in /etc/prosody that are group-owned by prosody have a GID of 120 (because the prosody group has this ID, obviously).
On the host, the group with ID 120 is lpadmin.
Exit the container, then: lxc export focaltest focaltest.tar.gz

→ copy the backup to an Ubuntu 22.04 workstation with a default install (shiftfs.enable not set)
On the Ubuntu 22.04 machine: lxc import focaltest.tar.gz
lxc exec focaltest bash
ls /etc/prosody -lart → some files have a numeric group ID (122 does not exist in the container's /etc/group)
So in the tar.gz the group name for those files is lpadmin, and since on Ubuntu 22.04 the group ID of lpadmin is 122, the name takes precedence over the numeric ID stored for files belonging to the prosody group (120 in the container).
My guess is that it works on Ubuntu server because the lpadmin group does not exist there.
The only relation to shiftfs is that when it is set to false on the backed-up system, the IDs are over 100000 and cannot be confused by tar with local IDs when exporting, so no user/group names are stored in the tar file.
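That name-vs-ID behaviour of tar can be seen with a small standalone experiment (the lpadmin name and GID 120 below are just illustrative values; the NAME:ID form of --owner/--group requires GNU tar 1.30 or later):

```shell
set -eu
cd "$(mktemp -d)"
echo test > somefile
# Store the file with an explicit owner/group name and numeric ID,
# the way a backup from the old host would carry lpadmin/120.
tar -cf demo.tar --owner=root:0 --group=lpadmin:120 somefile
# The default listing shows the stored names...
tar -tvf demo.tar | grep 'root/lpadmin'
# ...while --numeric-owner sticks to the stored numeric IDs.
tar -tvf demo.tar --numeric-owner | grep '0/120'
```

On extraction as root the same rule applies: without --numeric-owner, a stored name that exists locally wins over the stored numeric ID.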

I think there could be many ways of reproducing it, but I have sent you a link to an example by DM.

Thanks for this. I've got a Jammy desktop, so I'll test on there.

I've not been able to reproduce it.

Can you try something for me? Import it onto a Jammy system, into a dir-based pool, and then, before starting it, check the output of:

ls /var/snap/lxd/common/lxd/storage-pools/<pool>/containers/focaltest/rootfs/etc/prosody/ -lan

On mine it looks like this after import, before starting, which lines up with the 120 group ID for prosody in the container:

ls /var/snap/lxd/common/lxd/storage-pools/default/containers/focaltest/rootfs/etc/prosody/ -lan
total 40
drwxr-xr-x  5 0   0 4096 Mar  9 16:53 .
drwxr-xr-x 93 0   0 4096 Mar  9 16:53 ..
drwxr-x---  2 0 120 4096 Mar  9 16:53 certs
drwxr-xr--  2 0 120 4096 Mar  9 16:53 conf.avail
drwxr-xr--  2 0 120 4096 Mar  9 16:53 conf.d
-rw-r--r--  1 0   0  353 Jan 20  2020 migrator.cfg.lua
-rw-r-----  1 0 120 9798 Jan 20  2020 prosody.cfg.lua
-rw-r--r--  1 0   0  292 Jan 20  2020 README

I'm running this on the edge snap, so it would be interesting to see if you still get the same issue on that.

Oh, I just recreated it using the native build on my Jammy system rather than the snap (so it's not running inside the snap mount namespace).

Fixed it; it should make it into LXD 4.24, which will be released shortly.

oh well.

sudo nsenter -t $(pgrep daemon.start) -m --  ls /var/snap/lxd/common/lxd/storage-pools/pool2/containers/focaltest/rootfs/etc/prosody -lan
total 40
drwxr-xr-x  5 0   0 4096 Mar  8 22:02 .
drwxr-xr-x 93 0   0 4096 Mar  8 22:02 ..
-rw-r--r--  1 0   0  292 Jan 20  2020 README
drwxr-x---  2 0 122 4096 Mar  8 22:02 certs
drwxr-xr--  2 0 122 4096 Mar  8 22:02 conf.avail
drwxr-xr--  2 0 122 4096 Mar  8 22:02 conf.d
-rw-r--r--  1 0   0  353 Jan 20  2020 migrator.cfg.lua
-rw-r-----  1 0 122 9798 Jan 20  2020 prosody.cfg.lua

This is on Kubuntu 22.04.
which lxd
/snap/bin/lxd

But if I have to use nsenter -t $(pgrep daemon.start), there is no doubt that it's running from the snap, is there?

Oh good grief. I deleted the container and recreated it after switching to LXD edge.

sudo nsenter -t $(pgrep daemon.start) -m --  ls /var/snap/lxd/common/lxd/storage-pools/pool2/containers/focaltest/rootfs/etc/prosody -lan
total 40
drwxr-xr-x  5 0   0 4096 Mar  8 22:02 .
drwxr-xr-x 93 0   0 4096 Mar  8 22:02 ..
-rw-r--r--  1 0   0  292 Jan 20  2020 README
drwxr-x---  2 0 120 4096 Mar  8 22:02 certs
drwxr-xr--  2 0 120 4096 Mar  8 22:02 conf.avail
drwxr-xr--  2 0 120 4096 Mar  8 22:02 conf.d
-rw-r--r--  1 0   0  353 Jan 20  2020 migrator.cfg.lua
-rw-r-----  1 0 120 9798 Jan 20  2020 prosody.cfg.lua

What change could have fixed it? Your change is only 3 minutes old…

There has been a bunch of work on hardening calls to external unpacker commands (such as tar); some of it landed in LXD 4.23, and some was subsequently cherry-picked to fix regressions:

This one is interesting:

There have also been some more which haven't yet been cherry-picked:

Although the issue certainly still exists in edge, it appears to be trickier to trigger when running inside the snap mount namespace (perhaps because the snap uses a different /etc/passwd and /etc/group).

All right, thanks for your time.
My containers import well under LXD edge.
I'll try to follow the 4.24 release, test again once it includes your latest fix, and report back if I see anything of interest. Bye.
