On the old system I created two containers, one with the original option (shiftfs enabled) and one after disabling shiftfs (as is the case on the new system and on the test system I set up on another 20.04). Importing them leads to no remapping on the new 22.04, while on 20.04 the unmapped one remaps. I am inclined to think that this failure to remap a file system that needs it is the only problem. This scenario (origin shifted, target unshifted) works fine with origin=20.04 and target=20.04, but not with origin=20.04 and target=22.04.
Also, I have tried importing into a dir storage pool: no remapping either, and looking inside the container the group IDs are sometimes wrong too; the result is identical to the btrfs storage…
Finally, I have tried enabling shiftfs on the new system. I expected it to work fine; there was no remapping phase (of course), but inside the container the group IDs in my test directory were still wrong.
Now what's more interesting is that I then tried resetting shiftfs to false and launched a new focal container, and it did not remap. As the new system is, well, new, I had not yet done that.
Oh well, I had the idea of starting a new VM in my KVM setup, and of course it's deprecated now. That will be all for today then :-/
I've not been able to reproduce the issue.
If you could create a container export that exhibits the problem when imported into LXD 4.23 from the snap package running on Ubuntu 22.04 and provide me a link to that image, I could import it and try to reproduce the issue.
Thanks for your time, but a question: did you try on a desktop or a server version? Here are the results of my install party/tests with default images from standard Ubuntu (not Kubuntu):
VM with 22.04 server: works
physical partition on the same computer as the Kubuntu install, with 22.04 desktop: problem
physical partition on the same computer as the Kubuntu install, with 22.04 server: works
I also upgraded Kubuntu to the same kernel as Ubuntu 22.04 server (and desktop), 5.15.0-22-generic, and the problem stays the same.
So I have regenerated a new container exhibiting the problem, on a vanilla Ubuntu 20.04 workstation.
Procedure:
snap set lxd shiftfs.enable=true, then reload LXD
lxc launch ubuntu:focal focaltest
lxc exec focaltest bash
→ apt update; apt install prosody
In the container, the files created in /etc/prosody that are group-owned by prosody have a group ID of 120 (because the prosody group has this ID, obviously).
On the host, the group with ID 120 is lpadmin.
Exit the container, then lxc export focaltest focaltest.tar.gz
→ copy the backup to an Ubuntu 22.04 workstation with a default install (shiftfs.enable not set)
On the Ubuntu 22.04 machine, lxc import focaltest.tar.gz
lxc exec focaltest bash
ls /etc/prosody -lart → some files have a numerical group ID (122, which doesn't exist in /etc/group of the container)
So in the tar.gz the group name for the files is lpadmin, and since on Ubuntu 22.04 the group ID of lpadmin is 122, the name takes precedence over the numeric group ID of the files belonging to the prosody group (120 in the container).
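To illustrate the mechanism, here is a minimal sketch using Python's tarfile as a stand-in for GNU tar (the file name and IDs are the ones from this thread; this is not what LXD actually runs):

```python
import io
import tarfile

# Build an in-memory archive containing a member the way `lxc export`
# would record it on the 20.04 host: numeric gid 120 (prosody inside
# the container), but group *name* lpadmin (the host group owning
# gid 120 on the exporting machine).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="etc/prosody/prosody.cfg.lua")
    info.gid = 120
    info.gname = "lpadmin"
    info.size = 0
    tar.addfile(info)

# Read it back: both the number and the name travel in the archive.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = tar.getmembers()[0]
    print(member.gid, member.gname)   # -> 120 lpadmin
```

On extraction, GNU tar (and Python's extract with the default numeric_owner=False) resolves the stored name against the importing host's group database, so on 22.04 this file comes out group-owned by lpadmin's local gid, 122, instead of 120.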
My guess is that it works on Ubuntu Server because the lpadmin group does not exist there.
The only relation to shiftfs is that when it is set to false on the backed-up system, the IDs are over 100000 and cannot be confused by tar with local IDs when exporting, so the user/group names are not set in the tar file.
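A sketch of why shifted IDs dodge the name lookup (my reconstruction of tar's archive-creation behaviour, not LXD code; the gid 100120 is just an illustrative shifted value):

```python
import grp

def export_gname(gid: int) -> str:
    # At archive-creation time, tar looks up the numeric gid in the
    # exporting host's group database. A shifted id such as 100120
    # normally has no entry there, so no group name is recorded and
    # the numeric id survives the import unchanged.
    try:
        return grp.getgrgid(gid).gr_name
    except KeyError:
        return ""

print(export_gname(0))        # gid 0's group name ("root" on Linux)
print(export_gname(100120))   # "" unless such a gid happens to exist
```

With shiftfs disabled on the source, every gid in the rootfs is shifted, every lookup fails, and the archive carries only numbers, which is why those exports import with the expected IDs.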
I think there could be many ways of reproducing it, but I have sent you a link to the example by DM.
Can you try something for me? Import it onto a Jammy system into a dir-based pool and then, before starting it, check the output of:
ls /var/snap/lxd/common/lxd/storage-pools/<pool>/containers/focaltest/rootfs/etc/prosody/ -lan
On mine, when I import it, it looks like this (after import, before starting), which lines up with the 120 group ID for prosody in the container:
ls /var/snap/lxd/common/lxd/storage-pools/default/containers/focaltest/rootfs/etc/prosody/ -lan
total 40
drwxr-xr-x 5 0 0 4096 Mar 9 16:53 .
drwxr-xr-x 93 0 0 4096 Mar 9 16:53 ..
drwxr-x--- 2 0 120 4096 Mar 9 16:53 certs
drwxr-xr-- 2 0 120 4096 Mar 9 16:53 conf.avail
drwxr-xr-- 2 0 120 4096 Mar 9 16:53 conf.d
-rw-r--r-- 1 0 0 353 Jan 20 2020 migrator.cfg.lua
-rw-r----- 1 0 120 9798 Jan 20 2020 prosody.cfg.lua
-rw-r--r-- 1 0 0 292 Jan 20 2020 README
I'm running this on the edge snap, so it would be interesting to see whether you still get the same issue on that.
There's been a bunch of work on hardening calls to external unpacker commands (such as tar); some of it landed in LXD 4.23, and some was subsequently cherry-picked to fix regressions:
This one is interesting:
There have also been some more which haven't yet been cherry-picked:
The issue certainly still exists in edge, although it appears to be trickier to trigger when run inside the snap mount namespace (perhaps because the snap uses a different /etc/passwd and /etc/group).
All right, thanks for your time.
My containers import well under LXD edge.
I'll follow the 4.24 release and test again with it once it has your latest fix, and I will report back if I see anything worthy of interest. Bye.