No remapping of container after restore

Hello

Migrating a computer from one OS to another, both running LXD, I went the easy route of exporting all the containers and reimporting them. lxc import went well, a bit faster than usual even, but when I tried to use the first container, updating the system inside it led to permission errors. Listing the rootfs from the host, I discovered that the remapping step had indeed been skipped: the files under /etc were shown as belonging to UID 0, for example.

AFAIK this is normal before starting a restored container, not after.

Is this a known problem with the newer export/restore options? Maybe I missed something since I last followed events on this forum.

Version: lxd 4.23 22525 latest/stable canonical, from the snap.

Exports were done with the containers stopped and without any snapshots. The storage backend is a Btrfs partition on both the origin and the target, if it matters. There is no cluster involved.

It would be good to see the output of lxc config show --expanded prior to starting the imported container, and to see the on-disk permissions to compare too.

lxc config show --expanded
Error: --expanded cannot be used with a server

Err, uh? I have never seen this message and I have no clue what it could mean.

Not sure what you mean here. If it's the rootfs, here is a (partial) listing before starting a freshly imported container:

sudo nsenter -t $(pgrep daemon.start) -m -- ls /var/snap/lxd/common/lxd/storage-pools/default/containers/jitsistable/rootfs/etc/ -larn
total 468
-rw-r--r-- 1 0   0   477 Oct  7  2019 zsh_command_not_found
drwxr-xr-x 1 0   0    96 Jan  4 22:39 xdg
-rw-r--r-- 1 0   0   642 Sep 24  2019 xattr.conf
-rw-r--r-- 1 0   0  4942 Nov 12 18:09 wgetrc
lrwxrwxrwx 1 0   0    23 Jan  4 22:40 vtrgb -> /etc/alternatives/vtrgb
drwxr-xr-x 1 0   0   280 Jan 11 23:14 vmware-tools
drwxr-xr-x 1 0   0    30 Jan 22 08:08 vim
drwxr-xr-x 1 0   0     0 Dec  3 01:19 update-notifier
drwxr-xr-x 1 0   0   440 Jan 18 09:03 update-motd.d
drwxr-xr-x 1 0   0    92 Jan  4 22:41 update-manager
drwxr-xr-x 1 0   0   246 Jan  4 22:41 ufw
drwxr-xr-x 1 0   0    24 Jan  4 22:41 udisks2
drwxr-xr-x 1 0   0    44 Jan 14 09:43 udev
-rw-r--r-- 1 0   0  1260 Dec 14  2018 ucf.conf
drwxr-xr-x 1 0   0    54 Jan 18 09:03 ubuntu-advantage
drwxr-xr-x 1 0   0    38 Jan  4 22:41 tmpfiles.d
-rw-r--r-- 1 0   0    13 Jan 11 23:14 timezone
drwxr-xr-x 1 0   0    12 Jan  4 22:39 terminfo
drwxr-xr-x 1 0   0   244 Jan 14 09:43 systemd
drwxr-xr-x 1 0   0   474 Feb  2 20:01 sysctl.d
-rw-r--r-- 1 0   0  2351 Feb 13  2020 sysctl.conf
drwxr-x--- 1 0   0    50 Jan 11 23:13 sudoers.d
-r--r----- 1 0   0   755 Feb  3  2020 sudoers
-rw-r--r-- 1 0   0    20 Jan 11 23:13 subuid-
-rw-r--r-- 1 0   0    36 Jan 11 23:14 subuid
-rw-r--r-- 1 0   0    20 Jan 11 23:13 subgid-
-rw-r--r-- 1 0   0    36 Jan 11 23:14 subgid
drwxr-xr-x 1 0   0    46 Jan  4 22:40 ssl

After starting the imported container:

sudo nsenter -t $(pgrep daemon.start) -m -- ls /var/snap/lxd/common/lxd/storage-pools/default/containers/jitsistable/rootfs/etc/ -larn
total 468
-rw-r--r-- 1 0   0   477 Oct  7  2019 zsh_command_not_found
drwxr-xr-x 1 0   0    96 Jan  4 22:39 xdg
-rw-r--r-- 1 0   0   642 Sep 24  2019 xattr.conf
-rw-r--r-- 1 0   0  4942 Nov 12 18:09 wgetrc
lrwxrwxrwx 1 0   0    23 Jan  4 22:40 vtrgb -> /etc/alternatives/vtrgb
drwxr-xr-x 1 0   0   280 Jan 11 23:14 vmware-tools
drwxr-xr-x 1 0   0    30 Jan 22 08:08 vim
drwxr-xr-x 1 0   0     0 Dec  3 01:19 update-notifier
drwxr-xr-x 1 0   0   440 Jan 18 09:03 update-motd.d
drwxr-xr-x 1 0   0    92 Jan  4 22:41 update-manager
drwxr-xr-x 1 0   0   246 Jan  4 22:41 ufw
drwxr-xr-x 1 0   0    24 Jan  4 22:41 udisks2
drwxr-xr-x 1 0   0    44 Jan 14 09:43 udev
-rw-r--r-- 1 0   0  1260 Dec 14  2018 ucf.conf
drwxr-xr-x 1 0   0    54 Jan 18 09:03 ubuntu-advantage
drwxr-xr-x 1 0   0    38 Jan  4 22:41 tmpfiles.d
-rw-r--r-- 1 0   0    13 Jan 11 23:14 timezone
drwxr-xr-x 1 0   0    12 Jan  4 22:39 terminfo
drwxr-xr-x 1 0   0   244 Jan 14 09:43 systemd
drwxr-xr-x 1 0   0   474 Feb  2 20:01 sysctl.d
-rw-r--r-- 1 0   0  2351 Feb 13  2020 sysctl.conf
drwxr-x--- 1 0   0    50 Jan 11 23:13 sudoers.d

And the problem is present inside the container, of course.

lxc config show --expanded jitsistable

Sorry, I was a bit tired yesterday. I should have realized that without an instance name the command targets the LXD server itself, which is why --expanded is rejected. Using the same syntax for the LXD global configuration and for the container configuration can get a bit confusing.

lxc config show --expanded jitsistable
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220104)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220104"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: 5e94999280de957497662d414bcd84766b55903c42af5bff9ea39a3cacabad12
  volatile.eth0.hwaddr: 00:16:3e:41:a2:83
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.uuid: fb876dda-7290-4101-8380-75f9b74ac98b
devices:
  eth0:
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- nonic
stateful: false
description: ""

For comparison, here is one I imported by first copying it to another LXD server and then copying it back to the new OS (in this case the remapping does its thing). I am not posting because I want a workaround; I can do without one, since the LXD facilities are enough to solve it. I suspect, however, that I am hitting an unexpected result in a corner case.

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20211108)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20211108"
  image.type: squashfs
  image.version: "20.04"
  volatile.apply_template: copy
  volatile.base_image: bd2ffb937c95633a28091e6efc42d6c7b1474ad8eea80d6ed8df800e44c6bfdd
  volatile.eth0.hwaddr: 00:16:3e:17:28:d0
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.uuid: 2e304b47-eee0-426f-b08e-2395eb068d4c
devices:
  eth0:
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- nonic
stateful: false
description: ""

The difference seems obvious: the bad container does not have the 'volatile.apply_template: copy' line. However, from memory, when I created the two containers I did not follow a different procedure (I have a script to create new containers), so it does not seem likely they were different on the original OS. I can still boot it if you think it would be helpful to make sure.

Had the exported container ever been started before being exported?

Yes.

So it should have been remapped on startup, and the export should therefore have contained the remapped files. Can you check whether the export is mapped, using tar -ztvf export.tar.gz, and see if any of the files have a mapping to a non-root UID/GID?
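For example, something along these lines (just a sketch; I'm assuming the export is named export.tar.gz, so adjust the path to your actual backup file):

# List the ownership of a few rootfs entries inside the export.
# Unshifted entries show the container-side IDs/names (root, 0, 120, ...);
# a statically shifted export would show host-range IDs (1000000 and up).
tar -ztvf export.tar.gz | grep rootfs/etc/ | head -n 20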

I just checked using this procedure (after starting LXD with the LXD_SHIFTFS_DISABLE=true environment variable, and with a local modification to ensure that idmapped mounts were disabled as well):

lxc init images:ubuntu/focal c1 # No remapping takes place and config key `volatile.apply_template: create` exists
lxc export c1 /home/user/c1.tar.gz # Tarball contains unmapped files
lxc import  /home/user/c1.tar.gz # Config key `volatile.apply_template: create` still present
lxc start c1 # Remapping takes place.

Can you show the output of lxc info on your old and new hosts?

@tomp

Thanks for taking the time to look at this; however, I'm feeling bad now because I should have been upfront from the beginning. My new system is working so well, and I'm so satisfied with it, that I had forgotten it's still unsupported software for another month; you see, it's Kubuntu 22.04.

So I have installed LXD on another computer with Ubuntu 22.04 LTS workstation (graphical), like my old OS, and the container imports perfectly. Argh. My bad. Sorry.

Given that, I have not posted the information you asked for, because, well, the slight detail of the unsupported kernel is probably much more relevant. I have it if you want it, though.

I wonder if it's a problem with the use of idmapped mounts on the new system but not the old system.

Does the tarball show the files as mapped inside the export?

Well, many of these files are mapped to non-root; those are the ones that are not root inside the container.
I can see the relation between the group numbers on the target system and the ones in the container: if a file in the container has a group whose number is 120 (let's say video), and on the target system the group numbered 120 is lpadmin, then tar -ztvf lists the file as having the group lpadmin. Is that what you want to know?

I would like to know whether the tarball files have the same IDs as they appear inside the container (i.e. unshifted) or whether they appear as shifted inside the tarball.

Yes, if a file appears with the group lpadmin in the tar -ztvf output, and on the same system the GID of lpadmin is 120, I'd say that's proof they are not shifted. Some appear in numerical form (they don't match a local group), but none appear above 100000.

OK, thanks, that's useful. Although it is expected if the system it was exported from was using shiftfs or idmapped mounts (both of which avoid the need for expensive static file shifting).

This will be shown from lxc info on both the old and new systems.

On the new system I would expect the kernel to support idmapped mounts; however, that is also dependent on the storage pool type and/or the backing filesystem type.

You say both of these were BTRFS; idmapped mount support for BTRFS was added in the 5.15 kernel, and I believe shiftfs is being removed, or at least no longer used.

Perhaps the problem is that idmapped mount support is not working for some reason.
Getting the output of lxc info would be useful.
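In particular, the kernel_features and lxc_features sections are the interesting parts, e.g. (a quick way to pick out the relevant lines):

# Show the shiftfs kernel feature, liblxc's idmapped mounts support,
# the kernel version and the storage driver in use
lxc info | grep -E 'shiftfs|idmapped_mounts|kernel_version|storage:'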

Any ideas @stgraber?

Here are the lxc info outputs for the old (Ubuntu 20.04) and the new (Kubuntu 22.04) systems.

old:

config:
  core.https_address: '[::]:8443'
  core.trust_password: true
  images.auto_update_interval: "0"
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 10.2.0.1:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICATCCAYegAwIBAgIRAPtKOGdG+a9A6TNLBkqjt3cwCgYIKoZIzj0EAwMwMzEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzETMBEGA1UEAwwKcm9vdEBqNTAw
    NTAeFw0yMTAyMjYxNzAzNDdaFw0zMTAyMjQxNzAzNDdaMDMxHDAaBgNVBAoTE2xp
    bnV4Y29udGFpbmVycy5vcmcxEzARBgNVBAMMCnJvb3RAajUwMDUwdjAQBgcqhkjO
    PQIBBgUrgQQAIgNiAASp0uKPCTLeLBtUAzVEwSQB8qthi+1Nz2+heI/zCiBbX3EZ
    Hc6v+1FrYRuRRF/Tz7l79tc19H6xBXKKsdaQM9FF75bEF9xHgYa2kfSRZeTsi2wp
    T5ySUFuJiQjZdSsYMCCjXzBdMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr
    BgEFBQcDATAMBgNVHRMBAf8EAjAAMCgGA1UdEQQhMB+CBWo1MDA1hwR/AAABhxAA
    AAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMGWg02Bu0VJS4w9IvVn0
    YkOyQClTESwRnhpMKmpJUV6aeQ85LIDqgu8qMQEzLnje8AIxAPrQYPZPBCVcWNOO
    IsJOAB/x3IFTjZektKy1gd7M+3Fz1LlM5OCBBXXx5USxPWC92Q==
    -----END CERTIFICATE-----
  certificate_fingerprint: cf55085c8cf66b6cb857a83bfd6247ad0423d5b507b82c40318db6cd15177645
  driver: lxc | qemu
  driver_version: 4.0.12 | 6.1.1
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "true"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.13.0-30-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "20.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: j5005.klog.local
  server_pid: 1576
  server_version: "4.23"
  storage: btrfs
  storage_version: 5.4.1
  storage_supported_drivers:
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: cephfs
    version: 15.2.14
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.45.0
    remote: false
  - name: zfs
    version: 2.0.6-1ubuntu2
    remote: false
  - name: ceph
    version: 15.2.14
    remote: true

new:

config:
  core.https_address: '[::]:8443'
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 10.2.0.1:8443
  - 192.168.20.12:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICAjCCAYegAwIBAgIRAPDu1uauV/gr15QaeKoWclcwCgYIKoZIzj0EAwMwMzEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzETMBEGA1UEAwwKcm9vdEBqNTAw
    NTAeFw0yMjAzMDMyMTM4MTdaFw0zMjAyMjkyMTM4MTdaMDMxHDAaBgNVBAoTE2xp
    bnV4Y29udGFpbmVycy5vcmcxEzARBgNVBAMMCnJvb3RAajUwMDUwdjAQBgcqhkjO
    PQIBBgUrgQQAIgNiAAS3OraswlxCHHkkqxUvLmYQY4RGRxI+Akw3vl5GqKrhd2LR
    n6HNjKveS/DngE0mglIlCphfTZz2tB6M+4iYLytIffpPPnme1bl6EgVTwpNvftT4
    wJPTYWAoySdf+WYtD2ujXzBdMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr
    BgEFBQcDATAMBgNVHRMBAf8EAjAAMCgGA1UdEQQhMB+CBWo1MDA1hwR/AAABhxAA
    AAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2kAMGYCMQDsc4JqDOzNBFILXWZt
    b0WvhXZ2/qqUmR5dChvtX6ppapmVP0aZ3gyhD44PNiJ/qE8CMQCUxaC6FkiyAqY4
    Urm9wmAWSYb4qSwcxZ/mqMSJWu6wXdTR9pArSDt/trP7rwbnEpg=
    -----END CERTIFICATE-----
  certificate_fingerprint: 1dbc8ddd40d27932592b718a03f7a1850ef413435a538d07ffb42fce1e2f3357
  driver: lxc | qemu
  driver_version: 4.0.12 | 6.1.1
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-18-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: j5005
  server_pid: 1113
  server_version: "4.23"
  storage: btrfs
  storage_version: 5.4.1
  storage_supported_drivers:
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: cephfs
    version: 15.2.14
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.45.0
    remote: false
  - name: zfs
    version: 2.0.6-1ubuntu3
    remote: false
  - name: ceph
    version: 15.2.14
    remote: true

Yes, shiftfs is false for the new system and true for the old.
BTW, an upload button could be added to this forum, I'd say.

OK so in principle there should be no need for actual static remapping.

The volatile.idmap.current and volatile.idmap.next settings should be used to dynamically shift.
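If it helps, those keys can be inspected directly (a hedged example using the instance name from above; volatile.last_state.idmap records the map the on-disk files currently have, and an empty list should mean they are unshifted):

# Idmap LXD intends to use on next start
lxc config get jitsistable volatile.idmap.next
# Idmap currently applied to the on-disk files ('[]' = unshifted)
lxc config get jitsistable volatile.last_state.idmap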

Out of interest, does it work if you import it into a dir-based storage pool rather than btrfs?
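Something along these lines should do it (a sketch only; I believe lxc import accepts a --storage flag to pick the target pool and an optional new instance name, and the pool name, instance name and path here are just examples):

# Create a dir-backed pool and import the backup into it
lxc storage create dirpool dir
lxc import /home/user/c1.tar.gz jitsitest --storage dirpool
lxc start jitsitest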

Can you provide the specific permission errors and example directory listing outputs showing the problem, please? We are not clear what the problem is right now.

container imported on 22.04

cd /etc/prosody
gp@reunions:/etc/prosody$ ls -larn
total 20
-rw-r----- 1 0 122 9798 Jan 20  2020 prosody.cfg.lua
-rw-r--r-- 1 0   0  353 Jan 20  2020 migrator.cfg.lua
drwxr-xr-- 1 0 122   88 Jan 18 16:27 conf.d
drwxr-xr-- 1 0 122  126 Feb  2 20:01 conf.avail
drwxr-x--- 1 0 122  256 Jan 18 10:17 certs
-rw-r--r-- 1 0   0  292 Jan 20  2020 README
drwxr-xr-x 1 0   0 3270 Jan 29 14:33 ..
drwxr-xr-x 1 0   0  116 Jan 18 16:17 .
gp@reunions:/etc/prosody$ id prosody
uid=112(prosody) gid=120(prosody) groups=120(prosody),119(ssl-cert)

container imported on 20.04

 cd /etc/prosody
gp@reunions:/etc/prosody$ ls -larn
total 20
-rw-r----- 1 0 120 9798 Jan 20  2020 prosody.cfg.lua
-rw-r--r-- 1 0   0  353 Jan 20  2020 migrator.cfg.lua
drwxr-xr-- 1 0 120   88 Jan 18 16:27 conf.d
drwxr-xr-- 1 0 120  126 Feb  2 20:01 conf.avail
drwxr-x--- 1 0 120  256 Jan 18 10:17 certs
-rw-r--r-- 1 0   0  292 Jan 20  2020 README
drwxr-xr-x 1 0   0 3270 Jan 29 14:33 ..
drwxr-xr-x 1 0   0  116 Jan 18 16:17 .
gp@reunions:/etc/prosody$ id prosody
uid=112(prosody) gid=120(prosody) groups=120(prosody),119(ssl-cert)

As the postinst script run during an apt upgrade tries to change files in this directory while running as the user 'prosody', it fails in the first case.

I started the old Ubuntu 20.04 again:

sudo nsenter -t $(pgrep daemon.start) -m -- ls /var/snap/lxd/common/lxd/containers/jitsistable/rootfs/etc/prosody -larn
total 20
-rw-r----- 1 0 120 9798 Jan 20  2020 prosody.cfg.lua
-rw-r--r-- 1 0   0  353 Jan 20  2020 migrator.cfg.lua
drwxr-xr-- 1 0 120   88 Jan 18 16:27 conf.d
drwxr-xr-- 1 0 120  126 Feb  2 20:01 conf.avail
drwxr-x--- 1 0 120  256 Jan 18 10:17 certs
-rw-r--r-- 1 0   0  292 Jan 20  2020 README
drwxr-xr-x 1 0   0 3270 Jan 29 14:33 ..
drwxr-xr-x 1 0   0  116 Jan 18 16:17 .

Now it's clearer. Looking at a hex dump of the exported file (I exported it again to make sure), when I find one of these files I get:

002A8C30   6C 75 61 00 00 00 00 00  00 00 00 00 00 00 00 00  lua.............
002A8C40   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8C50   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8C60   00 00 00 00 30 30 30 30  36 34 30 00 30 30 30 30  ....0000640.0000
002A8C70   30 30 30 00 30 30 30 30  31 37 30 00 30 30 30 30  000.0000170.0000
002A8C80   30 30 32 33 31 30 36 00  31 33 36 31 31 33 32 31  0023106.13611321
002A8C90   37 37 35 00 30 32 33 35  30 36 00 20 30 00 00 00  775.023506. 0...
002A8CA0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8CB0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8CC0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8CD0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8CE0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8CF0   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8D00   00 75 73 74 61 72 00 30  30 72 6F 6F 74 00 00 00  .ustar.00root...
002A8D10   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8D20   00 00 00 00 00 00 00 00  00 6C 70 61 64 6D 69 6E  .........lpadmin
002A8D30   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
002A8D40   00 00 00 00 00 00 00 00  00 30 30 30 30 30 30 30  .........0000000
002A8D50   00 30 30 30 30 30 30 30  00 00 00 00 00 00 00 00  .0000000........

From the tar spec (GNU tar 1.35: Basic Tar Format):

The magic field indicates that this archive was output in the P1003 archive format. If this field contains TMAGIC, the uname and gname fields will contain the ASCII representation of the owner and group of the file respectively. If found, the user and group IDs are used rather than the values in the uid and gid fields.

In this case, magic is indeed 'ustar'. That means the names are looked up on the system doing the extraction (that is, the host in this case). If the IDs are remapped, they will be over 100000 and will never be found on the host. If they are not, tar will ignore the original uid and gid and map them to whatever exists on the host. This is a very quick opinion, I did not go into the depths of the tar spec, but it seems likely to me.

Edit: I should have stressed that the numeric ID is right (170 octal = 120 decimal), but the exported name for the group (lpadmin) doesn't match the name inside the container ('prosody'). So on the new host tar will look up lpadmin in the host's group file and find, say, 122 as the gid.
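A quick way to check this theory (assuming GNU tar; with --numeric-owner the listing shows the raw numeric IDs stored in the archive instead of resolving the uname/gname fields against the host):

# Default listing: uname/gname are resolved, so the group may show as lpadmin
tar -ztvf export.tar.gz | grep /etc/prosody/
# Numeric listing: should show uid 0 and gid 120 as stored in the header
tar --numeric-owner -ztvf export.tar.gz | grep /etc/prosody/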

OK, so this doesn't seem like an ID shifting issue at all; it's more to do with how the files are unpacked to the correct uid/gid.

I wonder if this is a change in the behaviour of tar in Ubuntu 22.04 and whether we need to provide a flag to force numeric ID unpacking.

Can you show, using tar -ztvf export.tar.gz | grep /etc/prosody/, what IDs are stored in the tarball, please?
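And if it does turn out to be a name-resolution issue, a manual extraction with numeric IDs forced would be one way to confirm it (only a sketch for checking outside of LXD, whose import path does its own unpacking; the paths are examples):

# Extract preserving the numeric uid/gid stored in the archive,
# ignoring uname/gname lookups on the host (root needed to chown)
mkdir -p /tmp/unpack
sudo tar --numeric-owner -xzpf export.tar.gz -C /tmp/unpack
# Then inspect the unpacked rootfs
sudo find /tmp/unpack -path '*rootfs/etc/prosody*' -exec ls -ldn {} +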