Initial owner on custom storage volume import

Hi all,

Want to start off by saying that I am really enjoying learning Incus and am appreciative of all the hard work that has gone into the project. Also, I’m really looking forward to Incus OS!

I noticed an issue with permissions as I was testing the custom storage volume backup/restore process. I had created custom volumes with the initial.uid, initial.gid and initial.mode parameters set, which worked great to resolve some issues I had with a few OCI-based containers.

What I’m seeing now as I export/import these volumes is that the initial.* options are carried over in the backup index on import, but the volume contents are not idmapped the way they are on creation. I destroyed an instance I had built out, re-imported the volumes and got a bunch of permission errors on container startup. Here’s a quick test to show what’s happening:

Create a test instance and volume with the initial.* parameters set:

$ incus create images:debian/12/cloud -d root,size=1GiB -s default
Creating the instance
Instance name is: current-stud

$ incus storage volume create default test size=1GiB initial.uid=1000 initial.gid=1000 initial.mode=0700
Storage volume test created

$ incus storage volume attach default test current-stud /test
$ incus start current-stud

Permissions on the volume on the Incus server:

root@incus-server:/var/lib/incus/storage-pools/default/custom# ls -l | grep default_test
drwx------ 3 1001000 1001000 4096 Feb 17 15:40 default_test

Export and destroy:

$ incus storage volume export default test
Backup exported successfully!
$ incus stop current-stud
$ incus rm current-stud
$ incus storage volume rm default test

New instance and import:

$ incus create images:debian/12/cloud -d root,size=1GiB -s default
Creating the instance
Instance name is: legal-pheasant
$ incus storage volume import default test backup.tar.gz
$ incus storage volume attach default test legal-pheasant /test
$ incus start legal-pheasant

Permissions on the volume on the Incus server:

root@incus-server:/var/lib/incus/storage-pools/default/custom# ls -ln | grep default_test
drwx------ 3    1000    1000 4096 Feb 17 15:40 default_test

I tried restarting the instance but there was no change. I was able to resolve the issue by manually setting the owner on the volume directly on the Incus server to 1001000 (the container’s uid 1000 shifted by the host’s 1000000 base), and the container then started fine. Not a huge deal, but I wanted to see if anyone else has run into this issue. Is there something I’m missing on the import?
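For reference, the manual fix was along these lines (path and shifted owner as they applied to my setup above):

root@incus-server:~# chown -R 1001000:1001000 /var/lib/incus/storage-pools/default/custom/default_test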

Here’s my incus info output with a few lines sanitized. The storage sits on an LVM-backed volume.

environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: []
  certificate_fingerprint: []
  driver: lxc | qemu
  driver_version: 6.0.3 | 9.0.4
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "false"
    unpriv_fscaps: "true"
  kernel_version: 6.1.0-30-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: incus-server
  server_pid: 64713
  server_version: "6.9"
  storage: lvm
  storage_version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: false
  - name: lvmcluster
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: true

Can you see if security.shifted=true can be set on your storage volume?
If it can, then that should avoid the issue in this scenario and in the future.
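For the test volume from your example, that would be something like:

$ incus storage volume set default test security.shifted=true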

Can you also show the full incus storage volume show POOL VOLUME output?
I’d expect it to have a record of the uid/gid map it had pre-export, which may not have been re-applied correctly or may not have been picked back up from the container.

That part is a bit messy in general (well, the whole on-disk uid/gid shifting is), but on modern systems we can basically avoid all of that: the containers use VFS idmap automatically (when the kernel supports it), and the volumes can do the same when security.shifted is set to true.


Thanks Stéphane, I knew I was missing something. Setting security.shifted=true on the custom volumes did resolve the issue with the import.

Here’s what one of the current volumes looks like with the show command:

$ incus storage volume show default storage-volume-01
config:
  block.filesystem: ext4
  block.mount_options: discard
  initial.gid: "1000"
  initial.mode: "755"
  initial.uid: "1000"
  size: 50GiB
  volatile.idmap.last: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
description: ""
name: storage-volume-01
type: custom
used_by: []
location: none
content_type: filesystem
project: default
created_at: 2025-02-13T06:01:26.421747819Z

This one I exported, deleted and re-imported just now, and I do see that the idmap is carried over and matches what was set before it was destroyed.
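In case it helps anyone else, a quick way to compare is to pull the key back out directly, something like:

$ incus storage volume get default storage-volume-01 volatile.idmap.next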