Hi all,
Want to start off by saying that I'm really enjoying learning Incus and am appreciative of all the hard work that has gone into the project. Also, really looking forward to Incus OS!
I noticed a permissions issue while testing the custom storage volume backup/restore process. I had created custom volumes with the `initial.uid`, `initial.gid`, and `initial.mode` parameters set, which worked great to resolve some issues I had with a few OCI-based containers.
What I'm seeing now as I export and import these volumes is that the `initial.*` options are carried over in the backup index on import, but the restored volume is not idmapped the way it is on creation. I destroyed an instance I had built out, re-imported the volumes, and got a bunch of permission errors on container startup. Here's a minimal test to show what's happening:
Create a test instance and volume with the `initial.*` parameters set:
$ incus create images:debian/12/cloud -d root,size=1GiB -s default
Creating the instance
Instance name is: current-stud
$ incus storage volume create default test size=1GiB initial.uid=1000 initial.gid=1000 initial.mode=0700
Storage volume test created
$ incus storage volume attach default test current-stud /test
$ incus start current-stud
Permissions on the volume on the Incus server:
root@incus-server:/var/lib/incus/storage-pools/default/custom# ls -l | grep default_test
drwx------ 3 1001000 1001000 4096 Feb 17 15:40 default_test
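For reference, that 1001000 owner is what I'd expect: the container's idmap base plus the `initial.uid` of 1000. The 1000000 base below is an assumption (the default subuid range start on my setup); a quick sanity check of the arithmetic:

```shell
# Expected host-side owner of the volume: idmap base + initial.uid.
# idmap_base=1000000 is an assumption (default subuid range start here).
idmap_base=1000000
initial_uid=1000
expected_uid=$((idmap_base + initial_uid))
echo "expected host-side uid: ${expected_uid}"   # prints 1001000
```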
Export and destroy:
$ incus storage volume export default test
Backup exported successfully!
$ incus stop current-stud
$ incus rm current-stud
$ incus storage volume rm default test
New instance and import:
$ incus create images:debian/12/cloud -d root,size=1GiB -s default
Creating the instance
Instance name is: legal-pheasant
$ incus storage volume import default test backup.tar.gz
$ incus storage volume attach default test legal-pheasant /test
$ incus start legal-pheasant
Permissions on the volume on the Incus server:
root@incus-server:/var/lib/incus/storage-pools/default/custom# ls -ln | grep default_test
drwx------ 3 1000 1000 4096 Feb 17 15:40 default_test
I tried restarting the instance, but no change. I was able to resolve the issue by manually setting the owner on the volume directory to 1001000 directly on the Incus server, after which the container started fine. Not a huge deal, but I wanted to see if anyone else has run into this. Is there something I'm missing on the import?
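In case it helps anyone else, here's a sketch of the manual workaround I used. The pool/volume path and the 1000000 idmap base are from my setup (adjust both for yours); it just shifts ownership back into the container's idmap range:

```shell
#!/bin/sh
# Shift the imported volume's ownership back into the container's idmap range.
# Assumed values from my setup: "default" pool, volume "test",
# idmap base 1000000, initial.uid/initial.gid of 1000.
vol="/var/lib/incus/storage-pools/default/custom/default_test"
idmap_base=1000000
shifted_uid=$((idmap_base + 1000))
shifted_gid=$((idmap_base + 1000))
if [ -d "${vol}" ]; then
    chown -R "${shifted_uid}:${shifted_gid}" "${vol}"
else
    # On a machine without the volume present, just show the command.
    echo "would run: chown -R ${shifted_uid}:${shifted_gid} ${vol}"
fi
```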
Here's my `incus info` output, with a few lines sanitized. The storage sits on an LVM-backed volume.
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: []
  certificate_fingerprint: []
  driver: lxc | qemu
  driver_version: 6.0.3 | 9.0.4
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "false"
    unpriv_fscaps: "true"
  kernel_version: 6.1.0-30-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: incus-server
  server_pid: 64713
  server_version: "6.9"
  storage: lvm
  storage_version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: false
  - name: lvmcluster
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: true