I can't attach storage volume to container

Hi,
I was able to attach it to one container, but not to the others :frowning: Here is the error:

root@ds4~# lxc storage volume attach lxc_pool wspolny lucek /mnt/wspolny
Error: Failed to start device "wspolny": Failed shifting storage volume "wspolny" of type "custom" on storage pool "lxc_pool": Idmaps of container and storage volume are not identical
root@ds4~#

The config for that container looks like this:
root@ds4~# lxc config show lucek
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu focal amd64 (20200522_07:42)
  image.name: ubuntu-focal-amd64-default-20200522_07:42
  image.os: ubuntu
  image.release: focal
  image.serial: "20200522_07:42"
  image.variant: default
  limits.memory: 1024MB
  security.privileged: "false"
  volatile.base_image: ba2f0fd1bc593c485b22dfaac2e5c865294f384cef74f7974431f030a59df745
  volatile.eth0.host_name: veth11f01f4c
  volatile.eth0.hwaddr: 00:16:3e:d1:21:12
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 2b2ceb2c-749a-4bb3-be7e-0b3164a0696f
devices:
  root:
    path: /
    pool: lxc_pool
    size: 120GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

And the config for the container on which it works:
root@ds4~# lxc config show stomatolog-biz
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu focal amd64 (20200522_07:42)
  image.name: ubuntu-focal-amd64-default-20200522_07:42
  image.os: ubuntu
  image.release: focal
  image.serial: "20200522_07:42"
  image.variant: default
  limits.memory: 1400MB
  raw.apparmor: mount fstype=nfs,
  security.privileged: "true"
  volatile.base_image: e7d9054e077d78757a79363696e3891e655d609216ac0f90e300b93a981acdbf
  volatile.eth0.host_name: veth4bc39c86
  volatile.eth0.hwaddr: 00:16:3e:46:8a:aa
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 44602edc-0c51-4fa9-8db3-a0117f54d32d
devices:
  root:
    path: /
    pool: lxc_pool
    size: 220GB
    type: disk
  wspolny:
    path: /mnt/wspolny
    pool: lxc_pool
    source: wspolny
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
root@ds4~#

Could somebody help?

@brauner is this expected that you cannot attach a custom volume to multiple containers due to shifting uid/gid differences?

Yes, whenever the containers use conflicting idmappings.
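To make the conflict concrete, here is a minimal sketch (helper names are mine, not LXD's) of the comparison that produces the "Idmaps of container and storage volume are not identical" error: the idmap recorded for the volume must match the idmap of every container it is attached to, and a privileged container's empty idmap can never match an unprivileged one's.

```python
import json

def parse_idmap(raw):
    """Parse the JSON stored in volatile.idmap.current into a comparable form."""
    return json.loads(raw) if raw.strip() else []

# Idmap of an unprivileged container, as seen in `lxc config show lucek`.
unpriv = ('[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},'
          '{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]')
# Idmap of a privileged container (stomatolog-biz): empty, i.e. identity mapping.
priv = '[]'

def idmaps_identical(a, b):
    """Rough equivalent of the check that fails in the error above."""
    return parse_idmap(a) == parse_idmap(b)

print(idmaps_identical(unpriv, unpriv))  # two matching unprivileged containers: OK
print(idmaps_identical(unpriv, priv))    # unprivileged vs privileged: conflict
```

So two unprivileged containers with the same range can share the volume, while mixing in a privileged one cannot work without some form of dynamic shifting.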

Would that be if using isolated mode so each container has its own mappings?

Is there any solution for this?
I think mounting a volume/folder from the host into containers is a very basic feature which should work without problems.

It does work normally, here’s an example:

lxc launch images:ubuntu/focal c1 -s zfs
lxc launch images:ubuntu/focal c2 -s zfs
lxc storage volume create zfs myvol
lxc storage volume attach zfs myvol c1 /mnt/myvol
lxc storage volume attach zfs myvol c2 /mnt/myvol
lxc config show c1
lxc config show c1 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu focal amd64 (20210302_07:42)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20210302_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: e88e00f8cc6312c328093106faf3a7145200bdea4f76619e27f53fcdac86210c
  volatile.eth0.host_name: vethbed92877
  volatile.eth0.hwaddr: 00:16:3e:51:1f:fc
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 41cd2706-74d0-4858-acb5-28fafdd60c9d
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  myvol:
    path: /mnt/myvol
    pool: zfs
    source: myvol
    type: disk
  root:
    path: /
    pool: zfs
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc config show c2 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu focal amd64 (20210302_07:42)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20210302_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: e88e00f8cc6312c328093106faf3a7145200bdea4f76619e27f53fcdac86210c
  volatile.eth0.host_name: veth219c6441
  volatile.eth0.hwaddr: 00:16:3e:f7:8a:e7
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 050b0a60-5a2a-4fda-b643-a6e35e115124
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  myvol:
    path: /mnt/myvol
    pool: zfs
    source: myvol
    type: disk
  root:
    path: /
    pool: zfs
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

In your case you're using security.privileged=true on stomatolog-biz, so it will not have an unprivileged uid/gid range, and as such attaching the storage volume to both containers cannot be done while maintaining consistent file UIDs on both systems.
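The ownership mismatch can be sketched like this (a simplified model, with a hypothetical helper; real LXD idmaps also have per-range subtleties): a file owned by root inside an unprivileged container is stored on disk as uid 1000000, while the same file written from a privileged container is stored as real uid 0, so one on-disk owner cannot satisfy both.

```python
def host_owner(ns_uid, idmap):
    """Translate a uid inside the container to the uid seen on the host disk.

    `idmap` is the parsed volatile.idmap list; an empty list models a
    privileged container, i.e. an identity mapping."""
    for entry in idmap:
        if entry["Isuid"] and entry["Nsid"] <= ns_uid < entry["Nsid"] + entry["Maprange"]:
            return entry["Hostid"] + (ns_uid - entry["Nsid"])
    return ns_uid  # privileged: no shifting

unpriv_map = [{"Isuid": True, "Isgid": False,
               "Hostid": 1000000, "Nsid": 0, "Maprange": 1000000000}]

print(host_owner(0, unpriv_map))  # container root stored on disk as uid 1000000
print(host_owner(0, []))          # privileged container root stored as real uid 0
```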

Try using the shiftfs mode on the custom volume, e.g.

lxc storage volume set zfs myvol security.shifted=true

This allows me to use the same custom volume on both privileged and unprivileged containers, because it doesn't attempt to alter the ownership of the custom volume's files to match the unprivileged container's UID range; instead it uses the shiftfs kernel module to perform the shifting dynamically.

So after your comments, here is what I did:

  1. I detached the volume from stomatolog-biz
  2. proof that the volume isn't attached to any container:
    root@ds4~# lxc storage volume list lxc_pool
    | custom | wspolny | | filesystem | 0 |
  3. and I set security.privileged to false for stomatolog-biz

So now both containers, lucek and stomatolog-biz, are unprivileged, the volume wspolny isn't attached to any container, and when I try to attach it to lucek I get this error:

root@ds4~# lxc storage volume attach lxc_pool wspolny lucek /mnt/wspolny
Error: Failed to start device "wspolny": Failed mounting storage volume "wspolny" of type "custom" on storage pool "lxc_pool": Failed to run: zfs mount lxc_pool/custom/default_wspolny: cannot mount 'lxc_pool/custom/default_wspolny': filesystem already mounted
root@ds4~#

Why?

Try sudo grep custom/default_wspolny /proc/*/mountinfo
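The same search can also be scripted. This sketch (hypothetical helper, Linux-only) scans every process's mountinfo for a mount matching a dataset name, returning the PIDs that still hold it:

```python
import glob

def holders(fragment):
    """Return PIDs whose mount namespace has a mount entry matching `fragment`."""
    pids = []
    for path in glob.glob("/proc/[0-9]*/mountinfo"):
        try:
            with open(path) as f:
                if fragment in f.read():
                    pids.append(path.split("/")[2])
        except OSError:
            continue  # process exited while we were scanning
    return pids

# On the host from this thread you would run:
#   holders("custom/default_wspolny")
# and then inspect/unmount whatever still has the dataset mounted.
print(holders("custom/default_wspolny"))
```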

I had it mounted on the host system, so I unmounted it.
Then I attached it to lucek: great, it's working.
I tried to attach it to stomatolog-biz and got the same mounting error:
root@ds4~# lxc storage volume attach lxc_pool wspolny stomatolog-biz /mnt/wspolny
Error: Failed to start device "wspolny": Failed mounting storage volume "wspolny" of type "custom" on storage pool "lxc_pool": Failed to run: zfs mount lxc_pool/custom/default_wspolny: cannot mount 'lxc_pool/custom/default_wspolny': filesystem already mounted

So the conclusion is that a volume can be mounted in just one container?

My goal is to have a shared folder on the host system and in all containers, heh.