Storage volume unmount after lxc stop


I can see that the storage volume is unmounted after stopping the container with lxc stop. This issue only started occurring after upgrading LXD to version 3.18.

lxc version

Client version: 3.18
Server version: 3.18

before lxc stop
root@cpu-xxxx:/data/vmxxxx# ls

after lxc stop
root@cpu-xxxx:/data/vmxxxx# ls

Can you help me with this?

Hmm, that’s odd.

What’s your lxc config show --expanded output for the container?

Hello @stgraber,

Thank you for your response. Here is the result.

root@cpu-xxxx:~# lxc config show --expanded vmxxxx
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian stretch amd64 (20191107_05:24)
  image.os: Debian
  image.release: stretch
  image.serial: "20191107_05:24"
  image.type: squashfs
  limits.cpu: "4"
  limits.cpu.allowance: 0%
  limits.cpu.priority: "0"
  limits.memory: 32GB
  limits.memory.swap: "false"
  limits.processes: "5000"
  volatile.base_image: c605365053eec9ae266d6ffafd6858d48d164fcd09f7f39057c6ec2478338283
  volatile.eth0.host_name: veth46291a8b
  volatile.eth0.hwaddr: 00:16:3e:b0:ca:42
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 10GB
    type: disk
  data:
    path: /data
    pool: default
    source: vmxxxxx
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Any help would be much appreciated.

Thank you

Can you also show lxc storage show default?

Custom volumes are normally mounted when a container using them starts and unmounted when it stops, so the vmxxxxx custom volume should be mounted/unmounted based on that. The same goes for the container's root volume. Nothing else should get unmounted, though.
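To check whether a given custom volume is actually mounted on the host, you can query its ZFS dataset directly. A minimal sketch, assuming the pool is named default and the volume is the vmxxxxx custom volume from this thread (LXD's zfs driver stores custom volumes under pool/custom/volume):

```shell
# Show whether the custom volume's dataset is currently mounted
# ("default" pool and "vmxxxxx" volume names are assumptions from this thread)
zfs get -H -o value mounted default/custom/vmxxxxx

# List everything ZFS currently has mounted, to compare before and after lxc stop
zfs mount
```

Running the first command before and after lxc stop should show the mounted property flipping from yes to no.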

Hello @stgraber,

Thank you for your response. With LXD version 3.15 there were no issues with the unmount: the storage volume was still present when I stopped the container. But with version 3.18, the storage volume is unmounted when I stop the container.

root@cpu-xxxx:~# lxc storage show default
config:
  source: default
  volatile.initial_source: /dev/md4
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/vmxxx
- /1.0/profiles/default
- /1.0/storage-pools/default/volumes/custom/vmxxxx
status: Created
locations:
- none

Thank you

Ah yeah, I think we changed that logic when @tomp rewrote the device handling logic.
It was an oversight that we kept the custom volumes mounted when no container was actively using them.
We have similar unmount logic for containers and so have since fixed that logic to also unmount the custom storage volumes on container shutdown.

So, just as with accessing a stopped container's filesystem, if you need a custom volume to stay mounted while no container is using it, you'll need to mount it manually with zfs mount.
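A hedged sketch of that manual mount, again assuming the default pool and the vmxxxxx volume name from this thread:

```shell
# Mount the custom volume's dataset while the container is stopped
# (dataset path assumes the "default" ZFS pool and volume name from this thread)
sudo zfs mount default/custom/vmxxxxx

# Confirm it is mounted again before accessing the data on the host
zfs get -H -o value mounted default/custom/vmxxxxx
```

Note that LXD will unmount the volume again the next time a container using it stops, so this step may need repeating after each stop.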

Thank you @stgraber, it works!

I manually mounted the volume while the container was stopped and was able to do so without any issues.

Thank you