Storage volume unmounted after lxc stop

Hello,

I can see that my storage volume is unmounted after stopping a container with lxc stop. This issue only occurs after upgrading LXD to version 3.18.

lxc version

Client version: 3.18
Server version: 3.18

before lxc stop
root@cpu-xxxx:/data/vmxxxx# ls
servicexxxx

after lxc stop
root@cpu-xxxx:/data/vmxxxx# ls
root@cpu-xxxx:/data/vmxxxx#

Can you help me with this?

Hmm, that’s odd.

What’s your lxc config show --expanded output for the container?

Hello @stgraber,

Thank you for your response. Here is the result.

root@cpu-xxxx:~# lxc config show --expanded vmxxxx
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian stretch amd64 (20191107_05:24)
  image.os: Debian
  image.release: stretch
  image.serial: "20191107_05:24"
  image.type: squashfs
  limits.cpu: "4"
  limits.cpu.allowance: 0%
  limits.cpu.priority: "0"
  limits.memory: 32GB
  limits.memory.swap: "false"
  limits.processes: "5000"
  volatile.base_image: c605365053eec9ae266d6ffafd6858d48d164fcd09f7f39057c6ec2478338283
  volatile.eth0.host_name: veth46291a8b
  volatile.eth0.hwaddr: 00:16:3e:b0:ca:42
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 10GB
    type: disk
  vmxxxxx:
    path: /data
    pool: default
    source: vmxxxxx
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Any help would be much appreciated.

Thank you
Sujai

Can you also show lxc storage show default?

Custom volumes are normally mounted when a container using them starts and unmounted when it stops, so the vmxxxxx custom volume should be mounted/unmounted based on that. The same goes for the container's root volume, but nothing else should get unmounted.
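
If your pool is ZFS-backed, you can check whether the custom volume is currently mounted from the host. A quick way, assuming a pool named default and a custom volume named vmxxxxx (substitute your own names; LXD keeps custom volumes under <pool>/custom/<volume>), is:

zfs get mounted default/custom/vmxxxxx

That should report yes while the container is running and no once it's stopped.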

Hello @stgraber,

Thank you for your response. When I used LXD 3.15, there were no issues with unmounting: the storage volume was still present after I stopped the container. But on 3.18, the storage volume is unmounted when I stop the container.

root@cpu-xxxx:~# lxc storage show default
config:
  source: default
  volatile.initial_source: /dev/md4
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/vmxxx
- /1.0/profiles/default
- /1.0/storage-pools/default/volumes/custom/vmxxxx
status: Created
locations:
- none

Thank you
Sujai

Ah yeah, I think we changed that logic when @tomp rewrote the device handling logic.
Keeping custom volumes mounted when no container was actively using them was an oversight. We already had similar unmount logic for the containers themselves, so we've since fixed that logic to also unmount custom storage volumes on container shutdown.

So, just as with accessing a stopped container's filesystem, if you need a custom volume to stay mounted while no container is using it, you'll need to mount it manually with zfs mount.
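
For example, assuming your pool is named default and the custom volume is vmxxxxx (adjust to your own names), something like:

zfs mount default/custom/vmxxxxx

should bring it back while the container is stopped, and zfs unmount default/custom/vmxxxxx will detach it again.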

Thank you @stgraber, it works!!

I manually mounted the volume while the container was stopped and was able to do so without any issues.

Thank you
Sujai