Storage Pool "Unavailable" After Server Restart

I have a container with a server in it that had been working fine for months. About two days ago, I ran OS upgrade commands (Ubuntu 22.04, with sudo apt upgrade ..., sudo snap refresh, etc.). After all the updates were done, I shut down the container and then restarted the host server.

When the server came back online, the container did not. Running lxc start <name> now returns Error: Storage pool "lxd" unavailable on this server. I've had several cases before where LXD just broke itself for no apparent reason, and while I don't know whether it has done so again, it certainly seems like it did(?).

No ZFS or LXC commands were run before the restart, so I'm not sure why the storage pool would suddenly become "unavailable".

Some system info that might be helpful:

$ lxc storage list
+------+--------+----------+-------------+---------+-------------+
| NAME | DRIVER |  SOURCE  | DESCRIPTION | USED BY |    STATE    |
+------+--------+----------+-------------+---------+-------------+
| lxd  | zfs    | tank/lxd |             | 2       | UNAVAILABLE |
+------+--------+----------+-------------+---------+-------------+

# Version Info
$ lxc --version
5.6

# Using the snap package
$ which lxc
/snap/bin/lxc

# Snap Info
$ snap list
Name    Version      Rev    Tracking       Publisher   Notes
core20  20220826     1623   latest/stable  canonical✓  base
lxd     5.6-794016a  23680  latest/stable  canonical✓  -
snapd   2.57.2       17029  latest/stable  canonical✓  snapd

# Profile Info
$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: default
used_by:
- /1.0/instances/arma

$ lxc profile show arma
config:
  boot.autostart: "true"
  limits.cpu.allowance: 95%
  limits.memory: 8GB
description: Arma3 dedicated server profile.
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
name: arma
used_by:
- /1.0/instances/arma

ZFS Pool info, etc:

$ zfs list
NAME                                USED  AVAIL     REFER  MOUNTPOINT
tank                                127G  1.63T     42.0K  /mnt/tank
tank/lxd                            127G  1.63T     48.0K  /mnt/tank/lxd
tank/lxd/containers                 127G  1.63T     42.0K  /mnt/tank/lxd/containers
tank/lxd/containers/arma            127G  1.63T      127G  legacy  # <<--- this changed on its own after the reboot
tank/lxd/custom                    42.0K  1.63T     42.0K  /mnt/tank/lxd/custom
tank/lxd/deleted                    216K  1.63T     48.0K  /mnt/tank/lxd/deleted
tank/lxd/deleted/containers        42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/containers
tank/lxd/deleted/custom            42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/custom
tank/lxd/deleted/images            42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/images
tank/lxd/deleted/virtual-machines  42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/virtual-machines
tank/lxd/images                    42.0K  1.63T     42.0K  /mnt/tank/lxd/images
tank/lxd/virtual-machines          42.0K  1.63T     42.0K  /mnt/tank/lxd/virtual-machines

Notice the legacy mountpoint above, which became that way after the restart without any input from me.

Running

$ sudo zfs set mountpoint=/mnt/tank/lxd/containers/arma tank/lxd/containers/arma

Changes the list above to:

tank/lxd/containers/arma            127G  1.63T      127G  /mnt/tank/lxd/containers/arma

But that does not bring the container back either, and the directory looks empty when inspected with ls. The pool itself is online and healthy, according to zpool status -v tank.
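(Presumably a check along these lines would confirm whether ZFS actually has the dataset mounted there, which would explain the empty directory; the dataset name matches the list above.)

# Does ZFS consider the dataset mounted, and where?
$ zfs get mounted,mountpoint tank/lxd/containers/arma

# Is anything mounted at that path in the host's namespace?
$ mount | grep tank/lxd/containers/arma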

So what happened to the storage pool and why would it happen? How can I fix/recover this so that I can start the container and get my server back online again?.. :frowning:

Thanks in advance.

LXD now sets the dataset's mountpoint to legacy so that it controls the mount point rather than the ZFS subsystem; that's normal.
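If it helps, something like this shows which datasets have the legacy mountpoint and whether the setting is set locally or inherited (adjust the dataset name to your pool):

$ zfs get -r -o name,value,source mountpoint tank/lxd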

Please can you show the output of lxc warning list and the contents of /var/snap/lxd/common/lxd/logs/lxd.log?

The content you asked for is below, but I have some additional questions.

LXD now sets the dataset's mountpoint to legacy so that it controls the mount point rather than the ZFS subsystem; that's normal.

When did it start doing this? I've rebooted this server after upgrades for months and never noticed this happening (and if it did, it didn't seem to cause any issues until a few days ago).

Also, why is LXD changing config settings/options, seemingly, without telling me or prompting me about it? I know I didn't originally set it up that way, because I wanted to make sure that LXD was using the specific ZFS pool I had created for it, not some other random/hidden/hard-to-discover location of its own choosing, where I wouldn't really know where my data was being stored.

Lastly, how is the term legacy supposed to convey what you described? The term makes it look like the setting is somehow out of date, deprecated, or otherwise not "correct".

Warnings list below:

$ lxc warning list
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+-------------------------------+
|                 UUID                 |                         TYPE                         | STATUS | SEVERITY | COUNT | PROJECT |           LAST SEEN           |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+-------------------------------+
| 25ea39d5-3ade-4ecb-8934-2b8bb1be8259 | Couldn't find the CGroup network priority controller | NEW    | LOW      | 14    |         | Oct 16, 2022 at 1:22am (UTC)  |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+-------------------------------+
| 30f482fb-57f3-4b0d-9e9e-07ae776a0865 | Storage pool unavailable                             | NEW    | HIGH     | 30251 |         | Oct 18, 2022 at 12:58am (UTC) |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+-------------------------------+

The log file’s contents (last few lines only; 2800+ similar-looking lines):

$ sudo cat /var/snap/lxd/common/lxd/logs/lxd.log
time="2022-10-17T18:38:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:39:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:40:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:41:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:42:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:43:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:44:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:45:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:46:18-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:47:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:48:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:49:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:50:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:51:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:52:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:53:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:54:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:55:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:56:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:57:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:58:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T18:59:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-17T19:00:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd

Note that the above log output is from after I ran the zfs command that sets the mountpoint explicitly, as documented in the original post. I rebooted the system again to see whether the mountpoint would go back to legacy as it had in previous attempts (and maybe produce a different log), but it didn't go back to legacy this time. Not sure if this makes a difference.

Thanks.

OK so:

time="2022-10-17T18:59:19-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd

This suggests that LXD is having trouble unmounting the pool's top-level dataset (as a result of trying to set mountpoint=legacy), probably because it's still in use outside of the LXD snap's mount namespace.

This confusion around mount points when using the snap package is one of the reasons (but not the only one) why we switched to using mountpoint=legacy: it avoids ZFS also mounting the datasets in the host's mount namespace.
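If you want to see what is keeping the host-side mount busy, something along these lines should narrow it down (the paths match the error above; fuser comes from the psmisc package):

# Is tank/lxd still mounted in the host's mount namespace?
$ mount | grep /mnt/tank/lxd

# What processes are holding the mount point open?
$ sudo fuser -vm /mnt/tank/lxd
$ sudo lsof /mnt/tank/lxd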

This seems to be the same issue as, which was caused by a change in LXD 5.6 that started applying the policies we use for new zpools to existing zpools; see:

We’ve since relaxed that somewhat by making a failure to apply those policies a warning rather than an error:

https://github.com/lxc/lxd/pull/10975

This way it will get applied on the next reboot.

The fix in the previous post was to manually set the top-level dataset's mountpoint to legacy:
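For the pool in this thread that would be roughly the following (assuming nothing on the host still needs /mnt/tank/lxd mounted):

$ sudo zfs set mountpoint=legacy tank/lxd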

To answer your questions:

  1. An LXD ZFS pool is made up of several datasets: a top-level dataset, and then sub-datasets for each image and instance volume. LXD originally managed the mountpoint for these datasets directly using ZFS's mountpoint setting. However, this was problematic because ZFS was then controlling the mount, and this caused issues when running LXD inside the snap package's mount namespace. See https://github.com/lxc/lxd-pkg-snap/issues/61#issuecomment-962657700. So LXD has been setting mountpoint=legacy on instance datasets for some time (since LXD 4.24, see the commit "lxd/storage/drivers/driver/zfs: Set all dataset mountpoint settings t…" at lxc/incus@56846c0 on GitHub). However, the change to apply the same policies to the top-level dataset arrived in LXD 5.6.
  2. LXD owns its storage pools and so does, from time to time, deploy new policies. In this case the reason was to try to work around issues in the way ZFS manages mounts from inside mount namespaces.
  3. LXD doesn't choose the term "legacy"; it comes from the ZFS subsystem. It is what is used to prevent ZFS from managing the mounts and instead lets LXD manage them with the normal (what ZFS calls legacy) mount commands.

LXD has never supported using custom mount points for its datasets inside a storage pool. LXD is still mounting the datasets at the same point it did before; it's just not using the zfs mount command to do it.
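For example, with mountpoint=legacy ZFS no longer auto-mounts the dataset; the mount is done with the ordinary mount command instead, which LXD runs itself inside its mount namespace. A rough illustration (the target path shown is just the usual snap location, not something you would normally run by hand):

$ sudo mount -t zfs tank/lxd/containers/arma /var/snap/lxd/common/lxd/storage-pools/lxd/containers/arma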

The fix in the previous post was to manually set the top-level dataset's mountpoint to legacy

Are you sure about that? I had mentioned that when it first failed it was already set to legacy, and that rebooting the system (before my OP) also showed the ZFS mountpoint set to legacy, even though I hadn't set it manually.

It was during my troubleshooting that I noticed it had set itself to legacy at some point (it wasn't the original working config). LXD was already failing to launch the container before I changed the dataset's mountpoint.

Manually setting it back to legacy did not solve the problem either:

$ sudo zfs set mountpoint=legacy tank/lxd/containers/arma
$ zfs list
NAME                                USED  AVAIL     REFER  MOUNTPOINT
tank                                127G  1.63T     42.0K  /mnt/tank
tank/lxd                            127G  1.63T     48.0K  /mnt/tank/lxd
tank/lxd/containers                 127G  1.63T     42.0K  /mnt/tank/lxd/containers
tank/lxd/containers/arma            127G  1.63T      127G  legacy
tank/lxd/custom                    42.0K  1.63T     42.0K  /mnt/tank/lxd/custom
tank/lxd/deleted                    216K  1.63T     48.0K  /mnt/tank/lxd/deleted
tank/lxd/deleted/containers        42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/containers
tank/lxd/deleted/custom            42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/custom
tank/lxd/deleted/images            42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/images
tank/lxd/deleted/virtual-machines  42.0K  1.63T     42.0K  /mnt/tank/lxd/deleted/virtual-machines
tank/lxd/images                    42.0K  1.63T     42.0K  /mnt/tank/lxd/images
tank/lxd/virtual-machines          42.0K  1.63T     42.0K  /mnt/tank/lxd/virtual-machines
$ lxc start arma
Error: Storage pool "lxd" unavailable on this server

More logging info:

$ lxc warning list
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+------------------------------+
|                 UUID                 |                         TYPE                         | STATUS | SEVERITY | COUNT | PROJECT |          LAST SEEN           |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+------------------------------+
| 25ea39d5-3ade-4ecb-8934-2b8bb1be8259 | Couldn't find the CGroup network priority controller | NEW    | LOW      | 15    |         | Oct 18, 2022 at 1:06am (UTC) |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+------------------------------+
| 30f482fb-57f3-4b0d-9e9e-07ae776a0865 | Storage pool unavailable                             | NEW    | HIGH     | 30660 |         | Oct 18, 2022 at 7:51am (UTC) |
+--------------------------------------+------------------------------------------------------+--------+----------+-------+---------+------------------------------+

The last few lines from the log file:

time="2022-10-18T01:47:50-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set setuid=on exec=on devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-18T01:48:50-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-18T01:49:50-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-18T01:50:50-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set devices=on acltype=posixacl xattr=sa atime=off relatime=on mountpoint=legacy setuid=on exec=on tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd
time="2022-10-18T01:51:50-06:00" level=error msg="Failed mounting storage pool" err="Failed to run: zfs set atime=off relatime=on mountpoint=legacy setuid=on exec=on devices=on acltype=posixacl xattr=sa tank/lxd: exit status 1 (cannot unmount '/var/lib/snapd/hostfs/mnt/tank/lxd': pool or dataset is busy)" pool=lxd

Even rebooting the server after manually setting it back to legacy didn’t change the above error.

I'll spin up a fresh VM with the snap and set up an LXD ZFS pool on an existing dataset to show the config for a fresh pool; it may help identify the differences.

So setting up a fresh Ubuntu Jammy VM with a custom block volume attached for the ZFS pool (which will be /dev/sdb inside the VM):

lxc init images:ubuntu/jammy v1 --vm
lxc storage volume create default vol1 --type=block
lxc storage volume attach default vol1 v1
lxc config show v1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221014_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221014_07:42"
  image.type: disk-kvm.img
  image.variant: cloud
  volatile.base_image: 46912c71ebb5680803fa552982dbf96b72a822587cddae9135c3a74020c06218
  volatile.cloud-init.instance-id: 1d0b0b7b-8e95-48dc-802c-d3f5defac89e
  volatile.eth0.host_name: tap22b48d12
  volatile.eth0.hwaddr: 00:16:3e:8b:b5:da
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: e5e32747-a72e-4225-a05d-9940e5e96c1d
  volatile.vsock_id: "200"
devices:
  vol1:
    pool: default
    source: vol1
    type: disk
ephemeral: false
profiles:
- default
- vmzfs
stateful: false
description: ""
lxc start v1

Then inside the VM lxc shell v1:

apt install zfsutils-linux
zpool create mypool /dev/sdb
zfs list
NAME     USED  AVAIL     REFER  MOUNTPOINT
mypool   100K  9.20G       24K  /mypool

mount | grep mypool
mypool on /mypool type zfs (rw,xattr,noacl)
snap install lxd
lxd init --auto
lxc storage create zfs zfs source=mypool/lxd
Storage pool zfs created

zfs list
NAME                                  USED  AVAIL     REFER  MOUNTPOINT
mypool                                620K  9.20G       24K  /mypool
mypool/lxd                            288K  9.20G       24K  legacy
mypool/lxd/buckets                     24K  9.20G       24K  legacy
mypool/lxd/containers                  24K  9.20G       24K  legacy
mypool/lxd/custom                      24K  9.20G       24K  legacy
mypool/lxd/deleted                    144K  9.20G       24K  legacy
mypool/lxd/deleted/buckets             24K  9.20G       24K  legacy
mypool/lxd/deleted/containers          24K  9.20G       24K  legacy
mypool/lxd/deleted/custom              24K  9.20G       24K  legacy
mypool/lxd/deleted/images              24K  9.20G       24K  legacy
mypool/lxd/deleted/virtual-machines    24K  9.20G       24K  legacy
mypool/lxd/images                      24K  9.20G       24K  legacy
mypool/lxd/virtual-machines            24K  9.20G       24K  legacy

mount | grep mypool
mypool on /mypool type zfs (rw,xattr,noacl)

So we can see that the mypool dataset, which doesn't belong to the LXD storage pool, still has a mountpoint set and is mounted at /mypool, but the LXD datasets from mypool/lxd downward have their mountpoints set to legacy and are not mounted in the host's namespace.
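(If you want to confirm that, the same datasets should show up as mounted from inside the snap's mount namespace, e.g. something like the following, assuming the usual snapd namespace path:)

$ sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- mount | grep mypool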

Launching a container on the ZFS pool we can see its dataset getting created:

lxc launch images:ubuntu/jammy c1 -s zfs

zfs list mypool/lxd/containers/c1
NAME                       USED  AVAIL     REFER  MOUNTPOINT
mypool/lxd/containers/c1  7.21M  8.73G      482M  legacy

Then deleting the storage pool leaves just the original zpool dataset left:

lxc delete c1 -f
lxc storage delete zfs
Storage pool zfs deleted

zfs list
NAME     USED  AVAIL     REFER  MOUNTPOINT
mypool   286K  9.20G       24K  /mypool

So I suspect it is the default mount points on the datasets from tank/lxd downward (inclusive) that are causing the confusion when trying to unmount them from inside the snap mount namespace.

Setting those manually to legacy should then bring them into sync.
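Putting that together, a rough sequence for your system would look something like this (a sketch, not an exact procedure: it assumes nothing on the host still needs /mnt/tank/lxd mounted, and the dataset names match your zfs list output above):

# Stop whatever is holding /mnt/tank/lxd open on the host, then:
$ sudo zfs set mountpoint=legacy tank/lxd

# Any child dataset still showing a non-legacy mountpoint gets the same treatment, e.g.:
$ sudo zfs set mountpoint=legacy tank/lxd/containers/arma

# Restart LXD so it retries mounting the pool, then start the instance:
$ sudo systemctl restart snap.lxd.daemon
$ lxc start arma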

Thanks for the help. I think it would be an improvement if the logging messages were more helpful to the user. The log messages I had available didn't really help me figure it out on my own (i.e. I had to bother people here), nor did they ever lead me in the right direction.
