Cannot remove ZFS storage if non-empty?

Running LXD 3.19 from the snap. I am moving all my LXD ZFS storage pools under an “lxd” dataset.

# lxc storage list
+---------+-------------+--------+-----------+---------+
|  NAME   | DESCRIPTION | DRIVER |  SOURCE   | USED BY |
+---------+-------------+--------+-----------+---------+
| data    |             | zfs    | data      | 0       |
+---------+-------------+--------+-----------+---------+
| data1   |             | zfs    | data1/lxd | 0       |
+---------+-------------+--------+-----------+---------+
| default |             | zfs    | vm/lxd    | 1       |
+---------+-------------+--------+-----------+---------+
| vm      |             | zfs    | vm        | 0       |
+---------+-------------+--------+-----------+---------+

Volumes:

# lxc storage volume list vm
+------+------+-------------+---------+
| TYPE | NAME | DESCRIPTION | USED BY |
+------+------+-------------+---------+

Deleting the storage:

# lxc storage delete vm     
Error: ZFS pool has leftover datasets: lxd/containers

I also don’t need the “data” zpool in LXD (I’m using it for something else and it has data on it):

# lxc storage delete data 
Error: ZFS pool has leftover datasets: backups/libvirt

Can you show zfs list -t all?

Normally we have logic which skips “expected” datasets and only complains if something unexpected is present.
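
For instance, something like this (just a sketch, depth-limited to the pool's direct children; substitute your pool name for vm) shows what that check is looking at:

# zfs list -d 1 -o name,used,mountpoint vm

Anything in there that LXD did not create itself gets treated as unexpected.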

So it might be a bug?

# zfs list -t all | grep vm
vm                                                                           38,2G  40,3G    20K  /var/lib/snapd/hostfs/mnt/vm
vm/libvirt                                                                     24K  40,3G    24K  /var/lib/snapd/hostfs/mnt/vm/libvirt
vm/lxd                                                                        240K  40,3G    24K  none
vm/lxd/containers                                                              24K  40,3G    24K  none
vm/lxd/custom                                                                  24K  40,3G    24K  none
vm/lxd/deleted                                                                120K  40,3G    24K  none
vm/lxd/deleted/containers                                                      24K  40,3G    24K  none
vm/lxd/deleted/custom                                                          24K  40,3G    24K  none
vm/lxd/deleted/images                                                          24K  40,3G    24K  none
vm/lxd/deleted/virtual-machines                                                24K  40,3G    24K  none
vm/lxd/images                                                                  24K  40,3G    24K  none
vm/lxd/virtual-machines                                                        24K  40,3G    24K  none
vm/opnsense                                                                  38,1G  73,3G  4,90G  -
vm/opnsense@autosnap_2020-02-03_14:45:00_monthly                                0B      -  4,71G  -
vm/opnsense@autosnap_2020-02-03_14:45:00_daily                                  0B      -  4,71G  -
vm/opnsense@autosnap_2020-02-04_03:20:13_daily                               10,2M      -  4,69G  -
vm/opnsense@autosnap_2020-02-04_11:00:03_hourly                              5,56M      -  4,68G  -
vm/opnsense@autosnap_2020-02-04_12:00:02_hourly                              3,02M      -  4,68G  -
vm/opnsense@autosnap_2020-02-04_13:00:00_hourly                              2,75M      -  4,68G  -
vm/opnsense@autosnap_2020-02-04_14:00:03_hourly                              3,11M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_15:00:02_hourly                              2,96M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_16:00:01_hourly                              2,86M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_17:00:00_hourly                              2,90M      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_18:00:00_hourly                              3,03M      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_19:00:03_hourly                              2,99M      -  4,90G  -
vm/opnsense@syncoid_serveur_2020-02-05:06:56:19                              1,88M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_20:00:02_hourly                              1,85M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_21:00:03_hourly                              2,92M      -  4,91G  -
vm/opnsense@autosnap_2020-02-04_21:30:00_frequently                          3,31M      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_21:45:02_frequently                          3,46M      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_22:00:02_hourly                                 0B      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_22:00:02_frequently                             0B      -  4,90G  -
vm/opnsense@autosnap_2020-02-04_22:15:00_frequently                          3,52M      -  4,90G  -
vm/snapshots                                                                   19K  40,3G    19K  none
# lxc storage delete vm
Error: ZFS pool has leftover datasets: lxd/containers

Yeah, I’ll have to take a look at that logic, there must be a bug in there.

Ah no, the logic is actually correct.

Your vm storage pool maps to the vm zpool.
That zpool contains:

  • libvirt
  • lxd/*

Neither of those are expected datasets. Expected datasets would be, for example:

  • vm/containers

Not:

  • vm/lxd/containers

You seem to have registered both the root of the pool (vm) and a dataset from that pool (vm/lxd) as separate storage pools in LXD, which is what causes this behavior.
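
To make that concrete, here is a rough re-creation of the check from the shell (a sketch; the expected names are taken from the LXD-managed layout visible under vm/lxd in your listing):

# zfs list -H -o name -d 1 vm | tail -n +2 | sed 's|^vm/||' | grep -Ev '^(containers|custom|deleted|images|virtual-machines)$'

On your vm pool that prints libvirt, lxd, opnsense and snapshots, none of which LXD created at the top level of that pool, hence the error.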

In your case, you’d need to (rough commands are sketched after this list):

  • lxc storage delete default
  • Manually delete vm/libvirt
  • Then you could delete vm
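
Roughly, for those three steps (a sketch; double-check the dataset name, and copy vm/libvirt elsewhere first, e.g. with zfs send/receive, if you still need that data):

# lxc storage delete default
# zfs destroy -r vm/libvirt
# lxc storage delete vm

Your listing also shows vm/opnsense and vm/snapshots sitting directly on that pool; if the check flags those as well, they’d likewise need to be moved off or destroyed before the final delete succeeds.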

Remember that LXD assumes complete ownership of whatever ZFS pool/dataset you give it; if you later create other things in there, you may have a bad day.

Your setup, with both the root of the pool (vm) and a dataset inside it (vm/lxd) registered as LXD storage pools, will keep leading to problems such as this one.
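
If you want the vm zpool to keep holding both LXD and non-LXD data, the safer pattern is to hand LXD a dedicated dataset and never register the pool root itself, something like this (a sketch with example names, not meant to be run as-is on your current setup):

# zfs create vm/lxd
# lxc storage create default zfs source=vm/lxd
# zfs create vm/libvirt

That way LXD owns vm/lxd and everything below it, while your own data sits in sibling datasets that LXD never touches.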