In my case, I use the ZFS storage backend on a partition (instead of a loop file).
There should be some other LXD command that shows the total size of the ZFS pool, but I cannot find it at the moment.
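In the meantime, the pool size can be read directly with the ZFS tools (assuming your pool is named `lxd`, as in the listing further down; substitute your own pool name):

```shell
# Total pool size, allocated and free space at the pool level:
zpool list lxd

# USED/AVAIL as seen from the dataset side (this is what containers see):
zfs list lxd
```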
For the size discrepancy, it would make sense if, for example, you created the container and specified a maximum disk size, as shown in https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
Such a thing would make sense if the maximum disk size you specified for the container is smaller than the total ZFS pool size. After all, a container can get at most as much free space as the pool itself has.
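For reference, per the post linked above, such a limit is set on the container's root disk device like this (the container name `mycontainer` is a placeholder):

```shell
# Limit the container's root disk to 20GB; on ZFS this is enforced
# as a quota on the container's dataset:
lxc config device set mycontainer root size 20GB
```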
I destroyed one container (REFER 22.7G) with lxc delete and zfs destroy afterwards, but the available space didn’t increase.
zfs list | grep cntn1
NAME                                USED   AVAIL  REFER  MOUNTPOINT
lxd/containers/cntn1                103G   6.59G  50.3G  /var/lib/lxd/storage-pools/lxd/containers/cntn1
lxd/snapshots/cntn1                 483K   1.05T  24K    /lxd/snapshots/cntn1
lxd/snapshots/cntn1/stable_migrate  459K   1.05T  41.4G  /var/lib/lxd/storage-pools/lxd/snapshots/cntn1/stable_migrate
Do you know why, and how I can get the available space back?
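One thing I can check myself (a sketch, using the dataset names from my listing above) is whether snapshots are still referencing the destroyed data, since ZFS only frees blocks once no snapshot refers to them:

```shell
# List all snapshots under the pool; any snapshot of the deleted
# container will keep its blocks allocated:
zfs list -t snapshot -r lxd

# Break down where the space actually goes per dataset:
zfs list -o name,used,usedbysnapshots,usedbydataset -r lxd
```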
What I found is that I could set the root size to its current value plus 10GB and set a refreservation of 10GB; that would add available space and should, I hope, keep 10GB reserved for the container.
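If I understand the refreservation idea correctly, that would be something like the following (using my dataset name from the listing above; the 10G value is just my chosen reserve):

```shell
# Guarantee 10GB of pool space to this dataset, independent of
# what snapshots and other datasets consume:
zfs set refreservation=10G lxd/containers/cntn1

# Verify the setting took effect:
zfs get refreservation lxd/containers/cntn1
```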