Disk size in config (lxc info cntn1) is bigger than inside the LXC container (df -h inside cntn1)

I have LXD/LXC version 3.0.1 (Ubuntu 17.10)

Could you explain, please?

The disk size in the container config is 110GB:

  root:
    path: /
    pool: lxd
    size: 110GB
    type: disk

But the disk size inside the container is 59GB (output of df -h inside the container):

Filesystem             Size  Used Avail Use% Mounted on
lxd/containers/cntn1   59G   50G  9.4G  85% /
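
If it helps, here is a sketch of how the configured limit could be cross-checked against what ZFS actually enforces (the dataset name is taken from the df output above; whether LXD uses quota or refquota depends on the pool's zfs.use_refquota setting, and the first command only works if the root device is defined on the container itself rather than only in a profile):

$ lxc config device get cntn1 root size
$ sudo zfs get quota,refquota,used,available lxd/containers/cntn1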

Hi Victor!

This is an LXD question, so I switched the category from LXC to LXD.

Which command did you use to get the following output? What is your storage backend? Normally there is no size: field.

  root:
    path: /
    pool: lxd
    size: 110GB
    type: disk

Hi. Thanks.
I used the command:
lxc info cntn1

I use ZFS. Is that the correct answer about the storage backend, or do I need to add more information?

 lxc storage list
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd  |             | zfs    | lxd    | 254     |
+------+-------------+--------+--------+---------+

When I run lxc info mycontainer, I get a list of the resources being used by that specific container, not the total size of the pool.

If I run instead

$ lxc storage show mypool
config:
  source: lxd
  volatile.initial_source: /dev/sda1
  zfs.pool_name: mypool
...

In my case, I use the ZFS storage backend on a partition (instead of a loop file).
There should be some other LXD command that shows the total size of the ZFS pool, but I cannot find it at the moment.
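
In the meantime, the total pool size can be read directly from ZFS; a sketch, with the pool name lxd taken from your lxc storage list output:

$ sudo zpool list lxd
$ sudo zfs list lxd

If your LXD version has it, lxc storage info lxd should also report the pool's total and used space, but I have not checked whether 3.0.1 includes that subcommand.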

For the size discrepancy, it would make sense if, for example, you created the container and specified a maximum disk size, as shown in https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
Such a limit would explain what you see if the maximum disk size you specified for the container is smaller than the total ZFS pool size, because in the end a container can get at most as much free space as the pool has.
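
For reference, a sketch of how such a limit is usually set and then verified on the ZFS side (container and dataset names taken from your output; with the ZFS backend, LXD normally translates the size into a quota or refquota on the container's dataset):

$ lxc config device set cntn1 root size 110GB
$ sudo zfs get quota,refquota lxd/containers/cntn1

Note that lxc config device set only works if the root device exists on the container itself; if it only comes from a profile, the size would have to be set there instead.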

Thanks.
I checked:

zfs list -o space | grep cntn1
NAME                               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
lxd/containers/cntn1               6.98G   103G     53.3G   49.7G             0B         0B
lxd/snapshots/cntn1                1.05T   590K        0B     24K             0B       566K
lxd/snapshots/cntn1/cntn1_snap     1.05T   107K        0B    107K             0B         0B
lxd/snapshots/cntn1/stable_migrate 1.05T   459K        0B    459K             0B         0B

The snapshots were the reason:

zfs list | grep cntn1
NAME                               USED  AVAIL  REFER  MOUNTPOINT
lxd/containers/cntn1               104G  6.46G  50.2G  /var/lib/lxd/storage-pools/lxd/containers/cntn1
lxd/snapshots/cntn1                590K  1.05T    24K  /lxd/snapshots/cntn1
lxd/snapshots/cntn1/cntn1_snap     107K  1.05T  22.7G  /var/lib/lxd/storage-pools/lxd/snapshots/cntn1/cntn1_snap
lxd/snapshots/cntn1/stable_migrate 459K  1.05T  41.4G  /var/lib/lxd/storage-pools/lxd/snapshots/cntn1/stable_migrate

I destroyed one (REFER 22.7G) with lxc delete and zfs destroy afterwards, but the available space didn't increase.

zfs list | grep cntn1
NAME                               USED  AVAIL  REFER  MOUNTPOINT
lxd/containers/cntn1               103G  6.59G  50.3G  /var/lib/lxd/storage-pools/lxd/containers/cntn1
lxd/snapshots/cntn1                483K  1.05T    24K  /lxd/snapshots/cntn1
lxd/snapshots/cntn1/stable_migrate 459K  1.05T  41.4G  /var/lib/lxd/storage-pools/lxd/snapshots/cntn1/stable_migrate
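
For reference, a sketch of how one could check whether the space is still held by ZFS snapshots on the container dataset itself (the USEDSNAP column above suggests they might be; the dataset name is assumed from the listings):

$ sudo zfs list -t snapshot -r -o name,used,refer lxd/containers/cntn1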

Do you know why, and how I can get the available space back?

What I found: I could increase the root size by 10GB and set refreservation to 10GB, which would add available space and, I hope, keep 10GB of space reserved for the container.
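
A sketch of what that plan could look like as commands (the 120GB value is my assumption of "root size plus 10GB" on top of the current 110GB; refreservation guarantees a minimum amount of space for the dataset itself):

$ lxc config device set cntn1 root size 120GB
$ sudo zfs set refreservation=10G lxd/containers/cntn1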