Resizing container's root drive results in wrong value reported inside container

I'm seeing two cases where a container's root drive size, set via its profile and backed by a ZFS pool, is reported differently inside the container:

Scenario 1:
Most of the time the container's drive size reported by df -h is correct. However, after changing the size multiple times, df -h reports a value slightly higher or lower than what the container's profile is set to. Resizing the container back to its original 10GB causes it to report the correct 10GB value inside the container again.

Scenario 2:
I also have a container that was launched with a 500GB drive but is reporting only 467GB inside the container. This container has never been resized and has reported a wrong value from the beginning.

Running LXD 4.11/stable.

Any clarification is appreciated, thank you!

Example of resizes showing wrong values:

Launch container with 10GB
root@test5:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5   10G  633M  9.4G   7% /

Resize container to 20GB
root@test5:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5   20G  633M   19G   4% /

Resize container to 5GB (shows slightly wrong value)
root@test5:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5  5.3G  633M  4.7G  12% /

Resize container to 3GB (shows 0.5GB more)
root@test5:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5  3.5G  633M  2.8G  19% /
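
For reference, each resize above was applied by updating the root device size on the profile the container uses, along these lines (the profile name here is just illustrative, ours differs):

lxc profile device set test5-profile root size 20GB

and then checking df -h inside the container.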

Here are the ZFS properties for the container’s dataset after some of those changes.

root@box01:~$ zfs get all pool0/containers/test5 | grep quota
pool0/containers/test5  quota                 2.79G                                                                                   local
pool0/containers/test5  refquota              none

root@box01:~$ zfs get all pool0/containers/test5 | grep quota
pool0/containers/test5  quota                 9.31G                                                                                   local
pool0/containers/test5  refquota              none 

Example of container that was not resized but shows wrong value from the beginning:

Launched the container with 500GB; it has never been resized but shows a lower value.
root@app01:~# df -h
Filesystem                   Size  Used Avail Use% Mounted on
pool0/containers/app01  467G  2.5G  464G   1% /

In the above case, though, it's actually matching what is set as the ZFS quota:

zfs get all pool0/containers/app01 | grep quota
pool0/containers/app01  quota                 466G                                                                                            local
pool0/containers/app01  refquota              none

Try GiB instead of GB as that’s often what you want for storage.
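
The mismatched quotas above are just decimal vs binary units: as far as I can tell, LXD parses GB as 10^9 bytes, while ZFS reports sizes in binary (GiB) units. Using your own numbers:

3GB   = 3 × 10^9 B   ≈ 2.79GiB  (the 2.79G quota)
10GB  = 10 × 10^9 B  ≈ 9.31GiB  (the 9.31G quota)
500GB = 500 × 10^9 B ≈ 465.7GiB (the 466G quota on app01)

Setting the size with a GiB suffix in the profile avoids the conversion, e.g. (substitute your profile name):

lxc profile device set <profile> root size 5GiB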

Ah ok! Very interesting. Setting it to 5GiB in the profile results in:

Container shows:

root@test5:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5  5.7G  634M  5.0G  12% /

root@test5:~# df -H
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test5  6.1G  665M  5.4G  12% /

ZFS shows:

zfs get all pool0/containers/test5 | grep quota
pool0/containers/test5  quota                 5G  

So ZFS now matches what we set in the profile. What accounts for the difference between what the df commands report and what the ZFS dataset is set to? I apologize, that's probably a very simple question that I should understand.
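
One piece I did work out: df -h uses binary units (GiB) and df -H decimal (GB), and the two outputs above are consistent with each other, e.g. 5.7GiB × 2^30 / 10^9 ≈ 6.1GB and 634MiB ≈ 665MB. So the remaining puzzle is just the ~0.7G gap between df's Size and the 5G quota.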

Does the container have snapshots? That may be skewing the total usage.
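
You can check for snapshots with something like:

zfs list -t snapshot -r pool0/containers/test5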

Tested with two new containers below with no snapshots.

To clarify, I'm curious why df is reporting approximately 1GiB more space in Size than what the container was set with. Is this due to something with how df calculates total disk space?

I assumed that Size would show a total of 15GiB, matching what the container's profile was set to.

TLDR:

The df command shows a larger Size than what the container's profile was set to. Ex: container set to 15GiB, df shows 16GiB.

Thank you for any clarification!

Test results:

Test 6: Ubuntu 20.04 set with 15GiB:

root@test6:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test6   16G  633M   15G   4% /

zfs get quota pool0/containers/test6
NAME                    PROPERTY  VALUE  SOURCE
pool0/containers/test6  quota     15G    local

Also created a container from an image that's never been used before, to make sure there's nothing off with how ZFS accounts for that:

root@test7:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
pool0/containers/test7   16G  399M   15G   3% /

and without -h

root@test7:~# df
Filesystem             1K-blocks   Used Available Use% Mounted on
pool0/containers/test7  16127232 408320  15718912   3% /
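
If I'm reading the raw numbers right, Size is exactly Used + Available (408320 + 15718912 = 16127232 1K-blocks), and Available on its own is 15718912 / 2^20 ≈ 15.0GiB, i.e. basically the full 15GiB quota. So df appears to report Size as the quota plus what is already used, with -h then rounding 15.4GiB up to 16G.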

ZFS shows the correct value that was set in the profile:

zfs get quota pool0/containers/test7
NAME                    PROPERTY  VALUE  SOURCE
pool0/containers/test7  quota     15G    local

Profile for container:

lxc profile show test7
config:
  limits.cpu: "2"
  limits.memory: 1GB
  security.devlxd: "false"
  security.idmap.isolated: "true"
  security.nesting: "false"
description: ""
devices:
  root:
    path: /
    pool: default
    size: 15GiB
    type: disk
name: test7
used_by:
- /1.0/instances/test7
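
For completeness, the container was launched with that profile along these lines (the actual image name is omitted since it was a one-off):

lxc launch <image> test7 --profile test7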