Disk limits not showing in container

Hello,
My problem is that disk limits are not showing in the container. I'm using LXD 3.0.0 with a btrfs storage pool.

I applied the limit both through a profile and directly with:
lxc config device add cont1 root disk pool=lxd path=/ size=10GB
lxc config show --expanded cont1
shows the following:

devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: pciessd-storage
    size: 12GB
    type: disk

However, no matter how I set it,
df -h
in the container shows me the full size/usage of the disk my storage pool lives on (it shows root as /dev/loop2, if that helps).

What am I doing wrong?

That’s normal: btrfs doesn’t surface quotas in the disk usage it reports for its mounts.
The limit should still be enforced properly though (well, as well as btrfs enforces those anyway).

ZFS, LVM and Ceph will all get you a mount entry per container with the expected disk used/free values. The dir backend doesn’t do quotas, and the btrfs backend applies them per subvolume; subvolumes don’t get their own mount entries, so df shows no per-container used/free values.
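
If you want to double-check that the quota actually made it down to btrfs, you can look at the qgroups from the host. Something along these lines should work (the pool path here is just an example for the setup above, and it assumes quotas are enabled on the pool, which LXD does when you set a size):

sudo btrfs qgroup show -re /var/lib/lxd/storage-pools/pciessd-storage

That lists each subvolume’s qgroup with its referenced/exclusive usage and the configured limits, so the container’s subvolume should show a max referenced value matching the size you set.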

Thank you for the info @stgraber!

I need to be able to view the container filesystem from the host. Right now I can, at
/var/lib/lxd/storage-pools/{poolname}/containers/{cont_name}/rootfs

Am I able to do that with those other storage methods?

With the other backends, you’ll only see the container filesystems while the containers are running, since they’re mounted on demand.
Unfortunately there’s no magic filesystem that does everything, otherwise we’d only support that :slight_smile:
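
To see from the host whether a given container’s filesystem is currently reachable, you can just check for the mount and the path (the container and pool names here are only examples):

findmnt | grep cont1
ls /var/lib/lxd/storage-pools/pciessd-storage/containers/cont1/rootfs

With ZFS, LVM or Ceph the mount (and the rootfs content) only shows up while the container is running; with btrfs the subvolume stays browsable under the pool path even though it has no mount entry of its own.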

Is there a way to manually mount a volume, say if I need to edit a file while the machine is stopped?

lxc file edit will let you edit files even when the container is stopped; it mounts and unmounts things as needed. If that doesn’t work for you, you’ll need to find the right dataset and run zfs mount on it.
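
For example (container name and file path are just illustrative):

lxc file edit cont1/etc/fstab

And if you really need the whole rootfs mounted, on ZFS it would be roughly this, assuming the default LXD dataset layout and a placeholder pool name:

zfs list -t filesystem | grep cont1
zfs mount pciessd-storage/containers/cont1

The exact dataset name depends on how your pool is laid out, and you’ll probably want to zfs unmount it again before starting the container.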
