Again: Issues expanding a container's volume in an LXD 3.x LVM pool

According to previous posts the issue is resolved but for me it still doesn’t work flawlessly.

I use LXD 3.13 and created an LVM pool and a container using that pool, with a thin-provisioned LV created by LXD during the launch process. The initial maximum size is 10GB by default, which is fine for many cases.

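For context, the pool and container were created along these lines (a sketch; the image and exact options are assumptions, the volume group name lxdvg is taken from the df output below):

# LVM-backed pool on an existing VG, thin-provisioned, XFS volumes (assumed options)
lxc storage create lvmpool lvm lvm.vg_name=lxdvg lvm.use_thinpool=true volume.block.filesystem=xfs
# launch a container on that pool (image name is only an example)
lxc launch images:centos/7 host-default -s lvmpool
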
I used LVM tools to expand the logical volume and the XFS filesystem, as advised in previous posts, since LXD provides no tools to expand an LV. The container recognises the change immediately, and inside the container everything looks OK.

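Concretely, that amounted to something like this on the host (a sketch, not LXD tooling; the LV path matches the df output below, and xfs_growfs is pointed at the container's mount point on the host):

# grow the thin LV to the new maximum size
lvextend -L 50G /dev/lxdvg/containers_host--default
# grow the XFS filesystem online via the container's mount point
xfs_growfs /var/lib/lxd/storage-pools/lvmpool/containers/host-default
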
[root@host-default ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/lxdvg/containers_host--default   50G  457M   50G   1% /
none                                 492K     0  492K   0% /dev
none                                  10M     0   10M   0% /sys/fs/cgroup
devtmpfs                             3.9G     0  3.9G   0% /dev/tty
tmpfs                                100K     0  100K   0% /dev/lxd
tmpfs                                100K     0  100K   0% /dev/.lxd-mounts
tmpfs                                3.9G     0  3.9G   0% /dev/shm
tmpfs                                3.9G  8.1M  3.9G   1% /run

Nevertheless, an 'lxc file push …' of a 15 GB backup file into the container fails with "… no space left on device …" after about 6 GB. And all LXD metadata still report the initial size of 10 GB:

[root@hosting ~]# lxc storage volume show lvmpool container/host-default
config:
  block.filesystem: xfs
  block.mount_options: discard
  size: 10GB
description: ""
name: host-default
type: container
used_by:
- /1.0/containers/host-default
location: none

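For reference, the failing push mentioned above was of this general form (the destination path inside the container is only an illustration):

# push the backup file from the host into the container
lxc file push /root/jms-transfer.tar host-default/root/
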
The LXC commands I'm aware of for setting various configuration options refuse to modify e.g. the "size:" value.
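For instance, I could not change it with a command along these lines (shown only to illustrate the kind of invocation I mean; the exact commands I tried may have differed):

lxc storage volume set lvmpool container/host-default size 50GB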

But I can copy the file directly into the container:
cp /root/jms-transfer.tar /var/lib/lxd/storage-pools/lvmpool/containers/host-default/rootfs/

I could change the ownership to the container's root UID/GID (the unprivileged container's idmap base, 1000000 here):
chown 1000000.1000000 jms*

and then unpack the tar and use the files without issues so far, and the container filesystem picked up the change perfectly:
[root@host-default ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/lxdvg/containers_host--default   50G   30G   21G  59% /
none                                 492K     0  492K   0% /dev
none                                  10M     0   10M   0% /sys/fs/cgroup
devtmpfs                             3.9G     0  3.9G   0% /dev/tty
tmpfs                                100K     0  100K   0% /dev/lxd
tmpfs                                100K     0  100K   0% /dev/.lxd-mounts
tmpfs                                3.9G     0  3.9G   0% /dev/shm
tmpfs                                3.9G  8.1M  3.9G   1% /run

But I don’t think this workaround is appropriate for everyday work.

So my questions are:
Is this a bug, or is it just cosmetic, because the storage metadata do not affect daily operation in any way?
How can I adjust the metadata?
Or is it safe to use the container with wrong metadata in production (nothing fancy, just standard web and mail services)?

LXD does know how to grow an LV and its XFS filesystem; simply override the disk device on your container and set the expected size there.

This is usually either:

lxc config device override CONTAINER root size=50GB

or

lxc config device set CONTAINER root size 50GB

This will then require a container restart to be effective.
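With the names from the post above, that would look something like this (a sketch; the last command just re-checks the metadata):

# create a container-local copy of the root disk device with the new size
lxc config device override host-default root size=50GB
# restart so the new size takes effect
lxc restart host-default
# check whether the recorded size now matches
lxc storage volume show lvmpool container/host-default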

LXD doesn’t use the recorded size value other than when performing a clean resize or for reporting purposes, so you should be fine with what you did manually.