Resize container's volume

Hello People :slight_smile:

I’ve been trying to set a size limit on one of my containers. Unfortunately, every time I try to do so, like this:
sudo lxc config device set mail2 root size 50GB

I receive this error:
Error: The device doesn't exist

even though mail2 exists and I didn’t change anything about its volume :confused:

The thing is, I tried to get more information about the attached devices this way:
sudo lxc config device show mail2

but it only shows me a list without any disk device in it:

lxdbr0:
  nictype: bridged
  parent: lxdbr0
  type: nic

So I’m a bit confused right now :confused: and of course the container runs properly.

I use ZFS as the storage backend.

And of course, I’m using LXD 3.13.

Containers have some devices directly assigned to them and some inherited from a profile.

In your case, your root device comes from a profile.

To override it with a container-local device that sets the quota you want, you can do:

lxc config device override mail2 root size=50GB
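If you want to confirm where the root device comes from first, you can compare the expanded config with the container-local devices, for example (this assumes the container uses the default profile):

lxc config show mail2 --expanded
lxc config device show mail2
lxc profile device show default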

Well, now it seems to work, but it gives me this:

Error: Failed to set ZFS config: cannot set property for 'storage/containers/mail2': size is less than current used or reserved space

Even though it uses less than 5 GB :confused:

admin@websrv1:~$ sudo lxc exec mail2 -- df -h
Filesystem Size Used Avail Use% Mounted on
storage/containers/mail2 1.9T 2.0G 1.9T 1% /
none 492K 0 492K 0% /dev
udev 7.8G 0 7.8G 0% /dev/tty
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 505M 7.4G 7% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/999

I suppose I can’t actually reduce its size because of a ZFS restriction? :confused:
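I guess I could also check what ZFS itself accounts for on that dataset (name taken from the df output above) with something like:

sudo zfs list -o space storage/containers/mail2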

I tried that again (sudo lxc config device override mastodon root size=50GB), after stopping the whole container of course, and it keeps giving me this error:

Error: Failed to set ZFS config:

sudo lxc info mastodon also gives this result:

Disk usage:
root: 163.37GB

even though sudo lxc exec mastodon -- df -h reports this:

Filesystem Size Used Avail Use% Mounted on
storage/containers/mastodon 2.0T 17G 1.9T 1% /
none 492K 4.0K 488K 1% /dev
udev 7.8G 0 7.8G 0% /dev/tty
tmpfs 100K 0 100K 0% /dev/lxd
/dev/md2 2.7T 2.5T 92G 97% /data
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 156K 7.8G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup

Not sure if it applies to ZFS, but this is how I do it with BTRFS:

Set the root device to an initial value:

lxc config device override my_container root size=2GB

To change the root disk size AFTER setting the initial value:

lxc config device set my_container root size 1GB

I just verified the above works on LXD 3.15 running on Debian 10.
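If you want to double-check that the new value took effect, this should print it back:

lxc config device get my_container root size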

Nah, the first one keeps giving me an Error: The device doesn't exist even though the container does exist.

Well, what I finally did was create a new, smaller pool and move all my containers into it. That did the job.
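For reference, the rough idea was something like this (the pool name and size here are just placeholders, and I believe the temporary rename is needed when moving a container between pools on the same host):

sudo lxc storage create smallpool zfs size=100GB
sudo lxc stop mail2
sudo lxc move mail2 mail2-tmp -s smallpool
sudo lxc move mail2-tmp mail2
sudo lxc start mail2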

For ZFS, use the following method from the LXD documentation to change the size:

Growing a loop backed ZFS pool

LXD doesn’t let you directly grow a loop backed ZFS pool, but you can do so with:

sudo truncate -s +5G /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=on <POOL>
sudo zpool online -e <POOL> /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=off <POOL>

NOTES:

  1. For users of snap, use /var/snap/lxd/common/lxd/ instead of /var/lib/lxd/
  2. The pool name is default by default.

Refer to “Growing a loop backed ZFS pool” in the LXD documentation.
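Once the pool has been grown, something like this should confirm the new size (same pool name as above):

sudo zpool list <POOL>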