LXC 3.0.3 resize ZFS pool

I have LXC 3.0.3 on Ubuntu 18.04.6 LTS, with a default storage pool of 100 GB. I want to increase it to 200 GB.

gpsemc@lxd15:~$ lxc storage show default
config:
  size: 100GB
  source: /var/lib/lxd/disks/default.img
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/***
- /1.0/containers/xyz
- /1.0/containers/Ubt20
- /1.0/containers/Ubuntu1804
- /1.0/containers/xyz1
- /1.0/containers/VMWXUBUNTU20
- /1.0/containers/buildEnvForXX
- /1.0/containers/buildEnvForXXX2004
- /1.0/containers/XZZ
- /1.0/containers/zZZi
- /1.0/images/4b350c31e7047b82533133c924f13bed8342f91b502ed9012395e4afbfebe9a7
- /1.0/images/ade430c33554a285f2427abbecb7f216e54dc9bc640641f9b4bf1fe267447c28
- /1.0/images/d58d3dcfc3d14ead8bf4b9ec85c3799b3fc5cb8e0b427e9d6813e226f9cee202
- /1.0/images/e3e1bd82cdc7fa1256cf2409dd8543630eefa1fca631ff0c78c0970babddc69f
- /1.0/images/e53fd879c785d18e7d4a8115dc57c45aa4104f339cac929e82670b9a721cc300
- /1.0/profiles/default
status: Created
locations:
- none

I tried to resize it, but it does not seem to have taken effect.

I ran the command below:
sudo truncate -s +100G /var/lib/lxd/disks/default.img

gpsemc@lxd15:~$ lxc storage list
| NAME       | DESCRIPTION | DRIVER | SOURCE                            | USED BY |
| default    |             | zfs    | /var/lib/lxd/disks/default.img    | 16      |
| secondpool |             | zfs    | /var/lib/lxd/disks/secondpool.img | 0       |

(secondpool is not used; I just created it to see whether I can expand default or migrate to a second pool.)

gpsemc@lxd15:~$ sudo ls -l /var/lib/lxd/disks/
[sudo] password for gpsemc:
total 97521464
-rw------- 1 root root 207374182400 May 29 12:40 default.img
-rw------- 1 root root 200000000000 May 29 11:54 secondpool.img

The size looks grown here…

It seems the /var/lib/lxd/disks/default.img disk has become 200 GB, but when I execute "lxc storage show default" it still shows 100GB.

Any idea?

That's normal: the size reported by "lxc storage show" is the one recorded in the database at creation time; it will never increase, even if the pool was correctly resized.

Note, though, that while you increased the backing file with the truncate command, you didn't resize the ZFS pool itself, so the size of the filesystem is most likely still the old one (you can confirm with "df -h" inside one of your containers).
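What truncate actually does can be illustrated on a scratch file (a hypothetical temp file created with mktemp, not the real pool image): it only changes the file's apparent size, sparsely, without writing data or resizing anything stored inside the file.

```shell
# Scratch-file demo, not the real pool image: "truncate -s +N" grows the
# file sparsely -- the apparent size changes, but no blocks are written,
# and whatever lives inside the file (here nothing; in the post, a ZFS
# pool) is not resized.
f=$(mktemp)
truncate -s 100M "$f"     # apparent size: 100 MiB
truncate -s +100M "$f"    # grow by another 100 MiB
stat -c '%s' "$f"         # prints 209715200 (200 MiB apparent size)
du -k "$f"                # allocated space stays near 0 KiB (sparse file)
rm -f "$f"
```

This is why the backing file already shows the larger size in ls -l while the pool, and the filesystems inside the containers, still see the old one.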

This is run on the host system:

This is run inside one of the containers:
gpsemc@lxd15:~$ lxc exec Chameleonr740 /bin/bash
root@Chameleonr740:~# df -h
Filesystem Size Used Avail Use% Mounted on
default/containers/Chameleonr740 4.5G 1.8G 2.7G 40% /
none 492K 0 492K 0% /dev
udev 63G 0 63G 0% /dev/tty
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 63G 24K 63G 1% /dev/shm
tmpfs 63G 176K 63G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup

OK, I got some clues from this post and managed to resize the disk successfully.
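The exact commands of the resolution are not quoted in this excerpt. For a loop-backed ZFS pool like this one, the fix typically amounts to telling ZFS to expand onto the grown backing file. A hedged sketch, not the poster's exact steps: the pool name and image path are taken from the post, zpool set autoexpand=on and zpool online -e are standard zpool subcommands, and the script only prints the commands unless DO_IT=1 is set.

```shell
# Hedged sketch: expand a loop-backed ZFS pool after its backing file has
# been enlarged with truncate. Pool name and image path come from the post;
# run as root on the LXD host. By default this only prints the commands;
# set DO_IT=1 to actually execute them.
POOL=default
IMG=/var/lib/lxd/disks/default.img

run() {
    if [ "${DO_IT:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run zpool set autoexpand=on "$POOL"   # let vdevs grow into newly added space
run zpool online -e "$POOL" "$IMG"    # expand the file-backed vdev in place
run zpool list "$POOL"                # SIZE should now reflect the new space
```

After this, df -h inside the containers should show the larger pool, while lxc storage show will still report the size recorded in the database at creation time, as explained earlier in the thread.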