Consistent Resizing of ZFS Storage Pools

lxd version 4.0.8 via snap on Ubuntu 20.04.3, kernel 5.4.0

I initially set the default ZFS pool size to 1TB. To resize it, I ran:

lxc storage set default volume.size 1800GB
lxc storage set default size 1800GB

PS: Is it correct to assume there are no differences between volume.size and size when you have a single volume in the storage pool?

Then I had to

sudo truncate -s +800G /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=on default
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=off default

It turns out I can truncate it to any size I want, regardless of available disk space. While it would be interesting to hear why that is possible, I am more interested in making the profile settings and ZFS consistent. Right now they are not (1.05TB + 775G is not 1800GB).


I’d like to make the two match at 1.8TB, and leave the rest for the host system.
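Part of the apparent mismatch is likely just units: LXD treats GB as a decimal unit (10^9 bytes), while zpool reports sizes in binary units (GiB/TiB). A quick sketch of the conversion, using plain shell arithmetic (no LXD or ZFS involved):

```shell
# Compare 1800 decimal GB (as given to LXD) against binary GiB (as zpool shows it).
bytes=$((1800 * 1000 * 1000 * 1000))   # 1800 GB in bytes
gib=$((bytes / 1024 / 1024 / 1024))    # the same amount expressed in GiB
echo "${gib} GiB"                      # 1800 GB is only about 1676 GiB
```

So a pool requested as 1800GB will never show up as 1.8T in zpool output; it is closer to 1.64 TiB.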

volume.size refers to the size of volumes that will be created on your pool. It should definitely not be set to the entire pool, or it would make it quite easy for a single instance to use all the space :slight_smile:

You should probably just lxc storage unset default volume.size and let LXD handle the default in your case.

The size property can also be ignored. It records the size originally requested by the user when the pool was created, but is otherwise not currently used by LXD.


Thanks, Stéphane!

So I can truncate up and down as desired, and it is up to me to keep it under the physical maximum? Similar to thin-provisioned LVM?
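A minimal demonstration of why truncate succeeds regardless of free space: it creates a sparse file, whose apparent size is independent of the blocks actually allocated on disk. (This uses a throwaway temp file, not the real default.img, and GNU stat/du as found on Ubuntu.)

```shell
# Sparse-file demo: truncate sets the apparent size without allocating blocks.
f=$(mktemp)
truncate -s 1800G "$f"                 # apparent size: 1800 GiB
apparent=$(stat -c %s "$f")            # logical size in bytes
actual=$(du -B1 "$f" | cut -f1)        # bytes actually allocated on disk (~0)
echo "apparent=${apparent} actual=${actual}"
rm -f "$f"
```

Exactly as with thin provisioning, nothing stops the apparent size from exceeding real capacity, so it is up to you to keep it under the physical maximum.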

And are these steps necessary after truncating? I am drawing them from this post:

sudo zpool set autoexpand=on default
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=off default

Also, what is the default maximum per volume? I have some with hundreds of GB, and the smallest one is just shy of 50GB.

ZFS didn’t use to support shrinking at all. I believe this may have changed recently, but I would still very much not recommend it.

On ZFS, containers and storage volumes are effectively unlimited unless configured otherwise; virtual machines get a 10GiB volume.


But volume size auto-expands as needed, right? That seems to be the behavior now; all my CTs are > 10GiB.

Out of the box there are no quotas set on the filesystem datasets as used for containers and custom storage volumes.

The only fixed-size objects in ZFS are volumes (zvols), and those are only used for block-based custom volumes and for VMs. They default to 10GiB and can be grown later by manually overriding their size, but they cannot be shrunk.
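For example, growing (never shrinking) a VM's root volume past the default is typically done with a per-instance device override; the instance name "v1" below is a placeholder, and only the arithmetic line actually executes here:

```shell
# Hedged sketch: grow a VM root volume for a hypothetical instance "v1":
#   lxc config device override v1 root size=20GiB
# The 10GiB default, expressed in bytes:
default_bytes=$((10 * 1024 * 1024 * 1024))
echo "$default_bytes"                  # 10737418240
```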
