ZFS storage pool does not match VM root size?

So I’m a tad confused about something. I created a ZFS storage pool with a size of 20GB and created a new VM instance that uses that pool.
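
For reference, the pool was created with something along these lines (going from memory, so the exact command may differ slightly):

lxc storage create storagepool zfs size=20GB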

lxc launch xxx vmname -s storagepool --vm

  driver: zfs
  name: vmname
  space used: 450.81MiB
  total space: 17.92GiB

But when I check the size inside the VM:

/dev/root       9.6G  814M  8.8G   9% /
tmpfs           489M     0  489M   0% /dev/shm
tmpfs           196M  552K  195M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   14M   37M  27% /run/lxd_agent
/dev/sda15      105M  6.1M   99M   6% /boot/efi

Only 10GB is set, and the storage usage differs as well. I’m not having this issue with containers. Is this normal?

The VM disk itself is probably the correct size (check with cat /proc/partitions), but the partition and filesystem inside the VM may need to be grown.

You can do that with the growpart and resize2fs tools.
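
Inside the VM that would look something like this (assuming, based on the df output above, that the root filesystem is ext4 on partition 1 of /dev/sda; run as root, and note growpart comes from the cloud-guest-utils package if it isn’t already installed):

cat /proc/partitions    # confirm the disk itself is the full ~20GB
growpart /dev/sda 1     # grow the root partition to fill the disk
resize2fs /dev/sda1     # grow the ext4 filesystem to fill the partition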

Thanks, but is there any reason why it’s not using the disk size I set when I created the storage pool? And why does the usage differ between the storage pool and within the VM? Not sure if this is intended or a bug.

It’s probably because the VM image you’re using doesn’t run growpart+resize2fs on boot.
Most of our images do that automatically on first boot.
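
That first-boot growth is usually driven by cloud-init’s growpart/resizefs modules, so one way to check an image is to look for that configuration inside the VM (path and stanza are the usual cloud-init defaults, assuming the image ships cloud-init at all):

grep -A3 growpart /etc/cloud/cloud.cfg    # auto-growing images typically set mode: auto and devices: ['/'] here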

The usage outside not lining up with the usage inside is normal.
It can be lower due to how ZFS handles copy-on-write, but it can also be higher because the underlying disk grows as the VM uses it and can’t necessarily shrink in the same way when space is freed up inside the VM.
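
You can compare the two views yourself with something like this (adjust the dataset name to match your pool and VM; this assumes LXD’s usual virtual-machines/<name>.block layout for VM volumes on ZFS):

zfs list -o name,volsize,used,referenced storagepool/virtual-machines/vmname.block    # on the host: logical size vs. space actually referenced
fstrim -av    # inside the VM; whether freed blocks are returned to the pool depends on discard being passed through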

Ah, that explains it. Thank you! I was using ubuntu-minimal:jammy