How to change the disk space of a virtual machine

Hello.
I tried to extend the root disk of a virtual machine using the following method, but it failed.
First, I used the lxc command to create the QEMU VM and override the size of its root disk:

lxc init ubuntu:20.04 test-vm -p network_bridge --vm 
lxc config device override test-vm root size=32GiB
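As a sanity check (my own addition, not a step from the original post), the value the override wrote can be read back with lxc config device get; it should report the overridden size:

```shell
# Read back the "size" key of the root disk device on the instance.
# "test-vm" and "root" match the instance and device names used above.
lxc config device get test-vm root size
# should print the configured value, e.g. 32GiB
```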

The state of the virtual machine after execution is as follows.

lxc config show test-vm 
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20200921.1)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20200921.1"
  image.type: disk-kvm.img
  image.version: "20.04"
  volatile.apply_template: create
  volatile.base_image: a8dd6af5ec53b8d6baa3df8a92fc69b38967732de3f8602d1dfc18b2b6cedf18
  volatile.eth0.hwaddr: 00:16:3e:a8:e2:f3
devices:
  root:
    path: /
    pool: default
    size: 32GiB
    type: disk
ephemeral: false
profiles:
- network_bridge
stateful: false
description: ""

I then started the VM and opened a shell in it to check whether the internal file system had been expanded.

lxc start test-vm
lxc exec test-vm bash

Immediately after entering the VM, the state was as follows.
The root file system is 9 GB, which matches the current disk size, but the disk itself has not been resized to the expected 32 GB.

root@test-vm:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            484M     0  484M   0% /dev
tmpfs            99M  548K   99M   1% /run
/dev/sda1       8.9G  1.2G  7.7G  14% /
tmpfs           494M     0  494M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           494M     0  494M   0% /sys/fs/cgroup
/dev/sda15      105M  3.9M  101M   4% /boot/efi
config          9.4G  6.2M  9.4G   1% /run/lxd_config/9p
/dev/loop0       31M   31M     0 100% /snap/snapd/9279
/dev/loop1       56M   56M     0 100% /snap/core18/1885
/dev/loop2       71M   71M     0 100% /snap/lxd/16922
root@test-vm:~# cat /proc/partitions 
major minor  #blocks  name

   8        0    9765624 sda
   8        1    9651943 sda1
   8       14       4096 sda14
   8       15     108544 sda15
   7        0      30992 loop0
   7        1      56648 loop1
   7        2      72256 loop2

Checking the size of the underlying ZFS volume on the host also shows 9 GB:

sudo zfs get volsize default/virtual-machines/test-vm.block 
NAME                                    PROPERTY  VALUE    SOURCE
default/virtual-machines/test-vm.block  volsize   9.31G    local

As a workaround, I stopped the virtual machine, changed the volume size directly with zfs, and booted it again:

lxc stop test-vm
sudo zfs set volsize=32GB default/virtual-machines/test-vm.block 
lxc start test-vm

This time 32 GB was allocated to the virtual machine, and the root file system grew to match:

root@test-vm:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        31G  1.2G   30G   4% /
devtmpfs        493M     0  493M   0% /dev
tmpfs           494M     0  494M   0% /dev/shm
tmpfs            99M  536K   99M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           494M     0  494M   0% /sys/fs/cgroup
/dev/sda15      105M  3.9M  101M   4% /boot/efi
/dev/loop0       31M   31M     0 100% /snap/snapd/9279
/dev/loop1       71M   71M     0 100% /snap/lxd/16922
/dev/loop2       56M   56M     0 100% /snap/core18/1885
config          9.4G  6.2M  9.4G   1% /run/lxd_config/9p
root@test-vm:~# cat /proc/partitions 
major minor  #blocks  name

   8        0   33554432 sda
   8        1   33440751 sda1
   8       14       4096 sda14
   8       15     108544 sda15
   7        0      30992 loop0
   7        1      72256 loop1
   7        2      56648 loop2
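As an aside (my own note, not from the original post): the file system growth seen above is done by the guest itself at boot. If a guest image does not grow its root file system automatically after the disk is enlarged, it can be done by hand; a sketch using cloud-utils' growpart and resize2fs, assuming an ext4 root on /dev/sda1 as in this VM:

```shell
# Inside the guest, after the virtual disk itself has been enlarged:
growpart /dev/sda 1   # extend partition 1 to fill the disk (from cloud-utils)
resize2fs /dev/sda1   # grow the ext4 file system to the new partition size
df -h /               # verify the new size
```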

However, I don’t think this is the legitimate way to do it. What is the recommended way to resize an instance's disk in LXD?

environment

environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate:
  certificate_fingerprint: 
  driver: lxc
  driver_version: 4.0.4
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "false"
    seccomp_listener: "false"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 4.19.0-11-amd64
  lxc_features:
    cgroup2: "true"
    devpts_fd: "false"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "false"
  os_name: Debian GNU/Linux
  os_version: "10"
  project: default
  server: lxd
  server_clustered: false
  server_name: server1
  server_pid: 14043
  server_version: "4.6"
  storage: zfs
  storage_version: 0.8.4-2~bpo10+1

There was a bug in LXD which was causing this issue.

We’ve since fixed it and you’ll find the fix in the candidate channel right now.
As we don’t push fixes on Fridays or over the weekend, this will roll out to users on Monday or Tuesday (Monday is a public holiday in North America so we may prefer Tuesday).

Thank you for letting me know.
I’ll wait for the update.

Until then, feel free to run snap refresh lxd --candidate, that will get you the fix immediately, then once the fix is in stable you can go back to it with snap refresh lxd --stable.
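For reference, once the fixed LXD is installed, the resize should work with lxc alone; a sketch of the intended sequence, using the instance name and size from the question (VM disk size changes require the instance to be stopped):

```shell
lxc stop test-vm
lxc config device set test-vm root size=32GiB   # resize via LXD, not zfs directly
lxc start test-vm
```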

After updating LXD through snap, I was able to resize the disk correctly using only the lxc command.
Thank you for the clear explanation!