I have a few test Windows 11 VMs whose root device I had to increase to 110 GB to install the 24H2 update.
After I completed the Windows update and cleaned up the installation, I shrank the Windows partition back down to 50 GB. Is there any way to shrink or replace the root disk of a VM?
I’ve already tried dumping the partitions with Clonezilla, creating a new empty VM with a small root device, adding a custom volume, and restoring to that custom volume. When I boot from this custom volume (boot.priority=10) in the console, the system seems to boot but ends up with a black screen.
Does anyone have a simpler procedure, or any other ideas?
Would it be ok to replace the zvol directly using zfs commands?
Incus really doesn’t like shrinking VM block devices because of the obvious risk of data loss.
That said, you could (very, very carefully) change the value directly in ZFS with zfs set volsize; once that’s done, you will need to update the DB directly to reflect the new value.
Most likely you can check that you only have one VM with 110GiB by doing incus admin sql global "SELECT * FROM instances_devices_config WHERE value='110GiB'", and if you do, then use incus admin sql global "UPDATE instances_devices_config SET value='50GiB' WHERE value='110GiB'"
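For reference, a minimal sketch of the whole sequence, assuming a ZFS-backed pool whose dataset is hdd1 and an instance named myvm (both placeholders), and that the guest partitions have already been shrunk well below the new size:

incus stop myvm
zfs set volsize=50G hdd1/virtual-machines/myvm.block   # irreversible: anything beyond 50GiB is lost
incus admin sql global "UPDATE instances_devices_config SET value='50GiB' WHERE value='110GiB'"   # only safe if the SELECT above returned a single row
incus start myvm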
That worked more or less as expected. The volume is reduced and the block device now has the expected size.
The only remaining irregularity is that the disk usage is displayed incorrectly, still showing the old size:
Reducing the block device size (be careful!):
zfs get volsize hdd1/virtual-machines/win11-intra-old.block
NAME                                         PROPERTY  VALUE  SOURCE
hdd1/virtual-machines/win11-intra-old.block  volsize   110G   local
zfs set volsize=65GB hdd1/virtual-machines/win11-intra-old.block
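As a sanity check before touching the database, re-reading the properties should now show the new value (on a non-sparse zvol the refreservation shrinks along with volsize):

zfs get volsize,refreservation hdd1/virtual-machines/win11-intra-old.block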
Modifying the size value in the Incus DB:
incus admin sql global "\
select i.name as instance_name, d.name as device_name, c.* from instances i \
join instances_devices d on (i.id = d.instance_id and i.name='win11-intra-old' and d.name='root') \
join instances_devices_config c on (c.instance_device_id=d.id)"
+-----------------+-------------+------+--------------------+------+--------+
| INSTANCE NAME | DEVICE NAME | ID | INSTANCE DEVICE ID | KEY | VALUE |
+-----------------+-------------+------+--------------------+------+--------+
| win11-intra-old | root | 9024 | 2851 | path | / |
+-----------------+-------------+------+--------------------+------+--------+
| win11-intra-old | root | 9025 | 2851 | pool | hdd1 |
+-----------------+-------------+------+--------------------+------+--------+
| win11-intra-old | root | 9026 | 2851 | size | 110GiB |
+-----------------+-------------+------+--------------------+------+--------+
| win11-intra-old | root | 9023 | 2851 | type | disk |
+-----------------+-------------+------+--------------------+------+--------+
incus admin sql global "update instances_devices_config set value='65GiB' where id=9026";
Rows affected: 1
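Re-reading the row (assuming the same id as above) should now return 65GiB, confirming the DB matches the zvol:

incus admin sql global "SELECT value FROM instances_devices_config WHERE id=9026"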
incus list win11-intra-old -c nDb
+-----------------+------------+--------------+
| NAME | DISK USAGE | STORAGE POOL |
+-----------------+------------+--------------+
| win11-intra-old | 109.07GiB | hdd1 |
+-----------------+------------+--------------+
incus storage volume list -c nuU hdd1 name=win11-ecm4utest-old
+---------------------+---------+----------+
| NAME | USED BY | USAGE |
+---------------------+---------+----------+
| win11-ecm4utest-old | 1 | 81.65GiB |
+---------------------+---------+----------+
I already restarted incus.service to rule out any caching issues, but the disk usage is still the same.
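For reference, that was just a standard unit restart on a systemd host:

systemctl restart incus.service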
zfs only deals with power-of-two units (e.g. GiB) as far as I can tell. From the zfsprops(7) manpage:
The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.
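Following that, the 65GB in the zfs set above was already interpreted as a power-of-two value; per the manpage these two commands are equivalent:

zfs set volsize=65GB hdd1/virtual-machines/win11-intra-old.block
zfs set volsize=65G hdd1/virtual-machines/win11-intra-old.block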
What does zfs get all VOLUME show for win11-intra-old?
We don’t really cache anything, so I’d have expected that disk usage to come straight out of ZFS somehow.
Found it:
The volume still had snapshots created by snapshots.schedule while the volsize was enlarged; zfs list hdd1/virtual-machines/win11-intra-old.block still showed 107 GB.
Once I deleted these snapshots, everything was as expected.
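For anyone hitting the same thing, the leftover snapshots can be listed and then removed through Incus (the snapshot name snap0 is just an example; deleting via incus keeps its database in sync, unlike a raw zfs destroy):

zfs list -t snapshot hdd1/virtual-machines/win11-intra-old.block
incus snapshot delete win11-intra-old snap0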
While reading the ZFS docs I also realized that reducing a zvol has heavy consequences for restoring snapshots: a snapshot taken at the larger size must not be restored without manually increasing the volsize first.
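A minimal sketch of the safe order, assuming a pre-shrink snapshot named snap0 (hypothetical): grow the zvol back first, then restore, and remember to update the size in the Incus DB again to match:

zfs set volsize=110G hdd1/virtual-machines/win11-intra-old.block
incus snapshot restore win11-intra-old snap0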
To make a long story short:
zfs set volsize plus manually updating the Incus DB (see above) works as expected.
Any previously created snapshot should either be deleted or handled with caution: don’t restore it without first increasing the volsize.