NAME USED AVAIL REFER MOUNTPOINT
storage-pool 2.08T 623G 39.3K /storage-pool
storage-pool/k8s 39.3K 623G 39.3K /storage-pool/k8s
storage-pool/lxd 384G 623G 39.3K /mnt/tmp/
I don't have a quota, refquota, reservation or refreservation set on any ZFS (sub)volume either. The VM's configured root disk size is 30GB, and the backup itself is 2.2GB.
System details
uname -a:
Linux artemis 5.4.0-89-generic #100-Ubuntu SMP Fri Sep 24 14:50:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
zfs -V:
zfs-0.8.3-1ubuntu12.9
zfs-kmod-0.8.3-1ubuntu12.12
lxc --version:
4.20
The original export was created with lxc export on a btrfs filesystem, which wasn't full either, nor does it use quotas. lxc copy fails with the same error (this is what I initially tried).
Hi @dutchy76. Did you check the disk space inside the LXD namespace? sudo nsenter -t $(cat /var/snap/lxd/common/lxd.pid) -m
Then you can see the space used on the storage backend with df -h.
Note that this path assumes the snap package; if you're not using snap, the path to the PID file will be different.
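The two steps can also be combined into a single one-shot command; this is a sketch assuming the snap PID file path mentioned above:

```shell
# Enter LXD's mount namespace just for the duration of df -h
# (path to lxd.pid assumes the snap package)
sudo nsenter -t "$(cat /var/snap/lxd/common/lxd.pid)" -m -- df -h
```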
After entering the LXD namespace and checking the storage-pool filesystem with df -h, it shows 1% usage with 624GB of free space left.
The failure would likely have caused the dataset to be deleted, so there isn't much left to look at.
I’d recommend posting zfs list -t all as well as running the command during the lxc copy or lxc import to get a better idea of what’s running out of space.
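One way to do that is to start the lxc copy or lxc import in one terminal and poll dataset usage in another; a minimal sketch, using the pool name from the listing above:

```shell
# Poll all datasets (including snapshots and volumes) once a second
# while the import/copy runs, to spot which one runs out of space:
watch -n1 'zfs list -t all -o name,used,avail,refer,quota,refquota -r storage-pool'
```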
I've given that a go. What seems to be happening is that the VM gets allocated a block of storage which, for some reason, doesn't grow past 95.5 MB, eventually leaving the VM's allocated storage with 0B free.
I've tried manually creating the subvolume, but that isn't possible because LXD tries to create it as well. I've also tried to reserve space using zfs set reservation=5G <subvolume>, but this fails with size is greater than available space, which is odd to me.
So the issue seems to be on the metadata volume. After removing its quota, the import seems to go through fine.
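For reference, clearing the quota looked roughly like this; the dataset name below is hypothetical, so substitute the metadata dataset shown by zfs list:

```shell
# Hypothetical dataset name - check what is actually set first:
zfs get quota,refquota storage-pool/lxd/virtual-machines/myvm
# Then clear the limit:
zfs set quota=none storage-pool/lxd/virtual-machines/myvm
```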
However, the metadata volume seemed oddly large, coming in at just over 300MB, whereas my other VMs didn't go much bigger than 10MB. After mounting that dataset, the largest file is the state file; the VM has stateful migration enabled.
No, the VM was not running. The config did have migration.stateful: true at shutdown though, which I have since removed, resulting in a successful export and re-import (lxc copy succeeds too).
This is the config as it is now, the only difference being that migration.stateful has been removed:
OK, so it looks like at some point you've performed a stateful stop and then disabled migration.stateful, which has left the stateful state file in place. This file was then included in the export, but there isn't enough room to recreate it when importing because the root disk device doesn't have size.state set to a large enough value.
My suggestion would be:
1. Start the source VM again; with migration.stateful off, this should remove the state file.
2. Export the VM again.
3. Importing it, this time without the state file, should work.
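Those steps can be sketched as shell commands; the VM and backup file names here are hypothetical:

```shell
# Make sure stateful migration is off (skip if already removed from the config):
lxc config unset myvm migration.stateful
# Start the VM; with migration.stateful off, the stale state file is discarded:
lxc start myvm
# Export it again, then the import should succeed:
lxc stop myvm
lxc export myvm myvm-backup.tar.gz
lxc import myvm-backup.tar.gz
```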
@stgraber should LXD perhaps remove the state file if migration.stateful is disabled?