snap list:
lxd 5.11-ad0b61e 24483 latest/stable canonical✓ -
A VM from images:ubuntu/jammy was initiated through the API, without the wait option.
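For context, the creation can also be made synchronous by blocking on the background operation. A rough equivalent with `lxc query --wait` (instance name and project are taken from this report; the image server URL and exact source fields are assumptions, not necessarily what my client sent):

```shell
# POST the instance creation and block until the background operation
# (image download + volume creation) finishes, instead of returning
# immediately. The request body is illustrative.
lxc query --wait -X POST -d '{
  "name": "jam3",
  "type": "virtual-machine",
  "source": {
    "type": "image",
    "protocol": "simplestreams",
    "server": "https://images.linuxcontainers.org",
    "alias": "ubuntu/jammy"
  }
}' "/1.0/instances?project=101"
```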
The image is pulled and the instance appears:
lxc ls
| jam3 | STOPPED | | | VIRTUAL-MACHINE | 0
zfs list
zp3/pl2/virtual-machines/101_jam3 6.97M 93.0M 6.97M legacy
zp3/pl2/virtual-machines/101_jam3.block 10M 6.04T 10M -
After starting the VM, it disappears from lxc list.
LXD log:
time="2023-03-20T16:43:56+01:00" level=warning msg="Error getting disk usage" err="Failed to run: zfs get -H -p -o value used zp3/pl2/virtual-machines/101_jam3.block: exit status 1 (cannot open 'zp3/pl2/virtual-machines/101_jam3.block': dataset does not exist)" instance=jam3 instanceType=virtual-machine project=101
time="2023-03-20T16:48:55+01:00" level=error msg="Failed to advertise vsock address" err="Failed sending VM sock address to lxd-agent: Failed to fetch https://custom.socket/1.0: 401 Unauthorized" instance=jam3 instanceType=virtual-machine project=101
time="2023-03-20T16:53:46+01:00" level=warning msg="Failed getting instance metrics" err="dial unix /var/snap/lxd/common/lxd/logs/101_jam3/qemu.monitor: connect: no such file or directory" instance=jam3 project=101
It seems LXD runs the zfs get query too early: after a while, the same query run manually returns the correct final size of the block volume:
zfs get -H -p -o value used zp3/pl2/virtual-machines/101_jam3.block
427563008
But by then the instance has already vanished from lxc list.
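If this is indeed a race, the obvious mitigation on the caller's side is to retry the query while the dataset is still reported as missing. A minimal sketch, not LXD's actual code — the runner parameter exists only so the transient failure can be simulated without a real pool:

```python
import subprocess
import time

def zfs_used_with_retry(dataset, attempts=5, delay=0.5, runner=None):
    """Return 'used' bytes for a ZFS dataset, retrying while it is
    reported as nonexistent (it may still be being created)."""
    if runner is None:
        # Real invocation: the same query that appears in the LXD log above.
        runner = lambda ds: subprocess.run(
            ["zfs", "get", "-H", "-p", "-o", "value", "used", ds],
            capture_output=True, text=True)
    for i in range(attempts):
        res = runner(dataset)
        if res.returncode == 0:
            return int(res.stdout.strip())
        if "dataset does not exist" not in res.stderr:
            raise RuntimeError(res.stderr.strip())  # a real error: give up
        time.sleep(delay * (i + 1))                 # linear back-off, then retry
    raise TimeoutError(f"{dataset} did not appear after {attempts} attempts")
```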
Incidentally, those ZFS datasets persist as ghosts and can't be removed:
sudo zfs destroy zp3/pl2/virtual-machines/101_jam3.block
cannot destroy 'zp3/pl2/virtual-machines/101_jam3.block': dataset is busy
sudo zfs destroy zp3/pl2/virtual-machines/101_jam2
cannot destroy 'zp3/pl2/virtual-machines/101_jam2': dataset is busy
A lot of these ghost datasets have accumulated, and they can apparently only be cleaned up once the server is powered off and the datasets are tossed away.
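Before resorting to a reboot, it might be worth checking what keeps the datasets busy. None of this is LXD-specific, just a generic ZFS/Linux checklist (paths copied from this report):

```shell
# "dataset is busy" on a zvol usually means some process still has the
# device open or the dataset is mounted somewhere.
grep 101_jam3 /proc/mounts    # still mounted somewhere?
pgrep -af 101_jam3            # leftover QEMU process?
sudo fuser -v /dev/zvol/zp3/pl2/virtual-machines/101_jam3.block  # who holds the zvol open?
```

If a stray qemu process shows up, killing it should let zfs destroy succeed without a reboot.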