Strategy for dealing with deleted images on zfs

I’m a new incus user, and I really like it! I run it on a Debian 13 server with zfs.

I have a growing number of deleted images:

zroot/homelab/deleted/images/8a5bcc49c3c5cd9d657fc1adae7c5c0f0be18e400ed4cea8bb16727aced05fa5         246M   162G   246M  legacy
zroot/homelab/deleted/images/a6e41121e5d0b4978c6754f8206d364eb65e8a137636e2363706fcdb3795a706         224K   500M   216K  legacy
zroot/homelab/deleted/images/a6e41121e5d0b4978c6754f8206d364eb65e8a137636e2363706fcdb3795a706.block  85.8M   162G  85.7M  -
zroot/homelab/deleted/images/b9afaebf358bcc7d848e6bf1e734378d0601ebc6dae3d5defa92cb8fb9c57082         246M   162G   246M  legacy
zroot/homelab/deleted/images/c1c1dffd8c3408a5f86142f4289eb5c3a0b98e3df9c5f475486de66ddcbd8110         246M   162G   246M  legacy
zroot/homelab/deleted/images/ccb077357ac522b662162814f7444df2a55723c86766223122eaa2f8d0d4e044         224K   500M   216K  legacy
zroot/homelab/deleted/images/ccb077357ac522b662162814f7444df2a55723c86766223122eaa2f8d0d4e044.block  86.4M   162G  86.4M  -

Now, it’s not a cause for immediate concern yet, but my pool is a zfs mirror on top of two 250GB disks, so space is not infinite. I’ve read through this thread from 2018, and I mostly understand why this happens, but I’m wondering what the long-term solution is here.

Do I really need to export and re-import containers to be able to delete these? How are others dealing with this?

It would be nice to have some ‘canonical’ way to deal with this.

It’s just an internal detail and not something you should do anything about.

ZFS uses copy-on-write, so we import an image, then clone it into your new container or VM, and from that point only the changes you make are recorded as part of the container or VM dataset.

This means that having 50 containers or VMs created from the same image costs you the space of the shared blocks from the original image file only once, rather than 50 times.

When you then delete the image, it must still be kept around on disk, as the containers and VMs you created from it still exist and so still rely on blocks from the image. Once the last container or VM created from the image has been deleted, Incus will automatically delete the image too.
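If you’re curious which instances are still pinning a deleted image, ZFS records the clone relationship in each dataset’s `origin` property. A minimal sketch of how to spot them, using made-up sample output in place of a live pool (dataset and snapshot names are hypothetical; on a real host you’d run `zfs list -H -r -o name,origin zroot/homelab` directly):

```shell
# Sample output standing in for `zfs list -H -r -o name,origin zroot/homelab`.
# A clone shows its source image snapshot as origin; a non-clone shows "-".
sample='zroot/homelab/containers/web zroot/homelab/deleted/images/8a5bcc49@readonly
zroot/homelab/containers/scratch -'

# Print datasets whose origin still points into deleted/images,
# i.e. the instances that keep a deleted image dataset alive:
printf '%s\n' "$sample" | awk '$2 ~ /deleted\/images/ {print $1}'
# → zroot/homelab/containers/web
```

Once no dataset lists the image snapshot as its origin, the deleted image can go away.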

You can indeed get rid of the image by doing an export/import, but all you’re doing there is making your container or VM bigger, since its dataset then owns all of its blocks rather than storing only the delta from the image. So at best you end up with equivalent usage; at worst (if you have multiple instances created from the same image), you end up with worse disk usage due to duplication.
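For completeness, the export/re-import round trip looks roughly like this (instance and file names here are made up, and as noted above it flattens the dataset, so it’s usually not worth doing):

```shell
# Stop and export the instance to a tarball:
incus stop web
incus export web web-backup.tar.gz

# Delete the original instance; once the last clone of an image is gone,
# Incus removes the corresponding deleted image dataset automatically.
incus delete web

# Re-import; the restored instance's dataset has no clone origin, so it
# owns all of its blocks instead of sharing them with the image.
incus import web-backup.tar.gz
```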

Thank you for the explanation! At the same time, what happens if I create, say, a build container with 20-ish gigabytes of data, and then delete it for some reason? Wouldn’t those used blocks remain in deleted images? If so, there could be a point where the total disk space used by whatever containers are still running, even if they owned their own blocks, is significantly less than what the deleted images occupy. I hope I don’t come across as difficult; I’m just trying to plan ahead so that I don’t make stupid choices.

EDIT: Oh, maybe I’m overthinking this. Maybe those additional changed blocks in a deleted container are actually gone?

Yeah, if you create a container from an image and then write 20GiB, those 20GiB belong to the container’s dataset, not to the parent image. When you delete the container, they’re gone.
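If you want to verify this accounting on your own pool, ZFS’s space breakdown properties make it visible (the dataset path below is just an example):

```shell
# usedbydataset counts blocks the container itself owns (e.g. your 20GiB
# of writes); blocks still shared with the image are charged to the image
# dataset instead, and are freed once the last clone is deleted.
zfs list -o name,used,usedbydataset,origin zroot/homelab/containers/build
```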