Incus image list columns

Add flag --all-projects
incus image list --all-projects

When features.images: "true" is set for projects, you need to repeat for each one:
incus image list --project x/y/z

An --all-projects flag would help with getting an overview of all images in one run.
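Until such a flag exists, a rough workaround is to loop over the projects manually. This is only a sketch: it assumes `incus project list --format csv` prints the project name in the first column, possibly suffixed with " (current)" for the active project.

```shell
#!/bin/sh
# Workaround sketch: list images for every project in turn.
# Assumes the first CSV column of `incus project list` is the project
# name, with a possible " (current)" suffix on the active project.
incus project list --format csv | cut -d, -f1 | sed 's/ (current)//' |
while read -r project; do
    echo "== project: $project =="
    incus image list --project "$project"
done
```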

Add an image list column 'Used by'
It would help find zombie (unused) images for cleanup.
Right now I have to list containers including the Base Image column, then reverse-compare that with image list --project x, y, z …

I've filed a feature request for the first part (--all-projects).

The second part, however, doesn't really make sense. Other resources can actually be used by something, that is, they cannot be deleted without the thing using them also being deleted; that's not the case with images.

Images can always be deleted, so the used-by field doesn't currently exist on them, nor would it make sense as things stand.

As you mentioned, we do have a volatile config key to keep track of what image fingerprint was used to create a particular instance. This allows for slightly faster instance copies between servers in some limited scenarios (mostly rsync with dir backend and containers), but if the target server doesn't happen to already have the image in question, it won't prevent the migration.
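For reference, that volatile key can be used to approximate the missing Used-by view from the client side. This is a sketch, not an official feature: it assumes the default project (repeat with --project for others) and that volatile.base_image holds the full fingerprint, of which `incus image list` shows a prefix.

```shell
#!/bin/sh
# Sketch: report image fingerprints not referenced by any instance's
# volatile.base_image key (default project only; adjust --project as
# needed). Short fingerprints from `incus image list` are prefixes of
# the full ones stored in volatile.base_image, hence the ^prefix match.
used=$(incus list --format csv -c n | while read -r name; do
    incus config get "$name" volatile.base_image
done | sort -u)

incus image list --format csv -c f | while read -r fp; do
    printf '%s\n' "$used" | grep -q "^$fp" || echo "possibly unused: $fp"
done
```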

Nice. It would give a better overview in one list.

I am aware of that, but correct me if I am wrong.

When deleting an image, if it is still used as the backing for an instance (Base Image), it will be moved from zpool/images to zpool/deleted/images.

Though if that image was just a ghost hanging around using space, it will completely vanish and free up the ZFS space. Is that right?

When images come from fresh daily builds, it is not necessary to keep the old ones around.
Over time, the storage occupancy becomes significant.

I would appreciate a suggestion on how to free up / delete old unused images occasionally.

That’s an artifact of the storage driver being used.
The behavior you’re describing is what happens with ZFS and Ceph, not with any of the others.

For both ZFS and Ceph, this is effectively us having to do reference tracking for a little while, but as soon as the reference count hits 0, the image is fully deleted. When the image is moved to deleted/zombie, it's also just the ZFS/Ceph side being kept around; the image files themselves (/var/lib/incus/images/) are gone.
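On the ZFS side, those zombie datasets can be inspected directly to see how much space they still pin. The pool name "zpool" below is carried over from the earlier example; the exact dataset path depends on how the storage pool was configured, so treat this as a config-inspection sketch rather than an exact recipe.

```shell
# Inspect image datasets kept around for reference tracking
# (pool name "zpool" is an assumption from the earlier post; the
# "origin" column shows which clones still depend on each image).
zfs list -r -o name,used,origin zpool/deleted/images
```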

It's also worth noting that configuration options exist on the storage pool for those drivers to use an alternative creation method (send/receive instead of clone), which doesn't result in the original dataset having to be kept around until the last descendant is gone.
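For the ZFS driver, my reading of the incus storage documentation is that this is the zfs.clone_copy pool key; the pool name "default" below is an assumption, so substitute your own. This is a config fragment, not something to apply blindly.

```shell
# Use zfs send/receive instead of clones when creating instances from
# images, so image datasets need not be kept as clone origins.
# (Pool name "default" is an assumption; replace with your pool.)
incus storage set default zfs.clone_copy=false
```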

Thanks, I will try that.
ZFS send/receive is the default for remote nodes, but I am afraid that on the same host (zpool), send/receive might be much slower than cloning.

Yeah, it's definitely a fair bit slower and wastes a lot of space; that's why it's not on by default :slight_smile: