When I pull in images that have a smaller base size (e.g. the nginx OCI image is ~60MB), the thin volume for the image is still provisioned at the full volume.size. I realize that since it’s thin provisioned the excess disk space isn’t truly reserved, but exceeding the volume group size with requests eventually leads to warnings.
It’s currently possible to change the volume.size value right before importing an image to bring it closer to the right size, but that still requires a lot of guesswork. Is there a simple solution to this that I’m not seeing?
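For reference, the workaround today looks roughly like this (the pool name, sizes, and the OCI remote are just examples from my setup, nothing special):

```
# Temporarily shrink the pool default before importing a small image
incus storage set default volume.size 256MiB

# Import/launch the image ("docker" here is a remote I added with --protocol=oci)
incus launch docker:nginx web

# Then put the default back for everything else
incus storage set default volume.size 10GiB
```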
Would it be absurd to have a volume.size: auto setting that would decompress to memory (or an ephemeral volume), then create and copy to an LVM volume from there? I wouldn’t mind looking into creating a PR for that if it doesn’t sound too crazy.
We definitely don’t want to unpack in memory as we treat images as generally untrusted and the last thing we want is to have a zip-bomb use all the system memory.
I’m also not sure exactly why having an image take more space than it needs is a problem. Normally you want volume.size to be set to the smallest instance size you’re likely to be using on the system.
That then allows for instant (no resize) cloning of images for any instance of that size; for anything that needs more space, the filesystem or partition table gets grown to the required size.
If the image is made as small as possible, it will actually slow down instance creation, as every instance being created will need the extra step of growing the image to its expected size. That resize operation, depending on the filesystem, can also cause some block relocation, ending up using more space on disk than if the image had been the expected size to begin with.
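Roughly, that pattern looks like this (pool name, image, and sizes are just examples):

```
# Set the pool default to the smallest root disk you expect to use
incus storage set default volume.size 4GiB

# Instances of that size clone the image with no resize step
incus launch images:debian/12 small-ct

# Anything bigger just overrides the root disk size at creation time
incus launch images:debian/12 big-vm --vm -d root,size=20GiB
```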
Those are good points, thanks for the insight! Realistically I won’t have that many images anyway, so LVM requests exceeding the available storage is almost certainly just theoretical.
I’ll just set it to a reasonably small size, then bump it up occasionally when importing large VM images.
@stgraber Sorry to necro this topic, but I think I understand incus a bit more and was curious if you had any suggestions for deployment patterns in lieu of a “right size” option.
My default approach to container deployments is to use additional volume mounts for anything stateful, so my image disk + root disk for the instance can pretty much always be as small as possible. When creating many smaller container instances (e.g. nginx, cloudflared, dragonflydb), I typically have the default volume size set to 1GiB. That is already overkill for those containers but still not an egregious root disk, so I don’t mind. If I then go to create an instance based on a much larger image (e.g. ollama needs 10GiB), I can’t unless I manually edit the volume.size in the pool first.
Copying the image is a non-issue, since that stores the rootfs inside the storage.images_volume volume. Setting the instance root disk size large enough also isn’t a problem. The main friction for me comes from when incus creates the base image volume for the rootfs. When creating a fresh instance without that volume existing yet, it will attempt to create the volume based on the pool’s volume.size instead of the instance’s devices.root.size. Logically that makes sense, but setting my volume.size to the largest image size I might ever use seems wrong.
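To make that concrete, this is roughly what I end up doing today (pool/profile/remote names and sizes are just examples from my setup):

```
# Small root disks by default, since state lives on separate volumes
incus profile device set default root size=1GiB

# For a much larger image, the pool default has to be bumped first so the
# base image volume can be created, even though the root disk is overridden
incus storage set default volume.size 10GiB
incus launch docker:ollama/ollama ollama -d root,size=10GiB
```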
Are there any methods that would allow one to create that initial base image volume ahead of creating an instance that leverages it? If not, would that make sense to expose to the end user? Perhaps said method could include a --size option to let it be overridden from the pool’s default.
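Purely as a hypothetical sketch of what I mean (this command doesn’t exist today, the subcommand name and flags are made up):

```
# Hypothetical: pre-create the base image volume on a given pool at an
# explicit size, so later instance creation just clones it
incus image prepare <fingerprint> --storage my-pool --size 12GiB
```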