OK so I have recreated this, and it occurs even if the container that was exported has its own disk device in its config (i.e. not from the profile).
The issue is that container export files have to be unpacked (into their target storage volume) before we can access the backup.yaml
file that contains the config of the container (e.g. the disk device size
property or which profiles it belongs to).
So this becomes a chicken-and-egg situation where we need to create the volume before we know what size it should be. It gets more complicated still with “optimized” exports, where the container’s filesystem is stored as a binary blob that is specific to the storage driver.
The situation is better for VM imports: we store the VM’s disk as an image file in the tarball, so when we get to unpacking it we can compare its size directly against the target volume and enlarge the volume if needed (which we do).
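Conceptually that check is just something like this (a rough sketch, not the actual code; `growVolume` is a hypothetical stand-in for the storage driver call):

```go
package sketch

import "os"

// ensureVolumeFits compares the exported VM disk image's size against the
// target volume and grows the volume if it is too small. Sketch only.
func ensureVolumeFits(imgPath string, volSizeBytes int64, growVolume func(newSize int64) error) error {
	fi, err := os.Stat(imgPath)
	if err != nil {
		return err
	}

	if fi.Size() > volSizeBytes {
		// Hypothetical driver call to grow the target volume.
		return growVolume(fi.Size())
	}

	return nil
}
```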
With standard filesystem imports we would, in theory, need to read through the entire tarball adding up all the file sizes from the metadata, resize the volume, and then read through the entire tarball again to actually extract the files onto the volume.
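Something along these lines (a very rough sketch, assuming an uncompressed tarball sitting on disk; `resizeVolume` and `extractEntry` are hypothetical placeholders for the driver and extraction logic):

```go
package sketch

import (
	"archive/tar"
	"io"
	"os"
)

// importFilesystem illustrates the two-pass idea: sum sizes from the tar
// headers, resize the volume, then read the tarball again to extract.
func importFilesystem(tarPath string, resizeVolume func(totalBytes int64) error, extractEntry func(hdr *tar.Header, r io.Reader) error) error {
	// Pass 1: walk the tar headers only, adding up the file sizes.
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}

	var total int64
	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			f.Close()
			return err
		}
		total += hdr.Size
	}
	f.Close()

	// Resize the target volume based on the summed file sizes.
	if err := resizeVolume(total); err != nil {
		return err
	}

	// Pass 2: read the whole tarball again and actually extract the files.
	f, err = os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	tr = tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		if err := extractEntry(hdr, tr); err != nil {
			return err
		}
	}

	return nil
}
```

On top of the doubled I/O, this only really works when the tarball is a seekable file rather than something we can only stream through once, and the summed header sizes still don't account for filesystem overhead.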
The alternative would be to start storing something in the top-level index.yaml file (which is stored first in the tarball and so is quick to access), so that we can read just that file and create the volume at the correct size.
There is already some provision for this: the index.yaml structure has a config field that could be used to store an embedded copy of the instance's backup.yaml file, but it is currently only used to store info about custom volume exports.
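To illustrate the idea only (this is not the current schema; the struct and the contents of `config` here are assumptions): because index.yaml is the first entry in the tarball, a single read of that entry could be enough to size the volume up front.

```go
package sketch

import (
	"archive/tar"
	"fmt"
	"io"
	"path"

	"gopkg.in/yaml.v2"
)

// indexFile is an illustrative stand-in for the index.yaml structure; the
// config field is assumed to carry enough instance config (e.g. root disk
// size) to create the volume before extracting anything.
type indexFile struct {
	Name   string            `yaml:"name"`
	Config map[string]string `yaml:"config"`
}

// readIndex reads only the first tar entry (index.yaml) from the backup stream.
func readIndex(r io.Reader) (*indexFile, error) {
	tr := tar.NewReader(r)

	hdr, err := tr.Next()
	if err != nil {
		return nil, err
	}

	if path.Base(hdr.Name) != "index.yaml" {
		return nil, fmt.Errorf("expected index.yaml as first entry, got %q", hdr.Name)
	}

	data, err := io.ReadAll(tr)
	if err != nil {
		return nil, err
	}

	idx := &indexFile{}
	if err := yaml.Unmarshal(data, idx); err != nil {
		return nil, err
	}

	return idx, nil
}
```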
@stgraber could this be something for the roadmap perhaps?