Conceptual question about container and image updates

Hi there,

This is a conceptual question because I am not sure that I have understood this correctly.

So if I keep my own image server and set the local copies of images to auto-update, I can verify that the local copies of the images are indeed updated to the latest version.
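For context, this is roughly how I set up the auto-updating local copies (the remote name “myserver” and the alias are just placeholders from my setup):

# copy the image from my image server and keep the local copy refreshed automatically
lxc image copy myserver:ubuntu-18.04 local: --alias ubuntu-18.04 --auto-update

# check when the local copy was last refreshed
lxc image info ubuntu-18.04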

However, I thought that, since the containers’ file system is mainly hardlinks to the image files, the containers would also be updated when the image is updated.

When I first understood this image-updating concept, I thought it would be a great use case: keep many containers linked to a single image, update/upgrade the image OS, and all the containers would be updated too.

But I have done a couple of tests, and yes, the images are updated, but somehow the links from the containers get lost, so they become independent.
That means that if I have 20 containers running Ubuntu 18.04, I would have to run apt-get upgrade in each one of them?

Any clarification/confirmation would be great. Maybe I am doing something wrong? I am using aliases to reference the images.

Many thanks, best

Hi Javi,

I’m not sure where you got the information that “the containers’ file system is mainly hardlinks to the image files”. To my knowledge that is not the case. You might have copy-on-write behavior in a container’s file system if the container is backed by a storage pool that supports cloning (such as zfs or btrfs). However, the very moment you create a container you’re “forking” the file system of the image you use as a base, effectively making the container’s file system and the image two completely independent things.
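As a side note, whether you get copy-on-write at all depends on the driver of the storage pool backing your containers. You can check that from the host, for example (the pool name “default” is just the usual default):

lxc storage list
lxc storage show default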

So yes, if you have 20 containers created from some image based on 18.04, you’ll have to apt-get upgrade them separately. You might consider using Landscape or the unattended-upgrades package for that use case.
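If you just want a quick way to do it from the host, a rough sketch (untested, assumes every container should be upgraded and runs a Debian/Ubuntu-based image):

# iterate over all container names and run the upgrade in each of them
for c in $(lxc list --format csv -c n); do
    lxc exec "$c" -- sh -c "apt-get update && apt-get upgrade -y"
done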

Hope that helps.

Hi @freeekanayaka,

thank you very much for answering.

I got the information about the container files being hardlinked from inspecting the actual running system.

If you list the inodes inside a container, you can see that the files are hardlinks to the same inodes as in the originating image.

ls -i /var/lib/lxd/containers/containername/rootfs/etc
ls -i /var/lib/lxd/storage-pools/default/images/e00d3c5b876febac12ffc272a.../rootfs/etc

Once you modify a file inside the container, the link is broken and the container gets its own inode for that file.
This is why the disk space used by LXD containers is not “real”: they only need as much actual disk space as the differences they have with the base image.
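For example, this is roughly how I can see a container getting its own inode once a file is replaced inside it (same placeholder container name as above; package upgrades replace files in a similar write-then-rename fashion):

# the path initially points at the inode shared with the image
ls -i /var/lib/lxd/containers/containername/rootfs/etc/hosts

# replace the file inside the container
lxc exec containername -- sh -c "cp /etc/hosts /etc/hosts.new && mv /etc/hosts.new /etc/hosts"

# the same path now shows a different inode, owned by the container alone
ls -i /var/lib/lxd/containers/containername/rootfs/etc/hosts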

I thought that this mechanism would allow updating the image and then getting the containers updated as well, which would be very convenient to avoid having to upgrade containers individually.

If you think about how Docker works (you update the image but keep the data), it would be a great feature to have.
On the other hand, if you have to upgrade each container individually, it is a de facto waste of space and resources.
Each one of them has to download the packages, unpack them, run the upgrade procedure, and so on.
So you start with a very thin image that takes up only about 100 MB of actual disk space, and just because of updates it easily grows to 1 GB.

Again, thank you for your help.