Synergies between snaps and LXD containers

I see some similarities between snaps and LXD containers. Both package software so that it is somewhat self-contained and isolated from other software on the same host. Could there be some synergies between them?

One synergy I can think of is that if a snap is installed on an LXD host, it could be attached to any LXD container, like a volume or a disk device. The snap would be usable in the container as if it were installed there, but there would only be one physical copy of it on disk, and it would be downloaded and updated only on the host.

Yeah, it’s unfortunately not quite that easy.

Snaps get refreshed. The refresh schedule is influenced by what processes are currently running, so now the host system would also need to keep track of all usage of the snap inside the containers before proceeding with a refresh. It would also need some way to prevent the containers from starting the snap mid-refresh.

Snaps also need to generate specific apparmor and seccomp policies based on what’s made available to them. That’s system specific and depends on the OS and the interfaces connected. So you’d still need snapd running in all of the containers, and some interaction between the host snapd processing the refresh and the containers’ snapd instances that need to re-generate those security profiles.

What may make sense is to allow for /var/lib/snapd/snaps to be shared between host and containers, though there again, some collaboration between the different snapds would be needed as you don’t want any of them to delete a .snap that’s in use by one of the instances.
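For what it’s worth, the mechanical part of that sharing is already expressible with a plain disk device; the coordination between the different snapds is the hard part. A minimal sketch, with a made-up container name:

# share the snap cache of the host read-only into a container (c1 is a hypothetical name)
lxc config device add c1 snapcache disk source=/var/lib/snapd/snaps path=/var/lib/snapd/snaps readonly=true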

Overall, that’s likely just too much work and complexity to save a few hundred MBs.

Another synergy would be to launch a container from a snap, effectively using a snap as an lxd image.

So I could run a snap in its own container, using the LXD isolation mechanisms. That would be on top of whatever isolation snaps already use, which isn’t clear to me. If I want the snap to access host files, I would explicitly share those files with the container using LXD devices.

And yes, refresh causes problems, but I’m just exploring possibilities. Updated LXD images also cause problems. When you launch a container from an image and a new version of the image gets published, how do you incorporate the new image into the existing container? I typically rebuild the container with the new image, while keeping any other user/data files as separate devices.

I’m new to LXD so I’m still learning. Among other things, my previous setup had a small number of containers (50-70, depending on tasks) running across two VMs. Moving to LXD, I have run up against the potential negative impact of using ZFS for Docker hosts, so I’m considering my migration strategy.

@votsalo’s idea would be very interesting and would streamline my current experiments around creating Linux containers to see whether it is viable to natively run small groups of functions previously held in individual Docker containers. In effect, if you could build a container to run a Snap, you could make something somewhat analogous to Firecracker/Kata or the existing (not very numerous) Ubuntu appliances - better logical isolation than a Docker container, lighter than a full VM.

The Adguard appliance is an example use case - it’s a Snap and it’s an Appliance, so perhaps it would be tidier to merge the appliance bit with LXD?

On the topic of ZFS and Docker, you may want to keep an eye on the development of LXD’s ZFS block mode support, which would allow using a container (or custom filesystem volume) as a ZFS block volume with its own filesystem on top (such as ext4).

This would retain most of the benefits of ZFS (snapshots, migration, etc.) whilst avoiding the issues that Docker has with running overlay2 on top of ZFS (as the actual Docker volume’s filesystem would be ext4 on top of a ZFS block volume).
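For reference, the volume-level form being discussed would look roughly like this (a sketch, assuming an LXD version where the zfs.block_mode and block.filesystem volume keys are available; the pool, volume and container names are made up):

# create a ZFS pool, then a custom volume backed by a ZFS block volume formatted as ext4
lxc storage create tank zfs
lxc storage volume create tank docker-data zfs.block_mode=true block.filesystem=ext4
# attach it where Docker keeps its data inside the container
lxc config device add docker1 docker-data disk pool=tank source=docker-data path=/var/lib/docker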

Thanks, I was peripherally aware of this through concerted lurking for the past month or so. I guess the trouble is the timing and what form it takes (as mentioned in that thread, it would be ideal to define the ZFS block volume as part of a storage profile for a VM or similar). I am for the moment straddling two virtualisation platforms and I want to get the migration complete, and ideally not have to build a couple of VMs in one way with mitigations for all the Docker issues and then a second time using a ZFS block volume. Better to pull the bandage off in one go and move to an arguably better logical segregation model, was my thinking (rightly or wrongly) - no pain no gain.

It was during this process that I was looking at an Adguard container migration: I built a small Ubuntu container, installed snapd, and installed the Adguard snap on it. I was left not knowing the difference (from a user perspective) between that approach and deploying the Adguard Appliance. I feel there are potentially hundreds of Snaps that would make worthwhile appliances, and the difference between a container with a Snap and an Appliance seems pretty arbitrary and opaque. Finally, a grand total of six rather random Appliances with no clear strategy is unhelpful, and maybe this could be a way for Canonical to pull several threads together into something coherent and beneficial.

Yes, exactly, in the long term, these constructs could be unified.

For the present, perhaps you can already use LXD containers as application containers.

I do something similar by separating the OS from the application data. The OS is a container. The application data is one or more separate filesystems that I attach to the container. When I need to upgrade the OS, I delete the container, rebuild it, and reattach the data.

This way I can back up and restore my data independently of the container. I consider the containers expendable/replaceable and I don’t even back them up. I back up the zfs filesystems with the application data. When I need to migrate my containers to a new LXD host, I copy the data filesystems and rebuild the containers on the new host.
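Concretely, the attach step is just a disk device pointing at the host-side dataset (a sketch; the container name and dataset path are made up):

# attach an existing zfs dataset (mounted on the host) to the container
lxc config device add web1 appdata disk source=/tank/appdata/web1 path=/var/opt
# after deleting and rebuilding web1, running the same command reattaches the data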

I prefer using alpine containers for this, whenever possible, typically for website software, but not for databases. Isn’t alpine the preferred base of docker containers? Thanks to docker, alpine has up-to-date versions of many software packages.

I don’t do OS updates on alpine containers. When a new alpine version comes out, I delete my old alpine containers and rebuild them, starting with the latest alpine image. More precisely, I rebuild a template container with the packages and configuration I want for a particular type of application (e.g. nginx, or apache+php), I make a snapshot of this template container, and I then replace each website container by deleting it and cloning it from the template snapshot:
lxc copy {template}/{snapshot} {container}
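Spelled out with made-up names (and an illustrative image alias), that workflow is roughly:

# build and snapshot a template for one type of application
lxc launch images:alpine/3.19 tpl-nginx
lxc exec tpl-nginx -- apk add --no-cache nginx
lxc snapshot tpl-nginx base
# replace a website container by cloning the template snapshot
lxc delete -f website1
lxc copy tpl-nginx/base website1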

I do some additional things to bring the OS to the same state as before: I retain the host ssh keys and recreate users (typically the user I use to ssh into the container). I restore certain service initialization files, etc. I make sure the new container has the same local IP address as the old container. I do this by preserving the hwaddr of the old container and configuring it in the new container.
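The hwaddr part can be done with LXD’s volatile config key (a sketch; the container names, the NIC name eth0, and the MAC shown are placeholders):

# read the MAC address of the old container, then set it on the replacement before first boot
lxc config get website1-old volatile.eth0.hwaddr
lxc config set website1 volatile.eth0.hwaddr 00:16:3e:aa:bb:cc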

Software packages typically mix their files with user files, some more than others. I modify their configuration a bit to completely separate my application files from the software package files.

For example, the nginx alpine package expects users to put their nginx configuration files in /etc/nginx/http.d/.
I typically add a file there with this content:

include /etc/opt/nginx/*.conf;

I have arranged for /etc/opt/ to be an attached LXD device, separate from the container. It belongs to a zfs filesystem outside of LXD. I do the same with /var/opt, /opt, /usr/local/bin, and /home. These directories are typically empty or don’t exist in an OS image. I put my files there to keep them separate from the OS.
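As a concrete sketch (dataset and container names are made up; the dataset is created and mounted on the host, outside LXD):

# a zfs filesystem managed outside LXD, mounted on the host
zfs create -p -o mountpoint=/tank/etc-opt/website1 tank/etc-opt/website1
# attach it into the container at /etc/opt
lxc config device add website1 etc-opt disk source=/tank/etc-opt/website1 path=/etc/opt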

I also attach a second zfs filesystem to /var/log, so that I preserve the OS logs when I replace the container.

Going back to the nginx container, I put application files in /var/opt, configuration files in /etc/opt, and custom binaries in /usr/local/bin (or in a separate read-only filesystem that I attach to multiple containers). There are some OS files that I want to preserve (such as host ssh/rsa key files or custom service start-up files in /etc/init.d/). I make copies of these files in /etc/opt/etc/ or /etc/opt/copy/. When I rebuild the OS, I copy /etc/opt/etc/ to /etc/ or /etc/opt/copy/ to /.
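The restore step is just a copy back into the fresh container, something like this (run inside the container; paths as described above):

# restore preserved OS files from the attached /etc/opt device
cp -a /etc/opt/etc/. /etc/
cp -a /etc/opt/copy/. /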

I have done this with debian containers too, but not as much. The steps to do this are different between the two OSes.

The procedure that I described requires a lot of automation on top of LXD. For each container template, I have a template configuration file that specifies how to build it from scratch, or from another template, including creating and attaching disk devices, installing packages and files, running scripts in the container, creating users, etc. I also have an instance configuration file (in the same format as the template configuration file) with instructions for how to create a container from the template. I use this second configuration file to create multiple application containers (instances) from the template.

Here’s a previous post that I wrote for this: