I have been separating user data (my data) from guest OS data in LXD containers, so that I can upgrade a container by removing the old OS and plugging in a new one.
I put user data in disk devices that I attach to the container, at the following standard paths, and I don’t put user data anywhere else:
/opt, /etc/opt, /var/opt, /usr/local/bin, /home, /var/log
In some cases, I add extra devices. For mariadb containers, I add /var/lib/mysql and /var/lib/mysql-log (which I use for innodb_log_group_home_dir).
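As a rough illustration, a single disk device maps a host directory onto one of those paths (the container name c1 and the host paths under /tank/devices are placeholders; in the workflow below the devices live in a per-container profile rather than in the container config):

    lxc config device add c1 opt disk source=/tank/devices/c1/opt path=/opt
    # extra devices for a mariadb container follow the same pattern:
    lxc config device add c1 mysql disk source=/tank/devices/c1/mysql path=/var/lib/mysql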
I have a handful of template containers, built with specific packages and other configuration. The template containers are stopped and snapshotted after they are created.
When I need a working container $c, I make it as follows:
- I copy a template container (which just makes a zfs clone)
- I create a new zfs filesystem (or several) for the attached devices, with a subdirectory for each disk device (opt, etc, var, bin, home, log)
- I create a $c.devices profile and I attach the disk devices to it.
- I either rsync or zfs clone each disk device from the template container (or from another “device template”) to the working container
- If I don’t use a template for the devices, I chown them to 1000000:1000000, which maps to the root user in the container, so the container can write to them.
- I attach this profile to the working container, along with any other profiles that I need (generally to attach other disk devices with application directories)
- I start the container
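Put together, the creation steps look roughly like the sketch below. The pool name (tank), the dataset layout, the template name (tpl-base) and the container name (c1) are placeholders for illustration:

    # clone the template (an lxc copy of a snapshot is a zfs clone on a zfs pool)
    lxc copy tpl-base/base c1

    # zfs filesystem for the attached devices, one subdirectory per disk device
    zfs create tank/devices/c1
    mkdir /tank/devices/c1/{opt,etc,var,bin,home,log}
    # ...or clone a "device template" instead:
    # zfs clone tank/devices/tpl-base@base tank/devices/c1

    # profile holding the disk devices
    lxc profile create c1.devices
    lxc profile device add c1.devices opt  disk source=/tank/devices/c1/opt  path=/opt
    lxc profile device add c1.devices etc  disk source=/tank/devices/c1/etc  path=/etc/opt
    lxc profile device add c1.devices var  disk source=/tank/devices/c1/var  path=/var/opt
    lxc profile device add c1.devices bin  disk source=/tank/devices/c1/bin  path=/usr/local/bin
    lxc profile device add c1.devices home disk source=/tank/devices/c1/home path=/home
    lxc profile device add c1.devices log  disk source=/tank/devices/c1/log  path=/var/log

    # without a device template, make the directories writable by the container's root
    chown -R 1000000:1000000 /tank/devices/c1

    # attach the profile (plus any application profiles) and start
    lxc profile add c1 c1.devices
    lxc start c1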
When I create a template container, I follow similar steps, but there is typically a lot more configuration (installing packages, running scripts, …).
A template container can be created by cloning another template container, so I have a hierarchy of templates, out of which I clone my working containers.
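For example, a template built on top of another template follows the same pattern, with the extra configuration in between (the names and packages here are examples, and a Debian-based image is assumed):

    lxc copy tpl-base/base tpl-web
    lxc start tpl-web
    lxc exec tpl-web -- sh -c 'apt-get update && apt-get install -y nginx'
    # ...more packages, pushed files, scripts...
    lxc stop tpl-web
    lxc snapshot tpl-web base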
When I need to upgrade the OS of a working container, I rebuild the working container by deleting it and creating a new container to take its place with the same user data:
- I first upgrade the template container, or rebuild it from scratch
Then, for each working container:
- I stop and delete the working container (but not the disk devices in the profile $c.devices)
- I clone the template container (copy a snapshot)
- I attach the same profiles as before to the working container, including $c.devices
- I start the container
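In command form, again with placeholder names, a rebuild is roughly:

    lxc stop c1
    lxc delete c1                   # the device filesystems under /tank/devices/c1 stay
    lxc copy tpl-base/base c1       # clone the upgraded template snapshot
    lxc profile add c1 c1.devices   # plus any other profiles c1 had before
    lxc start c1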
There are some other details:
- Because I replace /etc completely, I keep the changed files in /etc/opt/etc/ (for example, /etc/opt/etc/php7/php.ini)
When I recreate the container:
- I rsync /etc/opt/etc/ /etc/
- I replace the new sshd host keys with the old ones, so ssh clients connecting to the replaced container don’t complain about a changed host key.
- I recreate any users that are not in the template container
- I generally avoid making changes to /etc. For example, I keep nginx and apache2 conf files in /etc/opt/apache2/ and /etc/opt/nginx/, and configure the web servers to include the configuration files from there.
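A sketch of that post-rebuild fix-up, assuming rsync is available in the container and the old sshd host keys were saved under /etc/opt/etc/ssh (which survives the rebuild because /etc/opt is one of the attached devices):

    # copy the kept /etc overrides back into the fresh /etc (including the old ssh host keys here)
    lxc exec c1 -- rsync -a /etc/opt/etc/ /etc/
    lxc exec c1 -- systemctl restart ssh     # the service may be named sshd on other distributions
    # recreate users that are not in the template (name and uid are examples)
    lxc exec c1 -- useradd -m -u 1001 appuser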
When I replace a template container, I first rename it, so I can tell which working containers still use the old template (using zfs list -t snapshot -o name,clones on the LXD containers filesystem). Once I see that the old template no longer has any clones, I delete it.
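For example (with placeholder names; the actual dataset path depends on the LXD storage pool configuration):

    lxc move tpl-base tpl-base-old      # rename; template containers are already stopped
    # ...later, check whether any working containers still clone its snapshots:
    zfs list -t snapshot -o name,clones -r tank/lxd/containers
    # once the old template's snapshots list no clones:
    lxc delete tpl-base-old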
That way:
- I know where my data is, how much space it takes, and how to port it to a new system.
- I was able to migrate containers to another distribution, though I needed to change or remap file ownership because the corresponding application uids/gids differed between the two distributions.
- I only need to backup my data, not multiple copies of guest OS instances.
- I can snapshot my data independently of the OS.
- I don’t have any hidden settings forgotten deep in the OS.
- I keep my working containers “reproducible”. I can recreate them from a brand new container and my own data.
- There is less duplication or divergence of OS files across containers, since all working containers are clones of a single template (or a few templates).
I have a program that automates these steps. It uses a YAML configuration file for each template and working container, listing what needs to be done to create or rebuild the container. These steps are:
- copy a template container
- create or clone zfs filesystems
- copy files from a template
- install packages (mostly for the template containers)
- attach profiles
- push files (lxc file push)
- run scripts (lxc exec sh)
- create users
- create a snapshot (mostly for template containers)
- stop the container (for template containers)
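A sketch of what one of these per-container files might look like; every key name below is invented for illustration and is not the actual format:

    # c1.yaml -- hypothetical field names
    name: c1
    template: tpl-base/base          # copy a template container
    filesystems:                     # create or clone zfs filesystems
      - zfs clone tank/devices/tpl-base@base tank/devices/c1
    profiles:                        # attach profiles
      - default
      - c1.devices
    push:                            # lxc file push
      - { src: files/app.conf, dest: /etc/opt/nginx/app.conf }
    scripts:                         # lxc exec sh
      - scripts/setup-app.sh
    users:
      - appuser
    packages: []                     # mostly for template containers
    snapshot: null                   # templates get a snapshot and are stopped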