Customize container from multiple sources

Hi,

I really like using LXC/LXD, but I can’t find a solution for my setup.

The question in short: Is there a best practice for applying configuration that multiple containers share, plus additional specific config for each container?

The long story: I’m going to run a couple of LXD containers with different tasks. There is a basic set of customizations that every container gets, and then for each task there are different package installations and configurations. Now I’m looking for a way to freeze this in infrastructure code.

Initially I thought about using distrobuilder to build a complete image for every task. That would mean duplicating the whole image configuration and the basic steps in each *.yaml configuration file. Then I thought about cloud-init, but I couldn’t find a way to do includes there in an LXD container either. The next idea was to set the basic customizations as cloud-config in the profile’s user.user-data, and for each container apply a task-specific cloud-config in user.user-data in the container config. But the container config overrides the one from the profile.
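For reference, this is roughly the setup I tried (profile and container names are just examples). The problem is that LXD treats user.user-data as an opaque string: a container-level value replaces the profile-level value wholesale, it does not merge the two YAML documents:

```yaml
# Shared cloud-config in the profile (lxc profile edit base):
config:
  user.user-data: |
    #cloud-config
    packages: [vim, htop]
---
# Task-specific cloud-config on the container (lxc config edit web1):
# Setting user.user-data here REPLACES the profile value entirely,
# so the shared packages above are lost.
config:
  user.user-data: |
    #cloud-config
    packages: [nginx]
```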
I could do the basic steps by building a custom image, use that image for every container, and do the task-specific steps in cloud-init. That would mean using different approaches for preparing the containers (distrobuilder YAML for the basic image, cloud-init config for the specific configuration), which also kinda smells.
Another solution would be to skip the configuration on that layer entirely and just do it in Ansible. But with Ansible comes the need to do something after lxc launch foo: bar, and I want the images to be complete and self-contained: just start one and be done (nope, not gonna use Docker :D).

Any thoughts about this?


Hi!

It looks like you have researched most options.
Have a look at bravetools (https://github.com/bravetools/bravetools). They let you create a images with your very specific configuration.

Thanks simos! I had a look at it. It looks interesting, but I couldn’t find a way to include common steps in multiple image configurations.

I think I’ll try to implement all the approaches I described above, at least for a handful of containers, to see if the duplication of infrastructure code really is as stinky as it sounds. Maybe I’ll wrap some kind of generator around it to stick the YAML parts together.

I also thought about having a look into the distrobuilder or even the LXC/LXD source to see if I could contribute something to make merging of cloud-init configs, or includes in the distrobuilder config, possible. But Go is one of the enemies I haven’t tackled yet, so I’m a bit lost in this area.

Hey cweilguny - I’m one of the contributors to Bravetools (thanks simos for the shout-out). You can manage multiple LXD containers for different tasks using configurable Bravefiles, which describe both package installations and basic container configs (CPU, RAM, nesting).

You can then reuse these configurations to launch multiple containers with the brave deploy command and custom service configs for each container. This ensures that your source configuration is not duplicated.

Here’s an example Bravefile of one of our environments: (https://github.com/beringresearch/bravefiles/blob/master/ubuntu/ubuntu-bionic-mlbase/Bravefile)

There’s some more in-depth docs here: https://bravetools.github.io/bravetools/docs/bravefile/

Hi bering_team, thanks for your hints!
Aside from Bravetools itself, your whole project looks interesting. Building your own solid toolset to support the daily work and making it public is a mindset we need more of. Maybe it’s that fact that finally drives me to try out Bravetools :smiley:

Finally I found a solution that makes me happy. I hacked together a small script that can include plain text files into other files, with parameters. That way I can separate repeating parts into partial files and also pass parameters to the include, which get replaced in the included file. Kind of a poor man’s template engine.

Using this, at first I built LXD images from YAML files stuck together with this script. But building all images took nearly an hour, and once the ARM platform came into play with some Raspberry Pis entering the stage, I would have had to build some of them on ARM as well. So I went for cloud-init instead of building complete images, using the same approach: create partials for the repeating parts, and stick them together into a cloud-config YAML for each container setup using the template engine script.

Then I added a script that creates LXD profiles for all the generated cloud-config files, plus a dedicated script for each container that is meant to stay, which creates the container with the right profile, adds resources, and sets configs. Those scripts are deployed on a GlusterFS volume mounted on all hosts that need them. So I can create my infrastructure with these dedicated scripts, but I can also just throw up a container with one of the profiles for testing or other uses.

It’s already pretty clean, but it can still be optimized and tidied up, and it needs some kind of documentation. The poor man’s template engine is currently a PHP script; I’m sure there is something out there that can do the same without a dependency like PHP. I’ll put it in a public GitHub repository as soon as it’s free of such dependencies and usable for others.
