Using templates to customise containers


I’m running Ubuntu 16.04 and the LXD that comes from the repo (2.0.11). It works nicely and this template is working:

{% with filepath="/var/lib/lxd/containers/"|add:container.name|add:"/templates/conf-files" conf="jvm.config" %}
    {% if config_get("environment.env", "") == "prod" %}
            {% with filename=filepath|add:"/prod/"|add: conf %}
                    {% include filename %}
            {% endwith %}
    {% elif config_get("environment.env", "") == "staging" %}
            {% with filename=filepath|add:"/staging/"|add: conf %}
                    {% include filename %}
            {% endwith %}
    {% elif config_get("environment.env", "") == "test" %}
            {% with filename=filepath|add:"/test/"|add: conf %}
                    {% include filename %}
            {% endwith %}
    {% endif %}
{% endwith %}

What this does is put the Java configuration in place for the given environment (prod, staging, test) by generating the path and including the resulting file from a sub-directory (/var/lib/lxd/containers/container1/templates/conf-files/staging/jvm.config for staging; behind the scenes that file is actually just a symbolic link to ../jvm.config-test). This is working fine.
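To make the layout concrete, here is a small sketch that recreates the directory structure described above (the paths and file contents are invented for illustration; on the real host everything lives under /var/lib/lxd/containers/container1/templates/conf-files):

```shell
# Hypothetical recreation of the per-environment config layout.
base=$(mktemp -d)/conf-files
mkdir -p "$base/prod" "$base/staging" "$base/test"

# One actual config file per variant...
echo 'heap=4g' > "$base/jvm.config-prod"
echo 'heap=1g' > "$base/jvm.config-test"

# ...and symlinks with a uniform name, so the template only has to
# vary the directory, never the file name.
ln -s ../jvm.config-prod "$base/prod/jvm.config"
ln -s ../jvm.config-test "$base/staging/jvm.config"
ln -s ../jvm.config-test "$base/test/jvm.config"

cat "$base/staging/jvm.config"   # prints: heap=1g
```

The template then only needs the environment name to pick the right directory; which real file backs each environment is decided entirely by the symlinks.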

However, when I try this on Ubuntu 18.04 with the apt version of LXD (3.0.2, to my understanding) it does not work. It fails on the include statement with the log message “No such file or directory”.

OK, apparently something has changed. The generated (absolute) path is correct, but the file is not found anyway, even though it’s there.

How can I now find my file? My hard-coded path starting at / is probably rather clumsy, but at least it worked. Could I do it in a better way?

EDIT: one difference that just came to my mind is that I’m using the “normal” file system on 16.04 and ZFS on 18.04. Maybe that matters?



OK, trying something else as nobody has replied to this yet: cloud-init.

I have never used cloud-init before, but it seems easy to try runcmd and create a file in /tmp.

This trivial example fails for me, however. The user.user-data, provided in a profile, seems to make it into the container:

root@container1:~# cat /var/lib/cloud/seed/nocloud-net/user-data 
#package_upgrade: true
#  - build-essential
#locale: es_ES.UTF-8
#timezone: Europe/Helsinki
    - [touch, /tmp/simos_was_here]

The cloud-init-user.tpl template is run, which I guess the above is proof of.

What have I missed? There is no file in /tmp in the container.

Can you show the profile? lxc profile show myprofile

I just tried with LXD 3.7 (snap, not the LXD 3.0.2 you tried) and it worked for me.

Hi, thanks for answering. Here’s the profile used:

skade@venusbeta1:~$ lxc profile show container1
config:
  environment.env: staging
  environment.fornyelse: "false"
  limits.memory: 2GB
  limits.memory.enforce: soft
  user.user-data: |
    #package_upgrade: true
    #  - build-essential
    #locale: es_ES.UTF-8
    #timezone: Europe/Helsinki
      - [touch, /tmp/simos_was_here]
description: Configuration for venusbeta/container1
devices:
  cfapps:
    path: /var/www/html/cfapps
    source: /home/skade/lxd/html/container1/cfapps
    type: disk
  common:
    path: /var/www/html/common
    source: /home/skade/lxd/html/container1/common
    type: disk
  venus:
    path: /var/www/html/venus
    source: /home/skade/lxd/html/container1/venus
    type: disk
name: container1
used_by:
- /1.0/containers/container1

I tried with this profile and it worked for me. I got the file in /tmp/.

Specifically, I edited the devices section so that I can create the container on my computer by specifying this single profile. It worked and I got the file in /tmp.
Then, I simplified the profile to have only the cloudinit instructions, and then I launched a container by specifying --profile default --profile cloudinit (just in case the cloudinit section does not work when you merge profiles). It worked here as well.

One thing you can check is whether the cloud-init fragment in the profile has any weird characters like tabs, because the user.user-data: | part is supposed to be raw input for cloud-init.

If all else fails, you can simplify your profile to match my blog post, so that you can make sure you have something that works.
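For reference, a minimal profile along those lines could look roughly like this (this is my sketch, not the exact profile from the post; note that cloud-init user-data normally needs to start with #cloud-config, and commands need to sit under a runcmd: key):

```yaml
config:
  user.user-data: |
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]
description: Minimal cloud-init test profile
devices: {}
name: cloudinit
```

Once something this small works, you can add your settings back one at a time to find which part breaks.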

If I launch a new container it works on both 16.04 and 18.04:

lxc launch ubuntu:18.04 ctest --profile default --profile cloud-config-staging

cloud-config-staging looks exactly like in your blog post.

When applying that profile to my existing container, even without any other profile (except default), it does not work on 16.04 or 18.04. The existing containers were also originally created with the launch command above (but the 16.04 container “some” time ago…).

From the original container I have published and launched several newer versions of the image, but surely that shouldn’t affect cloud-init???

What am I missing?

An alternative would of course be to just jump to the snap version of LXD everywhere, but I’m sure that would not be straightforward either, and many things would have to be re-thought.

By default, cloud-init runs only once, when you first launch/start a container.
If you want to rerun cloud-init when you restart the container (or even without restarting the container), have a look at
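(The link appears to have been lost here; assuming it pointed at the usual mechanism, the way to do this with the cloud-init shipped in Ubuntu 18.04 is its clean subcommand:)

```shell
# Run inside the container: remove cloud-init's record of having
# already run (and its logs), then reboot so it runs again.
sudo cloud-init clean --logs
sudo reboot
```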


Sorry for my late response.

Deleting the cloud-init state within the container indeed helps. The problem now is of course: how can this cleaning be done during launch, before cloud-init runs? Obviously, the cleaning cannot be done in the cloud-init process itself, as that does not run unless the cleaning has already been done… Maybe hacking with templates (and forcing some scripts to be run) would somehow make this possible, but there must certainly be a cleaner way of doing it?

I would like the cleaning to occur every time I launch my image (i.e. on container creation), because at launch time I decide whether the container will be used for test, staging or production, and each environment requires different versions of some configuration files.

Any thoughts?

If you are relaunching the container multiple times and you want cloud-init to run every time, then you should take a snapshot of the container when you first initialize it (lxc init ubuntu:18.04 mycontainer). Then, after the first launch, restore back from the snapshot and relaunch for the subsequent times.
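A sketch of that cycle, with hypothetical container and snapshot names:

```shell
# Create the container but do not start it, so cloud-init has not run yet.
lxc init ubuntu:18.04 mycontainer --profile default --profile cloud-config-staging

# Snapshot the pristine, never-booted state.
lxc snapshot mycontainer pristine

# First boot: cloud-init runs here.
lxc start mycontainer

# Later, to make cloud-init run again, roll back to the pristine state:
lxc stop mycontainer
lxc restore mycontainer pristine
lxc start mycontainer
```

Restoring discards everything done inside the container since the snapshot, which is exactly what resets cloud-init's "already ran" marker.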

OK, thanks, I have never used snapshots… never understood how I could make use of them…

This is how I use LXD images and containers and why I relaunch images:

I launch ubuntu:18.04 (for example) and install all the stuff I need including correct server configurations (the configuration is tuned using templates). Once the container is working I stop it and publish it. That image can then be launched elsewhere to create a working server environment (web application). I launch it locally for development purposes and remotely for testing purposes… and soon maybe for production use. The same image.

When there is some kind of update, e.g. server software update, I launch the image, install the update(s), and publish it again. That way I always have an up-to-date image which can be launched again.

This is why I launch images multiple times. Maybe my usage is not optimal.
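The update cycle described above can be sketched with hypothetical names like this:

```shell
# Launch the current image, apply updates, and republish under the same alias.
lxc launch myimage worker
lxc exec worker -- apt-get update
lxc exec worker -- apt-get -y upgrade
lxc stop worker
lxc publish worker --alias myimage   # the alias now points at the updated image
lxc delete worker
```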

I would like “everything” to happen inside the container, i.e. that there would not be many dependencies on the host itself (as few settings as possible in profiles). This is possible to accomplish using templates, but if there are many differences it becomes very cumbersome.

I’m not sure cloud-init is the best solution to my problem, but it’s certainly one. Scripting might be another solution.

I have to sleep on it…

Now I have slept on it and I ditched the cloud-init approach and went for the template+script solution instead.

This means that I have two templates:

  1. One for a script that copies configuration files based on the environment (a setting in a profile), and
  2. one for a file that “triggers” the above script if it exists.

When my main application server starts, it checks for the “trigger file” (in the systemd ExecStartPre) and, if it exists, runs the script. The trigger file is deleted by the script, so the setup runs only once. This means that the trigger file only exists when the container is launched, and thus the setup script is only run at container launch, exactly as I wanted it to be.
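As an illustration, the trigger mechanism could be sketched like this (all paths, file names and contents here are invented for the example; the real script and trigger file are rendered from the templates):

```shell
# setup-env.sh -- copies environment-specific configuration, but only
# if the trigger file exists; the trigger is deleted first, so the
# setup runs at most once per container launch.
cat > /tmp/demo-setup.sh <<'EOF'
#!/bin/sh
TRIGGER=/tmp/demo-run-setup
[ -f "$TRIGGER" ] || exit 0   # no trigger file: nothing to do
rm -f "$TRIGGER"              # delete first, so the setup is never retried
echo "copying staging configuration" > /tmp/demo-setup.log
EOF
chmod +x /tmp/demo-setup.sh

# The systemd side would then be something like:
#   [Service]
#   ExecStartPre=/usr/local/bin/setup-env.sh
#
# Simulate two service starts: only the first one does any work.
touch /tmp/demo-run-setup
/tmp/demo-setup.sh
first_run_did_work=$( [ -f /tmp/demo-setup.log ] && echo yes )
rm -f /tmp/demo-setup.log
/tmp/demo-setup.sh            # trigger file is gone: exits immediately
second_run_did_work=$( [ -f /tmp/demo-setup.log ] && echo yes || echo no )
echo "first=$first_run_did_work second=$second_run_did_work"   # prints: first=yes second=no
```

Since ExecStartPre runs before the main process, the configuration is guaranteed to be in place by the time the application server starts.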