Share a custom image with team for testing purposes

Hi all,

I am writing a script that:

  • spins up some virtual machines
  • builds an Ansible inventory with their IPs
  • runs some Ansible playbooks on them
  • makes various assertions

It’s indeed a script for testing purposes.

What I would like to do is create the VM instances from a custom image.

What’s the best way to do that?

My idea was to keep the custom image directly in the repository and launch the instances from that.
Is that possible?

Thank you
Cheers!

Welcome Carlo!

The images: remote has standard images for several Linux distributions.

incus image list images:

The next step up is to customize those images with cloud-init. You still use the standard images, but when an instance is first launched you pass it cloud-init instructions and it gets customized. See the Incus documentation for more, or my recent tutorial at How to customize Incus containers with cloud-init – Mi blog lah!
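For example, the /cloud variants of the standard images ship with cloud-init preinstalled, so a quick way to try this is the following (the instance name is made up):

incus launch images:ubuntu/22.04/cloud mytest
incus exec mytest -- cloud-init status --wait

Once cloud-init reports that it is done, your user-data has been applied.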

The ultimate way to customize images is to use distrobuilder to create the images on your own. The standard Incus images on images: are built with distrobuilder, so you can start from their actual configuration files.
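A rough sketch of that route, assuming a distrobuilder version that has the build-incus subcommand (ubuntu.yaml is a placeholder definition file name, and the produced file names can vary with version and options):

# Build an image from a distrobuilder definition file (ubuntu.yaml is a placeholder).
distrobuilder build-incus ubuntu.yaml
# Import the resulting metadata tarball and rootfs as a local image, then launch from it.
incus image import incus.tar.xz rootfs.squashfs --alias my-ubuntu
incus launch my-ubuntu mytest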

You may also consider having a separate Incus server that holds the images and shares them as an additional remote. Here is the list of your current remotes:

incus remote list
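If you go that route, a hedged sketch of the setup would be the following (the hostname, instance and alias names are placeholders; check the flags against your Incus version):

# On the image server: expose the API over HTTPS and publish a configured instance as a public image.
incus config set core.https_address :8443
incus publish mycontainer --alias my-custom-image --public
# On each client: add the server as a public image remote and launch from it.
incus remote add myimages https://images.example.internal:8443 --public
incus launch myimages:my-custom-image mytest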

Hi!

I read your article, thanks! I was able to follow and use cloud-init with profiles.

Problem is, I would like to have the custom image or profile as a file in a repo,
so my colleagues can run a single command and have the customized instance.

How can I achieve that without setting up a private remote?

Cheers!

Actually, my final goal is to have an Ubuntu 22.04 instance with SSH started and
an authorized key taken directly from the repo files (it’s for testing!)…

I am not sure what service you are using to host the repo.

If you have launched an instance from an image and you are making changes to it, such as installing services and setting up SSH keys, then you can do the following:

$ incus list alpine
+--------+---------+---------------------+------+-----------+-----------+
|  NAME  |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+------+-----------+-----------+
| alpine | RUNNING | 10.10.110.10 (eth0) |      | CONTAINER | 0         |
+--------+---------+---------------------+------+-----------+-----------+
$ incus stop alpine
$ incus export alpine
Backup exported successfully!            
$ ls -l alpine.tar.gz 
-rw-rw-r-- 1 myusername myusername 28303133 Φεβ   5 20:51 alpine.tar.gz
$ 

Now, send the exported file to your colleague or upload the file to some file server, etc.

They will then download the file on their side (in this case, alpine.tar.gz) and

$ incus import alpine.tar.gz
$ incus list alpine
+--------+---------+------+------+-----------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+------+------+-----------+-----------+
| alpine | STOPPED |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+
$ incus start alpine
$ incus list alpine
+--------+---------+---------------------+------+-----------+-----------+
|  NAME  |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+------+-----------+-----------+
| alpine | RUNNING | 12.12.112.12 (eth0) |      | CONTAINER | 0         |
+--------+---------+---------------------+------+-----------+-----------+
$ 

Thank you, that doesn’t feel right though.
I will try explaining my scenario better.

We have a project P0 that contains the Ansible playbooks for our servers.
P0 is under version control.

Currently, when I change something in a playbook, I test it on a local virtual machine.
So, I wrote a script in P0 to automate the testing phase a bit.
The script does:

  1. Creates and starts some VM instances: VM1, VM2, VM3
  2. Builds an Ansible test inventory with the IP addresses of VM1, VM2, VM3…
  3. Runs the playbooks on those VMs
  4. Makes some assertions to check that the playbooks worked correctly
  5. Stops and deletes VM1, VM2, VM3

When I create the VM instances at step 1, I would like to be able to
set up SSH and a root key.
Right now I am doing it with a series of incus exec ... calls from the script,
but if I could start the instances from a custom image it could be simpler.
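Roughly, what the script does today looks like this (a simplified sketch; the names and key path are made up):

incus launch images:ubuntu/22.04 vm1 --vm
# wait for the VM agent to come up, then install and start SSH
incus exec vm1 -- apt-get update
incus exec vm1 -- apt-get install -y openssh-server
# push the public key that lives in the repo
incus exec vm1 -- mkdir -p /root/.ssh
incus file push tests/id_ed25519.pub vm1/root/.ssh/authorized_keys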

The profile solution sounds ok, I can set up a test profile from the script.

What do you think?

Thanks
Cheers!

You will want to use cloud-init for that. It was designed explicitly to do this kind of thing.

https://cloudinit.readthedocs.io/en/latest/index.html

https://cloudinit.readthedocs.io/en/latest/reference/examples.html

If I find the time I can share a working example tailored to your usecase.
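In the meantime, a rough, untested sketch of the user-data for your stated goal (the key is a placeholder; note that on the Ubuntu cloud images the key lands on the default ubuntu user unless you explicitly target root):

#cloud-config
packages:
  - openssh-server
ssh_authorized_keys:
  - ssh-ed25519 AAAA... test@example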

Thank you,

The missing piece in my head is how to glue together Incus and cloud-init to somehow put the custom image under version control so my test script can use it.

Cheers!

I believe the very simplest way would be to keep the cloud-init config file under version control and just pass it as a command line config to incus:

incus launch images:ubuntu/22.04/cloud --config=cloud-init.user-data="$(cat my-cloud-init.yml)"

Something like that.
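To glue that into your test script, an untested sketch (it assumes the VMs get an IPv4 address on their first interface, and the cloud-init and inventory file names are made up):

#!/bin/sh
# Launch the test VMs from the stock cloud image, passing the versioned cloud-init file.
for name in vm1 vm2 vm3; do
    incus launch images:ubuntu/22.04/cloud "$name" --vm --config=cloud-init.user-data="$(cat my-cloud-init.yml)"
done
# Wait for cloud-init to finish in each VM, then collect the IPv4 addresses into an inventory.
# (The VM agent can take a few seconds to come up, so a retry or sleep may be needed here.)
: > test-inventory.ini
for name in vm1 vm2 vm3; do
    incus exec "$name" -- cloud-init status --wait
    incus list "$name" -c 4 --format csv | cut -d' ' -f1 >> test-inventory.ini
done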

Cheers!


I’ll steal that for my tutorial, How to customize Incus containers with cloud-init – Mi blog lah!

edit: In Bonus #1, How to customize Incus containers with cloud-init – Mi blog lah!


If you need version control for the devices section in addition to the cloud-init.user-data section, you can simply update the whole profile from a file, like this:

incus profile edit <profile_name> < /<path>/<profile_file_name>

Then

incus launch images:ubuntu/22.04/cloud <instance_name> -p default -p <profile_name>

P.S.
To create an empty profile, use this command:

incus profile create <profile_name>
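For reference, the profile file you keep under version control could look roughly like this (the bridge and storage pool names below are just the common defaults; adjust them to your setup):

config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - openssh-server
description: Test profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk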

Also.

$ cat cloud-simos.yml 
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]
$ incus profile create cloud-dev2
Profile cloud-dev2 created
$ incus profile set cloud-dev2 cloud-init.user-data="$(cat cloud-simos.yml)"
$ incus profile show cloud-dev2
config:
  cloud-init.user-data: |2-
        #cloud-config
        runcmd:
          - [touch, /tmp/simos_was_here]
description: ""
devices: {}
name: cloud-dev2
used_by: []
$ incus launch images:alpine/3.19/cloud myalpine --ephemeral --profile default --profile cloud-dev2
Launching myalpine
$ incus exec myalpine -- su --login alpine
myalpine:~$ ls -l /tmp/
total 1
-rw-r--r--    1 root     root             0 Feb  6 11:03 simos_was_here
myalpine:~$ exit
$ 

Big question. When I add the cloud-init configuration in that way, why do I get those weird characters 2- at the end of the line? It still works though.

cloud-init.user-data: |2-

Images are binary and have typical sizes of a few hundred MB. I don’t version them the same way as code. I version each image by putting it in a directory with a timestamp.
You can distribute scripts, cloud-config files, image names, etc. with git, and distribute the binary images with something like rsync or HTTPS + basic auth.

Here’s a “du -sh *” listing of a few of my images:

66M	a-base-20240203-0818
203M	a-dev-20240203-0820
39M	a-haproxy-20240203-1418
77M	a-nginx-20240203-0834
99M	a-php82-20240204-0910

Each of these is a directory containing a single file, called “image.tar.gz”, which is created by running “incus image export” with a snapshot of a configured container or VM. “a-” stands for alpine.

When I export/import the image, I use the timestamped directory name as the image alias, so it can co-exist with other versions. I have a mapping somewhere that maps plain names to aliases, so when a script needs to use the “nginx” image, it really uses the image a-nginx-20240203-0834.
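The export/import steps look roughly like this (an untested sketch; the snapshot, alias and path names are illustrative, and the exact file names depend on the image type and compression settings):

# Turn a snapshot of a configured instance into a local image with a timestamped alias.
incus snapshot create nginx-builder golden
incus publish nginx-builder/golden --alias a-nginx-20240203-0834
# Export it into the versioned directory; import it later on another machine and launch from it.
mkdir -p a-nginx-20240203-0834
incus image export a-nginx-20240203-0834 a-nginx-20240203-0834/image
incus image import a-nginx-20240203-0834/image.tar.gz --alias a-nginx-20240203-0834
incus launch a-nginx-20240203-0834 web1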

I typically create custom images by just installing packages to another image. I apply any other configuration when launching the image, with cloud-config files.