How To Add a Certificate To Incus Remotely

I discovered the lxd license issue a few weeks ago and have been trying to migrate to incus. Unfortunately, some of what works in lxd doesn't work in incus; for example, adding a certificate by using the trust password no longer works.

I use that functionality to add a certificate remotely from my app, as everything has to be done that way; I even had a topic about it previously.

For example, here is how I was doing it with lxd:

{
  "certificate": "X509 PEM certificate",
  "name": "castiana",
  "password": "blah",
  "type": "client"
}

Replacing it with the following, to match what incus expects, is not working:

{
  "certificate": "X509 PEM certificate",
  "name": "castiana",
  "trust_token": "blah",
  "type": "client"
}

What is a sensible alternative to get this working remotely with incus? Help please.

Incus doesn't have persistent trust passwords (core.trust_password), as that was quite an unsafe mechanism: the password was often left unchanged for long periods of time and was susceptible to brute-force attacks.

Instead you can generate one-time trust tokens through incus config trust add.

The other option is to directly have the client’s certificate be added to the server either locally through incus config trust add-certificate or by an existing trusted client.
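
For anyone hitting the same wall, a rough sketch of both options (the client name, file name and token value below are placeholders):

# Option 1: generate a single-use trust token on the server for a named client.
incus config trust add my-app-client
# The command prints a token; the client then joins with it, for example by
# sending it in the certificates API call instead of the old password field:
#   POST /1.0/certificates
#   {"name": "my-app-client", "type": "client", "trust_token": "<token from above>"}

# Option 2: add the client's public certificate (.crt) directly to the trust store.
incus config trust add-certificate my-app-client.crt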


Ouch, that is bad :frowning: as there is a very good use case for it.

The idea here is that once the server is deployed, I can no longer access it, so the only way to communicate with it would be remotely through incus. I am currently investigating incus config trust add-certificate to see how I can adjust it to support that.

I will update if it works as expected, thank you :wink:

I went with the incus config trust add-certificate option: the certificate is added immediately after the instance is deployed, and that seems to work fine. Thanks once again.

For future readers and future self:

The reason I asked this question was that I wanted to do everything remotely. My VPS provider has a way to run a script on server deployment, so I was able to use input variables to pass the client cert through my VPS provider's API. In Linode this is called a StackScript, and in AWS I think it is called user data. Good luck!

Yeah, for a VPS with something like cloud-init, you can easily enough embed the public part of the keypair (.crt) and then have that be added through incus config trust add-certificate during deployment.
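
As a rough sketch (assuming incus itself is already installed by the image or earlier in the same script; the paths, address and certificate body are placeholders), the deployment script, whether cloud-init user data or a Linode StackScript, could look something like this:

#!/bin/bash
# Runs once at first boot of the VPS.
set -e

# Embed only the public part of the client keypair (the .crt, never the .key).
cat > /root/client.crt <<'EOF'
-----BEGIN CERTIFICATE-----
...X509 PEM certificate...
-----END CERTIFICATE-----
EOF

# Minimal incus setup and remote API exposure (adjust to your own preseed).
incus admin init --auto
incus config set core.https_address :8443

# Trust the client certificate so the app can use the REST API remotely.
incus config trust add-certificate /root/client.crt
rm /root/client.crt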

This is better from a security standpoint: if your cloud config data gets leaked somehow, all an attacker sees is that your certificate is trusted, and without the private key they can't do anything with it.

With your previous approach, the trust password being potentially leaked would have allowed anyone seeing it to get full admin access on the server.


I agree 100%. Thinking about it now, the trust password is a disaster waiting to happen, to be honest. Sometimes you never know until you try. Thanks for your hard work and for answering the community's questions, you are goated!

Another trouble waiting to happen: while doing QA on my project, I noticed that I had set the certificate to expire in a year:

+-----------------------+--------+-------------+--------------+----------------------+
|         NAME          |  TYPE  | DESCRIPTION | FINGERPRINT  |     EXPIRY DATE      |
+-----------------------+--------+-------------+--------------+----------------------+
| incus-client-cert.txt | client |             | 478820c71292 | 2025/05/18 08:53 UTC |
+-----------------------+--------+-------------+--------------+----------------------+

I have corrected this by using a longer validity, 30 years. While that works for now, I need something better to handle events like compromise or expiry; recall that I do not have access to the VPS, everything is done remotely.
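
For anyone doing the same, one way to generate a fresh client keypair with a longer validity is plain openssl (the file names and subject are just examples); only the resulting .crt goes on the server's trust list, the .key stays with the client:

# Self-signed client certificate valid for roughly 30 years (10950 days).
openssl req -x509 -newkey rsa:4096 -sha384 -days 10950 -nodes \
    -keyout incus-client.key -out incus-client.crt \
    -subj "/CN=incus-client"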

My strategy is to have a ridiculously small container (Alpine, perhaps) that would serve these purposes; for example, it would renew the certificate, update the host system, etc.

The little container would be like an agent. From the agent container, I would push a file that contains a script to the host, set up a way for the host to automatically run the script and delete it once done, and rinse and repeat.

Is this something that makes sense or do you have any other suggestions?

Presumably you have ssh access to the VPS though? Otherwise, how did you migrate from lxd to incus, and how did you create your initial client cert?

I find that the trust certs incus creates for me have a 10 year lifetime. But as long as there is ssh access, they are easily replaced.

I don’t see much point in re-signing new certs at the server side automatically. The client would still have to fetch them somehow, in order to present them the next time they authenticate. In any case, when issuing a new certificate it’s good practice to create a new private key at the client.

compromise, expiry

In the event of compromise, you’d just remove the cert from the list of trusted certs.

With a PKI, in principle you could issue a revocation, but you’d still have to distribute that to all the endpoints that trust it. Since incus only trusts a specific list of certificates, rather than all certificates signed by a particular CA, it’s easier just to remove that certificate from the trust list.

Thanks for your response.

No, there is no SSH access to the VPS (this is deliberate); everything is done remotely through the API incus provides. This is a new project so there was no migration to do, and the initial client cert is created on server deployment as I described in previous replies above.

That is not what I mean. The compromise case is when the private key is stolen, or any strange anomaly that requires purging the certificate. This is a user-facing project where I really don't have much say over the user's server once it is deployed (aside from using the incus API), you get? And sure, a new private key would be created when issuing a new cert.

And no, I can't use a PKI, for obvious reasons.

Anything you do with the incus CLI goes over the API already.

Set up (say) your laptop as a “management” machine with the incus client, “incus remote add …” and configure an initial certificate - it seems you have done this already. Then any other work, including adding or removing certificates, can be done via this same command line tool.

incus config trust list foobar:
incus config trust remove foobar:abcd1234
... etc

In case of compromise of a private key: use the CLI to add a new cert, then remove the cert relating to the compromised key.
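
In other words, rotation after a suspected key compromise could look roughly like this (the fingerprint and file name are placeholders; I switch the default remote first just to keep the commands short):

incus remote switch foobar
# Add the replacement certificate first, so you are never locked out...
incus config trust add-certificate new-client.crt
# ...then drop the certificate belonging to the compromised key.
incus config trust remove abcd1234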

It seems there is a misunderstanding here: I can't manually do anything on the server. Once the server is deployed, I only have access through the REST API (the Incus Main API specification).

I came up with a workaround:

On server deployment, I launch an Alpine image: incus launch images:alpine/3.19/amd64 agent-container

Then I created the following systemd service:

[Unit]
Description=Auto Run Scripts Service
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c 'mkdir -p /root/scripts && for script in /root/scripts/*; do [ -f "$script" ] || continue; /bin/bash "$script"; rm -f "$script"; done'
Restart=always
RestartSec=300s
SyslogIdentifier=auto-run-scripts

[Install]
WantedBy=multi-user.target
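
For completeness, installing and enabling the unit on the host looks roughly like this (the file name is just what I saved it as):

install -m 644 auto-run-scripts.service /etc/systemd/system/auto-run-scripts.service
systemctl daemon-reload
systemctl enable --now auto-run-scripts.service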

Whenever I need to do anything on the host system, I create a script file on the agent container and drop it into the scripts folder of the host system (all of this is done through the REST API).

I do not need to do anything else; once every 5 minutes, systemd looks for script files in the /root/scripts folder, executes them, and cleans up once it is done executing.
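
Roughly, the wiring for the drop-folder can be done entirely through the CLI/REST API (the device, container and script names below are just what I use); note that files written from an unprivileged container may show up on the host with shifted ownership, so the ownership may need fixing, as a later reply demonstrates:

# Mount the host's script drop-folder inside the agent container
# (assumes /root/scripts already exists on the host).
incus config device add agent-container scripts disk source=/root/scripts path=/mnt/scripts

# Push a one-off maintenance script; the host's systemd service picks it up
# on its next pass and removes it after running it.
incus file push renew-cert.sh agent-container/mnt/scripts/renew-cert.sh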

Note: this is only for critical situations, so that I don't get locked out in case of an emergency, and it isn't exposed to the end user of my application.

A good use case is the lxd-to-incus migration: imagine lxd is already running and I want to migrate to incus, or something breaks on the host system; I can use this strategy as more of a last resort.

Where - on the host? In that case you're the administrator of the host.

You can do that using just the incus API, without actually touching the host. For example:

  1. create a share between /etc/cron.d and a privileged container
  2. create a file in the share and have it run on the host

I’m doing the following from my macOS laptop - which cannot run incus containers, but has the incus client (brew install incus). It has several Linux hosts added as remotes, the one I’m using here is “nuc3”

% incus create images:ubuntu/24.04/cloud nuc3:testct -c security.privileged=true
Creating testct
% incus config device add nuc3:testct cron.d disk source=/etc/cron.d path=/mnt
Device cron.d added to testct
% incus start nuc3:testct
% incus exec nuc3:testct -- ls /mnt
e2scrub_all  sanoid  syncoid  zfsutils-linux
% echo '* * * * * root echo hello >/tmp/p0wned' | incus file push - nuc3:testct/mnt/testing
% ssh root@nuc3 ls -l /etc/cron.d
total 20
-rw-r--r-- 1 root root    201 Jan  8  2022 e2scrub_all
-rw-r--r-- 1 root root     51 Mar 21 13:54 sanoid
-rw-r--r-- 1 root root    511 Mar 21 13:55 syncoid
-rw-rw---- 1  502 dialout  39 May 18 15:47 testing
-rw-r--r-- 1 root root    377 Jun 23  2022 zfsutils-linux
% incus exec nuc3:testct -- chown 0:0 /mnt/testing
% incus exec nuc3:testct -- chmod 644 /mnt/testing
% ssh root@nuc3 ls -l /etc/cron.d
total 20
-rw-r--r-- 1 root root 201 Jan  8  2022 e2scrub_all
-rw-r--r-- 1 root root  51 Mar 21 13:54 sanoid
-rw-r--r-- 1 root root 511 Mar 21 13:55 syncoid
-rw-r--r-- 1 root root  39 May 18 15:47 testing
-rw-r--r-- 1 root root 377 Jun 23  2022 zfsutils-linux
<< wait a minute >>
% ssh root@nuc3 cat /tmp/p0wned
hello

What does this tell you? That if you give someone full remote incus access (without locking down features like shares and privileged containers), you've effectively given them full access to the underlying host. Which seems to be what you want.

But I still don't see why you want to use incus in this way for remote administration of the host, rather than enabling ssh, which is a much more robust way of going about it than relying on backdoors.

Installing incus, adding the client cert, and setting up the systemd service are done on server deployment; the rest is done via the REST API.

I understand what you meant, but this is deliberate: everything is intentionally locked down by default unless required, e.g. SSH. However, there is also an option to enable SSH.

The agent container is not really a backdoor per se; it is intentional, and whoever creates the container is also the one controlling it. It would only be used once in a while. This is just about reducing the attack surface and minimizing risk.

Edit:

So, there are several options the user can choose from: go with SSH or without it. By default, however, everything is done through the REST API. It's a bit more complex than that, but hopefully you get the gist now.