Image + SSH problem

I’ve built a VM image with distrobuilder, starting from the Debian Bookworm template. In particular, I installed an SSH server and added a systemd unit for ‘firststart’ that regenerates the host keys, so that every VM gets its own SSH keys. However, when I launch a second VM from this image, I still get a warning that I have already seen this SSH host key - from the first VM launched from the image - as if the script hadn’t run. I have already checked: the script, the systemd unit etc. are all in place. The systemd unit looks like this:

[Unit]
Description=Generating new SSH host keys
ConditionFirstBoot=yes

[Service]
Type=oneshot
ExecStart=/bin/sh /usr/local/bin/regenerate-openssh-keys.sh

[Install]
WantedBy=multi-user.target

I have also activated it:

$ ls -l /path/to/input/etc/systemd/system/multi-user.target.wants
total 4
lrwxrwxrwx 1 toni toni 21 Mar 15 22:58 firststart.service -> ../firststart.service

I’m running the latest incus-stable from the Zabbly repository on a Bookworm machine.

It’s probably worth checking that the unit is indeed in the VM and that it was started on boot - and also that it runs before sshd starts, otherwise sshd may have started with the old keys.

systemctl status firststart and journalctl -u firststart -b0 should be useful here.
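
If ordering does turn out to matter, it can be expressed with a Before= dependency in the unit - a minimal sketch, assuming Debian’s ssh.service unit name:

[Unit]
# ...existing Description and ConditionFirstBoot lines...
# Order this unit before sshd so it never starts with the baked-in keys
Before=ssh.service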

Thank you very much! I’ll check again and report back. FWIW, it shouldn’t matter too much when exactly the script runs - anywhere during the first boot should be enough, or am I missing something? For reference, this is the script:

#!/bin/sh

/bin/rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
systemctl restart ssh

Yeah, that should be fine since you’re calling restart.

It might be that explicitly setting the noninteractive frontend for dpkg-reconfigure does the trick:

dpkg-reconfigure -fnoninteractive openssh-server
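
Putting the suggestions together, a hedged sketch of the revised script (untested; assumes Debian’s ssh service name and that the glob matches the baked-in keys):

#!/bin/sh
set -e

# Remove the host keys baked into the image
rm -f /etc/ssh/ssh_host_*

# Regenerate the keys without any prompts
dpkg-reconfigure -fnoninteractive openssh-server

# Pick up the new keys even if sshd is already running
systemctl restart ssh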

This is the result of running systemctl status firststart:

firststart.service - Generating new SSH host keys
     Loaded: loaded (/etc/systemd/system/firststart.service; enabled; preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Mon 2025-03-24 23:48:20 UTC; 2min 54s ago
             └─ ConditionFirstBoot=yes was not met

Mar 24 23:48:20 i-t-1 systemd[1]: firststart.service - Generating new SSH host keys was skipped because of an unmet condition check (ConditionFirstBoot=yes).

But I don’t understand - this is the first VM created from a newly baked image which incorporates the improvement suggested by novalix, and it’s the first time this VM has been started:

incus launch local:599f28acfa99 testvm-1 --vm -d root,size=5GiB -c 'security.secureboot=false' -c 'limits.memory=512MiB'

So it’s the first start of the VM, but ConditionFirstBoot is still not met.

I’d like to solve this problem with the correct systemd incantations, not by e.g. creating a flag file and testing for it in the script.
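
For what it’s worth, systemd can be asked directly how it evaluates the condition inside the VM - a quick check, assuming Bookworm’s systemd (252), which ships the condition verb:

# Evaluate the expression the same way the service manager does
systemd-analyze condition 'ConditionFirstBoot=yes'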

Maybe some bad interaction with the issue described in systemd/systemd#8268 on GitHub, “ConditionFirstBoot=true does not fire if /etc/machine-id is present but empty”?

Could be, though I vaguely remember us having issues with other software when /etc/machine-id was missing…

I don’t know how common ConditionFirstBoot is on Ubuntu systems.
The way I’ve done this a few times is with a self-deleting unit: it runs and then, upon success, deletes itself, leaving a cleanly initialized system.
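
A minimal sketch of that pattern, assuming the unit and its enablement symlink live under /etc/systemd/system:

[Unit]
Description=Regenerate SSH host keys on first start
Before=ssh.service

[Service]
Type=oneshot
ExecStart=/bin/sh /usr/local/bin/regenerate-openssh-keys.sh
# On success, remove the unit and its enablement symlink so it never runs again
ExecStartPost=/bin/rm -f /etc/systemd/system/multi-user.target.wants/firststart.service /etc/systemd/system/firststart.service

[Install]
WantedBy=multi-user.target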

I just checked: distrobuilder creates an empty file for machine-id, which is most likely what’s tripping up systemd. In the running system there’s a UUID in the file. I’m not sure how this is supposed to work with e.g. a read-only root partition, but hey.
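
For instance, comparing the two (reusing the /path/to/input placeholder from above; the VM name is the one launched earlier):

# In the build tree: the file distrobuilder created is empty (0 bytes)
stat -c '%s bytes' /path/to/input/etc/machine-id

# In the booted VM: systemd has filled in a random 128-bit ID
incus exec testvm-1 -- cat /etc/machine-id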

Maybe distrobuilder could generate this file with the content “uninitialized”?

See the machine-id(5) man page, under “First Boot Semantics”.
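
A minimal way to seed that when the rootfs tree is at hand (path placeholder as above; machine-id(5) documents the magic value with a trailing newline):

# Request first-boot semantics explicitly instead of leaving an empty file
printf 'uninitialized\n' > /path/to/input/etc/machine-id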

Creating the image with /etc/machine-id containing “uninitialized” actually worked. Should I file a bug against distrobuilder?