Unable to create a new cluster

Hi to all!
First of all I highly appreciate the great work done with Incus. It’s such a great step forward for me coming from plain LXC and libvirt.
I’m currently finding that I cannot form a cluster with any of the currently available versions of Incus (regardless of whether I use 6.0.x, the Debian backports or the stable branch). I’m always using fresh installs of Debian 12.7, and the system time is synchronized across the nodes to be joined. I followed the instructions from this URL: How to form a cluster - Incus documentation
Unfortunately I’m constantly getting this error: Error: Certificate fingerprint mismatch between join token and cluster member "<IP of Server>:8443"
Do you have any suggestions for this issue? It shouldn’t be a problem for the cluster to form if both nodes are VMs, should it?

Thank you!
Mario

Welcome!

A cluster is a somewhat advanced concept and it’s good to set up a playground to test it out a few times.
If you have an existing installation of Incus, you can create three VMs (the minimum recommended for a proper cluster) and install Incus in each of them as a cluster member. That means you install Incus in the first VM as the first (bootstrap) node of the cluster, then go through the join process to add the next two nodes.

Let’s do this!

$ incus launch images:ubuntu/24.04/cloud node1 --vm
Launching node1
$ incus launch images:ubuntu/24.04/cloud node2 --vm
Launching node2
$ incus launch images:ubuntu/24.04/cloud node3 --vm
Launching node3
$ incus shell node1
root@node1:~# sudo apt install -y incus zfsutils-linux 
root@node1:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.164]: 
Are you joining an existing cluster? (yes/no) [default=no]: 
What member name should be used to identify this server in the cluster? [default=node1]: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Do you want to configure a new remote storage pool? (yes/no) [default=no]: 
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 

root@node1:~# incus cluster add node2
Member node2 join token:
eyJzZXJ2ZXJfbmFtZSI6Im5vZGgyIiwiZmluZ2VycHJpbnQiOiI5MWJkZTMwODE1YmM5NDZmZWQ3ZWViYzY5MTIyMWZkODRkZTYzMDY4MDBkZTk4NjdiM2VlN2E2OWM1ODdkNzQ4IiwiYWRkcmVec2VzIjpbIjEwLjEwLjEwLjE2NDo4NDQzIl0sInNlY3JldCI6IjNkZjU5YjM3YWVhOTMzOTg2NGU5OTdjMTNkODBhMWE2NmU5ZTY4YjE2ZjIzNDljZWZhYWIzOGEyOTk4MTEyNDUiLCJleHBpcmVzX2F0IjoiMjAyNC0xMC0wMVQxOToxNzowMS45MzMyNjI5ODRuIn0=
root@node1:~# logout
$ 

We created the bootstrap cluster member and then generated a join token for the next node, node2. Let’s add node2 to the cluster.
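As a side note, the join token is nothing magical: it’s just base64-encoded JSON. If you’re curious, you can decode it yourself (the token below is a placeholder, paste your own; the fingerprint and secret are shortened here):

$ echo "<paste the join token here>" | base64 -d
{"server_name":"node2","fingerprint":"...","addresses":["10.10.10.164:8443"],"secret":"...","expires_at":"..."}

It carries the name of the member it was issued for, the fingerprint of the cluster certificate, the addresses of the existing cluster members, a one-time secret and an expiry time. The addresses matter: the joining node uses them to contact the existing cluster.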

$ incus shell node2
root@node2:~# apt install -y incus zfsutils-linux
...
root@node2:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.22]: 
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6Im5vZGUyIiwiZmluZ2VycHJpbnQiOiI5MWJkZTMwODE1YmM5NDZmZWQ3ZWViYzY5MTIyMWZkODRkZTYzMDY4MDBkZTk4NjdiM2VlN2E2OWM1ODdkNzQ4IiwiYWRkcmVzc2VzIjpbIjEwLjEwLjEwLjE2NDo4NDQzIl0sInNlY3JldCI6IjNkZjU5YjM3YWVhOTMzOTg2NGU5OTdjMTNkODBhMWE2NmU5ZTY4YjE2ZjIzNDljZWZhYWIzOGEyOTk4MTE0NDUiLCJleHBpcmVzX2F0IjoiMjAyNC0xMC0wMVQxOToxNzowMS45MzMyNjI5ODRaIn0=
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "size" property for storage pool "local": 
Choose "source" property for storage pool "local": 
Choose "zfs.pool_name" property for storage pool "local": 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
root@node2:~# incus cluster list
+-------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| NAME  |            URL            |      ROLES       | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE  |      MESSAGE      |
+-------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| node1 | https://10.10.10.164:8443 | database-leader  | x86_64       | default        |             | ONLINE | Fully operational |
|       |                           | database         |              |                |             |        |                   |
+-------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| node2 | https://10.10.10.22:8443  | database-standby | x86_64       | default        |             | ONLINE | Fully operational |
+-------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
root@node2:~# logout
$ 

We’re good, the second node is in place. Now we get a token from the bootstrap server for node3 and then add node3 to the cluster.
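By the way, if you issue a token by mistake or lose one, the bootstrap server can list and revoke pending join tokens; if I remember the subcommand names correctly, that looks like this:

root@node1:~# incus cluster list-tokens
root@node1:~# incus cluster revoke-token node3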

$ incus shell node1
root@node1:~# incus cluster add node3
Member node3 join token:
eyJzZXJ2ZXJfbmFtZSI6Im5vZGUzIiwiZmluZ2VycHJpbnQiOiI5MWJkZTMwODE1YmM5NDZmZWQ3ZWViYzY5MTIyMWZkODRkZTYzMDY4MDBkZTk4NjdiM2VlN2E2OWM1ODdkNzQ4IiwiYWRkcmVzc2VzIjpbIjEwLjEwLjEwLjE2NDo4NDQzIiwiMTAuMTAuMTAuMjI6ODQ0MyJdLCJzZWNyZXQiOiJhNjU3YWJkMzc3YmQ4ZjQzY2IwZThhNmEyMGJhYWI0NTY0YWUwNmNmMzc5MmE1NWY4YjU5MWJmNjVjODdmNmU3IiwiZXhwaXJlc19hdCI6IjIwMjQtMTAtMDFUMTk6MjI6MzkuMjIzMTkzNDAzWiJ9
root@node1:~# logout
$ incus shell node3
root@node3:~# sudo apt install -y incus zfsutils-linux
...
root@node3:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.215]: 
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6Im5vZGUzIiwiZmluZ2VycHJpbnQiOiI5MWJkZTMwODE1YmM5NDZmZWQ3ZWViYzY5MTIyMWZkODRkZTYzMDY4MDBkZTk4NjdiM2VlN2E2OWM1ODdkNzQ4IiwiYWRkcmVzc2VzIjpbIjEwLjEwLjEwLjE2NDo4NDQzIiwiMTAuMTAuMTAuMjI6ODQ0MyJdLCJzZWNyZXQiOiJhNjU3YWJkMzc3YmQ4ZjQzY2IwZThhNmEyMGJhYWI0NTY0YWUwNmNmMzc5MmE1NWY4YjU5MWJmNjVjODdmNmU3IiwiZXhwaXJlc19hdCI6IjIwMjQtMTAtMDFUMTk6MjI6MzkuMjIzMTkzNDAzWiJ9
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "size" property for storage pool "local": 
Choose "source" property for storage pool "local": 
Choose "zfs.pool_name" property for storage pool "local": 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
root@node3:~# incus cluster list
+-------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| NAME  |            URL            |      ROLES      | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE  |      MESSAGE      |
+-------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| node1 | https://10.10.10.164:8443 | database-leader | x86_64       | default        |             | ONLINE | Fully operational |
|       |                           | database        |              |                |             |        |                   |
+-------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| node2 | https://10.10.10.22:8443  | database        | x86_64       | default        |             | ONLINE | Fully operational |
+-------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| node3 | https://10.10.10.215:8443 | database        | x86_64       | default        |             | ONLINE | Fully operational |
+-------+---------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
root@node3:~# logout
$

Now we have three nodes in the cluster.
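A quick way to exercise the new cluster is to launch a test instance on a specific member with --target and check where it landed (the image alias and instance name here are just examples):

root@node1:~# incus launch images:alpine/3.20 c1 --target node2
root@node1:~# incus list c1

In a cluster, incus list shows a LOCATION column, so c1 should be reported as running on node2.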

In your case there seems to be some issue with the naming of the nodes. Above I used node1, node2 and node3.

Hi again,
my environment is currently set up like yours, with the only difference being that I’m using the Debian image and LVM. My bare-metal machine runs a standalone Incus server, and on this standalone machine there are 3 VMs (svl0pinc01-03).

root@spl0pinc01:/home/mario# incus list
+------------+---------+-----------------------+------+-----------------+-----------+
|    NAME    |  STATE  |         IPV4          | IPV6 |      TYPE       | SNAPSHOTS |
+------------+---------+-----------------------+------+-----------------+-----------+
| svl0pinc01 | RUNNING | 10.0.100.151 (enp5s0) |      | VIRTUAL-MACHINE | 1         |
+------------+---------+-----------------------+------+-----------------+-----------+
| svl0pinc02 | RUNNING | 10.0.100.152 (enp5s0) |      | VIRTUAL-MACHINE | 1         |
+------------+---------+-----------------------+------+-----------------+-----------+
| svl0pinc03 | RUNNING | 10.0.100.153 (enp5s0) |      | VIRTUAL-MACHINE | 1         |
+------------+---------+-----------------------+------+-----------------+-----------+
root@spl0pinc01:/home/mario# incus shell svl0pinc01
Error: VM agent isn't currently running
root@spl0pinc01:/home/mario# incus shell svl0pinc01
root@svl0pinc01:~# incus list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
root@svl0pinc01:~# incus cluster list
Error: Server isn't part of a cluster
root@svl0pinc01:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.0.100.151]: 
Are you joining an existing cluster? (yes/no) [default=no]: 
What member name should be used to identify this server in the cluster? [default=svl0pinc01]: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: 
Name of the storage backend to use (dir, lvm) [default=dir]: lvm
Create a new LVM pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Do you want to configure a new remote storage pool? (yes/no) [default=no]: 
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
root@svl0pinc01:~# incus cluster add svl0pinc02
Member svl0pinc02 join token:
eyJzZXJ2ZXJfbmFtZSI6InN2bDBwaW5jMDIiLCJmaW5nZXJwcmludCI6IjExN2U5ZDg4MTdiZGI0ZWI0OTJhODc4ZjYxYzUzNGU2NTFkZjFmZjExMDU3MTAyZmM5MjkwMjQyMzNiMjhiMTciLCJhZGRyZXNzZXMiOlsiMTAuMC4xMDAuMTUxOjg0NDMiXSwic2VjcmV0IjoiM2M5YmZjYTBlOTE3MjMyMWZkYTA2ZjVhYjBlMjM1MGIwNTI3ZjU4YWM2ZDBlNzM3MzE3YjYzNWEyZDFiMTQyMiIsImV4cGlyZXNfYXQiOiIyMDI0LTEwLTAxVDIyOjI1OjQwLjkzOTk5MDI3MyswMjowMCJ9
root@svl0pinc01:~# 
logout
root@spl0pinc01:/home/mario# incus shell svl0pinc02
root@svl0pinc02:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.0.100.152]: 
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6InN2bDBwaW5jMDIiLCJmaW5nZXJwcmludCI6IjExN2U5ZDg4MTdiZGI0ZWI0OTJhODc4ZjYxYzUzNGU2NTFkZjFmZjExMDU3MTAyZmM5MjkwMjQyMzNiMjhiMTciLCJhZGRyZXNzZXMiOlsiMTAuMC4xMDAuMTUxOjg0NDMiXSwic2VjcmV0IjoiM2M5YmZjYTBlOTE3MjMyMWZkYTA2ZjVhYjBlMjM1MGIwNTI3ZjU4YWM2ZDBlNzM3MzE3YjYzNWEyZDFiMTQyMiIsImV4cGlyZXNfYXQiOiIyMDI0LTEwLTAxVDIyOjI1OjQwLjkzOTk5MDI3MyswMjowMCJ9
Error: Certificate fingerprint mismatch between join token and cluster member "10.0.100.151:8443"

I tried with Ubuntu today and got the same result as you.
I noticed that the Ubuntu repo is still on 6.0.0. Back when I tried Incus for the first time on Debian, I also got 6.0.0 or some early release of 6.0.1 (I cannot remember exactly), and it worked without problems. I then upgraded to the stable branch from the Zabbly repo and wanted to add some more servers to the cluster, but ran into this problem. I suspect a bug or an incompatibility with a library, as my steps always follow the instructions in the documentation.
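For reference, this is how I compare what is actually installed and running on each node, so a version mismatch between the members can be ruled out:

root@svl0pinc01:~# incus version
root@svl0pinc01:~# apt policy incus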

I think I found the root cause of this issue.
Incus cannot handle it if a bridge network is already defined before the cluster is created.

Ah yeah, the issue is that the token includes all the IP addresses of the cluster you’re trying to join. If your local system already has one of those IP addresses, then your system will attempt to join through that instead of the existing Incus cluster, and this then fails because of the certificate mismatch.
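An easy way to check whether that is what’s happening is to decode the addresses embedded in the token and compare them with the addresses already configured on the machine that is trying to join. For example, with the token from your post (jq is optional, plain base64 -d also shows the JSON):

root@svl0pinc02:~# echo "<paste the join token here>" | base64 -d | jq .addresses
[
  "10.0.100.151:8443"
]
root@svl0pinc02:~# ip -br addr show

If 10.0.100.151 also shows up in the ip output, for example on a pre-existing bridge, the joining node ends up talking to itself instead of the bootstrap server, and the certificates then obviously don’t match.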