Cluster docs misleading?

I'm trying to set up an LXD cluster.
According to the docs one should/can use /var/lib/lxd/server.crt.

First of all, this path is probably wrong for most people, because with the snap package it becomes /var/snap/lxd/common/lxd/server.crt.

And then this cert is only valid for localhost.
Which is by design, because LXD does not care about this.

Except it does care?

  Error: Failed to join cluster: Failed to setup cluster trust: failed to connect to target cluster node:
  Get "": x509: certificate is valid for, ::1, not

Am I holding it wrong, or should the docs recommend generating custom certs if I want to set up a cluster?
Or is this a bug in LXD, and it shouldn't be checking the hostname at all?

LXD doesn’t check the DNS records or addresses in the cert; instead, we just check that the certificate is a perfect match for the one the client has.

In general, when connecting to a cluster, cluster.crt is what should be expected from the LXD API, not server.crt. This will become even more important as we’re now working on decoupling the two, making them always different and having server.crt be used only for internal node-to-node traffic.

The reason you get this error is just how Go’s TLS stack works: when it doesn’t find an exact certificate match, we fall back to the system’s normal CA handling so that users can use a valid TLS certificate. That handler does expect a valid DNS/IP match, so you get that error. In your case, it means you didn’t supply the correct certificate.
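The “perfect match” behavior described above is certificate pinning: the remote cert is trusted only if it is byte-for-byte the one you already hold, and the DNS names/IP SANs inside it are never consulted. A minimal sketch of that idea (this is an illustration in the spirit of LXD’s check, not its actual code; in a real Go client such a check would plug into `tls.Config.VerifyPeerCertificate`):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/hex"
	"fmt"
	"math/big"
	"time"
)

// fingerprint hashes the raw DER bytes of a certificate; two certs match
// only if they are byte-for-byte identical.
func fingerprint(der []byte) string {
	sum := sha256.Sum256(der)
	return hex.EncodeToString(sum[:])
}

// verifyPinned trusts the presented certificate only if it matches the
// pinned fingerprint exactly. Note: no SAN/DNS/IP checks at all.
func verifyPinned(rawCerts [][]byte, expected string) error {
	if len(rawCerts) == 0 {
		return fmt.Errorf("no certificate presented")
	}
	if fingerprint(rawCerts[0]) != expected {
		return fmt.Errorf("certificate does not match pinned fingerprint")
	}
	return nil
}

func main() {
	// Generate a throwaway self-signed certificate to stand in for cluster.crt.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "server1"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)

	pinned := fingerprint(der)
	fmt.Println(verifyPinned([][]byte{der}, pinned) == nil)      // exact match: trusted
	fmt.Println(verifyPinned([][]byte{der}, "deadbeef") == nil)  // wrong cert: rejected
}
```

This is why a hostname mismatch error only ever shows up on the fallback path: if the pinned comparison had succeeded, names and addresses would never have been looked at.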


So I should configure cluster.key and cluster.crt on the bootstrap node,
and have a config like this:

  server_name: server1
  enabled: true
  member_config: []
  cluster_password: "1234"

And then use something like this on joining nodes:

  enabled: true
  server_name: "{{ansible_hostname}}"
  server_address: "{{ ansible_eth1.ipv4.address }}:8443"
  cluster_address: 192.168.XX.XX:8443
  cluster_certificate: |
    (same contents as cluster.crt on the bootstrap node)
    -----END CERTIFICATE-----
  cluster_password: "1234"

I put all that info in a Vagrant setup if someone wants to check it out.