OK so I’ve recreated your issue by removing the key manually:
lxd sql global 'delete from storage_pools_config where value="LXDThinPool"'
root@cluster-v3:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=cluster-v3]:
What IP address or DNS name should be used to reach this node? [default=10.109.89.60]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.109.89.20
Cluster fingerprint: 4f7cefc7b40d0d525d11cc6b05a30bcbb24ff3cd0564944fb270582fdaeffaae
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.vg_name" property for storage pool "local":
Choose "size" property for storage pool "local":
Choose "source" property for storage pool "local":
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to join cluster: Failed request to add member: Mismatching config for storage pool local: different values for keys: lvm.thinpool_name
I don’t know how it happened in your case; it was probably something to do with mixing older and newer versions of the code and trying to join an older node. However, I’ve not been able to re-create the issue with LXD 4.3.
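If you want to double-check which LVM keys the cluster currently holds for that pool, this is one way to look (using the pool name "local" from your output; adjust if yours differs):

lxc storage show local
lxd sql global 'select storage_pool_id, key, value from storage_pools_config where key like "lvm.%"'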
To fix it, here is what I did:
First wipe the new node you’re trying to add to the cluster so there is no existing LVM config or LXD config:
snap remove lxd
vgremove /dev/local
pvremove /dev/local
reboot
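After the reboot it’s worth a quick sanity check that the node really is clean before reinstalling; vgs and pvs should no longer show the old "local" volume group:

snap list
vgs
pvs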
Now let's fix the missing config key on your first node:
lxc shell cluster-v1
lxd sql global 'insert into storage_pools_config(storage_pool_id,key,value) VALUES(1,"lvm.thinpool_name","LXDThinPool")'
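Note that the insert above assumes the "local" pool has row ID 1 in the database. If you want to confirm the ID first, and then verify the key is present afterwards, something like this should do it:

lxd sql global 'select id, name from storage_pools'
lxd sql global 'select storage_pool_id, key, value from storage_pools_config where key="lvm.thinpool_name"'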
Now let's reinstall LXD on your new node:
lxc shell cluster-v3
snap install lxd
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=cluster-v3]:
What IP address or DNS name should be used to reach this node? [default=10.109.89.60]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.109.89.20
Cluster fingerprint: 4f7cefc7b40d0d525d11cc6b05a30bcbb24ff3cd0564944fb270582fdaeffaae
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "size" property for storage pool "local":
Choose "source" property for storage pool "local":
Choose "lvm.vg_name" property for storage pool "local":
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
This works for me.
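Once the join completes you can confirm both members are listed and online with:

lxc cluster list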
However, please note that this still leaves your first node apparently using a loopback file for LVM.
To confirm this, run pvs on your first node. If you see something like /dev/loop3 rather than /dev/md1, then your first node isn’t using /dev/md1.
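For illustration only, a loop-backed setup would look roughly like this (the device names and sizes below are made up; yours will differ):

# run on the first node
pvs
# hypothetical output if the pool is loop-backed:
#   PV         VG    Fmt  Attr PSize  PFree
#   /dev/loop3 local lvm2 a--  30.00g    0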
In that case, if possible, I would suggest blowing away node 1 (but make sure you have backups of any containers on it first) and starting again with a fresh cluster, as that also means you won’t have to do the fix above.
If that’s not possible, then you’d probably need to look at creating a new storage pool manually on the correct device, moving your containers across to the new pool, updating your profiles to use the new pool, and then removing the old one (rough sketch below). At that point you can add your 2nd node without needing the fix above either.
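Very roughly, and only as a sketch (the container name c1 and pool name local2 are placeholders, /dev/md1 is assumed to be the device you intended to use, and exact CLI syntax varies a little between LXD versions):

# create a new LVM pool on the real device (on a clustered node you need a
# per-member create with --target first, then a final create without it)
lxc storage create local2 lvm source=/dev/md1
# move each container to the new pool (it must be stopped; c1 is a placeholder)
lxc stop c1
lxc move c1 c1-tmp -s local2
lxc move c1-tmp c1
lxc start c1
# point the default profile's root disk at the new pool
# (newer LXD versions use "pool=local2" instead of "pool local2")
lxc profile device set default root pool local2
# once nothing references the old pool any more, remove it
lxc storage delete local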