Add member to existing cluster with ceph storage pool

Hi - I’m trying to add a new member to an existing “cluster” (only a single machine at the moment) that’s using ceph storage. I’m getting stuck with:

Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to update storage pool "[existing ceph pool name]": Config key "source" is cluster member specific

Is it not possible to add a member to a cluster with a ceph storage pool if the pool already has containers in it? The whole point of ceph is that containers can be shared between hosts. Is there some magic I need in the init YAML? The init interview is here:

# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=ip.addre.ss]: new.server.name
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: <join token from existing server>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "shared_ceph": existing-ceph-pool
Choose "lvm.thinpool_name" property for storage pool "local": local_incus
Choose "lvm.vg_name" property for storage pool "local": local_incus
Choose "source" property for storage pool "local": /dev/md1
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes

The YAML config is here:

config: {}
networks: []
storage_pools: []
profiles: []
projects: []
cluster:
  server_name: new.server.name
  enabled: true
  member_config:
  - entity: storage-pool
    name: shared_ceph
    key: source
    value: existing-ceph-pool
    description: '"source" property for storage pool "existing-ceph-pool"'
  - entity: storage-pool
    name: local
    key: lvm.thinpool_name
    value: local_incus
    description: '"lvm.thinpool_name" property for storage pool "local"'
  - entity: storage-pool
    name: local
    key: lvm.vg_name
    value: local_incus
    description: '"lvm.vg_name" property for storage pool "local"'
  - entity: storage-pool
    name: local
    key: source
    value: /dev/md1
    description: '"source" property for storage pool "local"'
  cluster_address: existing.server.name:8443
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    <cert>
    -----END CERTIFICATE-----
  server_address: new.server.name:8443
  cluster_token: ""
  cluster_certificate_path: ""

Thanks!

I suppose as a workaround I can add each machine with its own separate pool. Then, once the cluster is complete, I can create a new empty pool that's available on all the members and move the containers into it. That should work…
But it will be a fair bit of bouncing everything around.
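For anyone weighing that option, the pool bounce would look roughly like this. This is only a sketch: the pool, member, and instance names are invented, and with a clustered ceph pool the usual pattern is to create a pending pool on each member and then run the command once more without `--target` to finalize it.

```shell
# Hypothetical names throughout. Create the pending pool on each member,
# then finalize it cluster-wide (the driver config goes in the final call).
incus storage create new_shared ceph --target member1
incus storage create new_shared ceph --target member2
incus storage create new_shared ceph ceph.osd.pool_name=new-ceph-pool

# Then move each instance across; it generally needs to be stopped first.
incus stop c1
incus move c1 --storage new_shared
incus start c1
```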

I wiped out /var/lib/incus, reinstalled incus, and was then able to finish adding the new host to the cluster by simply leaving the `source:` prompt blank in the interview. No need for bouncing between pools.
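For anyone landing here with the same error, the equivalent non-interactive join preseed would presumably look something like this (a sketch with placeholder names, not verified output): the member-specific `source` value for the shared ceph pool is left empty, which is what let the join succeed in the interview above.

```shell
# Hypothetical preseed: note the empty "value" for the ceph pool's
# member-specific "source" key. The join token stays elided as in the post.
cat <<'EOF' | incus admin init --preseed
cluster:
  server_name: new.server.name
  enabled: true
  cluster_address: existing.server.name:8443
  cluster_token: "<join token from existing server>"
  member_config:
  - entity: storage-pool
    name: shared_ceph
    key: source
    value: ""
EOF
```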