I’m setting up a test incus cluster (actually inside some incus VMs, but that doesn’t really matter here).
The VMs are Ubuntu 24.04, and I’m using incus from the zabbly stable-6.0 repo.
On node “drbd1” I set up the bootstrap node fine:
root@drbd1:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.65.5.185]: drbd1
Are you joining an existing cluster? (yes/no) [default=no]:
What member name should be used to identify this server in the cluster? [default=drbd1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: drbd1:8443
  images.auto_update_interval: "0"
networks: []
storage_pools:
- config: {}
  description: ""
  name: local
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    root:
      path: /
      pool: local
      type: disk
  name: default
projects: []
cluster:
  server_name: drbd1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_token: ""
  cluster_certificate_path: ""
I just want the simplest of storage - local only - as I plan to add the networked storage later. Note that it didn’t ask me anything about the local storage pool: it simply created a pool called “local” using the “dir” driver, with no attributes.
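For context, here’s my understanding (an assumption on my part, not something I’ve verified in the docs): with the “dir” driver, an empty source makes incus create and manage a directory under its own state dir. If you instead want the pool backed by a specific directory, the preseed would presumably look something like this, where /srv/incus-local is a hypothetical pre-created path:

```yaml
storage_pools:
- name: local
  driver: dir
  config:
    # Hypothetical path; with the dir driver it must already exist.
    # Omit "source" (or leave it empty) to let incus manage the directory itself.
    source: /srv/incus-local
```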
root@drbd1:~# incus storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- drbd1
Then I tried to join the second node, but that’s where storage became a problem.
root@drbd2:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.65.5.140]: drbd2
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6ImRyYmQyIiwiZmluZ2VycHJpbnQiOiJiNTliN2U2NWYwMDFjOGIzMjY3ZWNmMzcyNTU1NWJiMjA4ZTZmZTM4NWQxN2NjODcyYTYyNGUyM2Y4YmU0N2I3IiwiYWRkcmVzc2VzIjpbImRyYmQxOjg0NDMiXSwic2VjcmV0IjoiZWRiOTg1YzVkNmYyNDY2MDJmYzczNGNjY2E1ODY4NTQ1OTQ2NGI4NGM4MWU2MDExZWFlYTU5NmQ3ODY5OTZjMSIsImV4cGlyZXNfYXQiOiIyMDI0LTA1LTEwVDEzOjUyOjM4LjI0MzI4NzMwNVoifQ==
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
- Unlike the initial node, it didn’t ask whether I wanted to create a new local storage pool; it simply assumed that I did - which is fine, because I did. (Perhaps because the cluster has no networked storage pool?)
- What am I supposed to answer for the “source”? I wasn’t asked this question when creating the bootstrap node, and that pool has no “source” property.
So I guessed at “dir”:
Choose "source" property for storage pool "local": dir
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools: []
profiles: []
projects: []
cluster:
  server_name: drbd2
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: dir
    description: '"source" property for storage pool "local"'
  cluster_address: drbd1:8443
  cluster_certificate: << SNIP >>
  server_address: drbd2:8443
  cluster_token: ""
  cluster_certificate_path: ""
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool "local": Source path 'dir' doesn't exist
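In hindsight, the error message explains it: the answer is interpreted as a filesystem path, not a driver name. So a valid non-empty member_config entry would presumably look like this, with /srv/incus-local being a hypothetical directory created beforehand on drbd2:

```yaml
cluster:
  member_config:
  - entity: storage-pool
    name: local
    key: source
    # Hypothetical path; must already exist on the joining node.
    value: /srv/incus-local
```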
I tried again, this time leaving the “source” property blank, and it succeeded.
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools: []
profiles: []
projects: []
cluster:
  server_name: drbd2
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: ""
    description: '"source" property for storage pool "local"'
  cluster_address: drbd1:8443
  cluster_certificate: << SNIP >>
  server_address: drbd2:8443
  cluster_token: ""
  cluster_certificate_path: ""
root@drbd2:~# incus storage list
+-------+--------+-------------+---------+---------+
| NAME | DRIVER | DESCRIPTION | USED BY | STATE |
+-------+--------+-------------+---------+---------+
| local | dir | | 1 | CREATED |
+-------+--------+-------------+---------+---------+
root@drbd2:~# incus storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- drbd1
- drbd2
That looks fine, but I think the process was confusing. Why prompt for the “source” on the second node but not on the first? And it was not clear that the source, if specified, must be a pre-existing directory path, while leaving it empty is permitted.
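For anyone scripting this: once I knew the empty source was the right answer, the whole join can presumably be done non-interactively with “incus admin init --preseed” on the joining node, feeding it the preseed that was printed above (the certificate is per-cluster and elided here, matching the << SNIP >> in my transcript):

```yaml
# Run on drbd2 as: incus admin init --preseed < join.yaml
cluster:
  enabled: true
  server_name: drbd2
  server_address: drbd2:8443
  cluster_address: drbd1:8443
  cluster_certificate: << SNIP >>
  cluster_token: ""
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: ""   # empty: let incus create the pool directory itself
```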
Cheers,
Brian.