Cluster creation nit - confusing prompt for storage source when joining second node

I’m setting up a test incus cluster (actually inside some incus VMs, but that doesn’t really matter here).

The VMs are Ubuntu 24.04, and I’m using incus from the zabbly stable-6.0 repo.

Inside node “drbd1” I set up the bootstrap node fine:

root@drbd1:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.65.5.185]: drbd1
Are you joining an existing cluster? (yes/no) [default=no]:
What member name should be used to identify this server in the cluster? [default=drbd1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: drbd1:8443
  images.auto_update_interval: "0"
networks: []
storage_pools:
- config: {}
  description: ""
  name: local
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    root:
      path: /
      pool: local
      type: disk
  name: default
projects: []
cluster:
  server_name: drbd1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_token: ""
  cluster_certificate_path: ""

I just want the simplest of storage - local only - as I plan to add the networked storage later. Note that it didn’t ask me anything about the local storage pool; it just created a pool called “local” using the “dir” driver, with no attributes.

root@drbd1:~# incus storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- drbd1
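
(As an aside, on a standalone server you can apparently create a dir pool by hand with an explicit source if you want it on a specific path; the pool name and path below are made-up examples, and the directory has to exist already:)

mkdir -p /srv/incus-demo
incus storage create demo dir source=/srv/incus-demo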

Then I tried to join the second node, but that’s where storage became a problem.

root@drbd2:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.65.5.140]: drbd2
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6ImRyYmQyIiwiZmluZ2VycHJpbnQiOiJiNTliN2U2NWYwMDFjOGIzMjY3ZWNmMzcyNTU1NWJiMjA4ZTZmZTM4NWQxN2NjODcyYTYyNGUyM2Y4YmU0N2I3IiwiYWRkcmVzc2VzIjpbImRyYmQxOjg0NDMiXSwic2VjcmV0IjoiZWRiOTg1YzVkNmYyNDY2MDJmYzczNGNjY2E1ODY4NTQ1OTQ2NGI4NGM4MWU2MDExZWFlYTU5NmQ3ODY5OTZjMSIsImV4cGlyZXNfYXQiOiIyMDI0LTA1LTEwVDEzOjUyOjM4LjI0MzI4NzMwNVoifQ==
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local": 
  1. Unlike the initial node, it didn’t ask me whether I wanted to create a new local storage pool or not; it simply assumed that I did - which is fine, because I did. (Perhaps because there was no networked storage on the cluster?)
  2. What am I supposed to answer for the “source”? I wasn’t asked this question when creating the bootstrap node, and that pool doesn’t have a “source” property.

So I guessed at “dir”:

Choose "source" property for storage pool "local": dir
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools: []
profiles: []
projects: []
cluster:
  server_name: drbd2
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: dir
    description: '"source" property for storage pool "local"'
  cluster_address: drbd1:8443
  cluster_certificate: << SNIP >>
  server_address: drbd2:8443
  cluster_token: ""
  cluster_certificate_path: ""

Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool "local": Source path 'dir' doesn't exist
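
(With hindsight the error makes sense: a non-empty source has to be a directory that already exists on the joining node. So presumably something like this would have worked, had I actually wanted a custom path:)

root@drbd2:~# mkdir -p /srv/incus-local
root@drbd2:~# incus admin init
...
Choose "source" property for storage pool "local": /srv/incus-local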

I tried again, this time leaving the “source” property blank, and it succeeded.

Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools: []
profiles: []
projects: []
cluster:
  server_name: drbd2
  enabled: true
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: ""
    description: '"source" property for storage pool "local"'
  cluster_address: drbd1:8443
  cluster_certificate: << SNIP >>
  server_address: drbd2:8443
  cluster_token: ""
  cluster_certificate_path: ""

root@drbd2:~# incus storage list
+-------+--------+-------------+---------+---------+
| NAME  | DRIVER | DESCRIPTION | USED BY |  STATE  |
+-------+--------+-------------+---------+---------+
| local | dir    |             | 1       | CREATED |
+-------+--------+-------------+---------+---------+
root@drbd2:~# incus storage show local
config: {}
description: ""
name: local
driver: dir
used_by:
- /1.0/profiles/default
status: Created
locations:
- drbd1
- drbd2

That looks fine, but I think the process was confusing. Why prompt for the “source” on the second node, but not on the first? And it wasn’t clear that the source (if specified) must be a pre-existing directory, or that it’s fine to leave it empty.

Cheers,

Brian.

Clustering is always a bit tricky when it comes to storage and networking.

Basically, on the initial server you weren’t prompted about anything because your only option at the time was dir, and for some reason we don’t ask whether you want to use an alternative source path when creating a dir storage pool. incus admin init doesn’t ask about every possible option, as that would make it far too complex, so it looks like this is one of those cases where we decided not to prompt.

Incus then internally created your local storage pool, which does mean assigning it a source path automatically (most likely /var/lib/incus/storage-pools/local).
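
If you want to double-check what it picked, the member-specific view of the pool should show it; I believe --target is what gives you the per-member config:

# member-specific view, which should include per-member keys such as source
incus storage show local --target drbd1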

When joining a new server, it must have the same network and storage setup as the existing server(s). That’s why you’re not prompted about what you want created: you simply don’t have a choice.

However, Incus can’t tell whether that existing local storage pool was created through incus admin init or later on by an incus storage create, so it can’t tell whether the source property is something the user may want to customize. It therefore asks you anyway.

In your case, the correct response would have been to leave it empty, i.e. just hit enter when prompted.
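
For completeness, the same join can be done non-interactively with a preseed much like the one you printed, leaving the source value empty. Something along these lines should work (the token is a placeholder generated with incus cluster add drbd2, and I believe the token on its own is enough as it embeds the cluster address and certificate fingerprint):

cat <<'EOF' | incus admin init --preseed
cluster:
  enabled: true
  server_name: drbd2
  server_address: drbd2:8443
  cluster_address: drbd1:8443
  cluster_token: "<join token>"
  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: ""
EOF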

Thanks. Maybe the prompt could add a hint, e.g.:

Choose "source" property for storage pool "local" [leave blank for default]:

The problem is that Incus doesn’t know that blank is a valid value.
What it does know is which config keys are considered system-specific rather than cluster-global.

For storage, source on a dir pool can be empty, but source on a ceph pool can’t be. Similarly, some of the network keys like bridge.external_interfaces can be left empty, while others like parent can’t.
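
As a rough sketch (the pool/network names and values here are made-up examples), the member_config section of a join preseed can therefore end up mixing keys that may be blank with keys that must be filled in:

cluster:
  member_config:
  # dir pool: source may be left empty to get the default path
  - entity: storage-pool
    name: local
    key: source
    value: ""
  # ceph pool: source can't be empty (it points at an existing OSD pool)
  - entity: storage-pool
    name: remote
    key: source
    value: my-osd-pool
  # physical/bridged network: parent must name a real host interface
  - entity: network
    name: UPLINK
    key: parent
    value: eth1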

It’s possible that at some point we’ll get to rework all of the config key tracking so we can actually tell those kinds of subtleties apart and provide more details on join, but that will be a while :)

We’re actually taking some initial steps in that direction now by making all of our config key documentation part of the code, so it’s automatically parsable and exposed through our metadata endpoint. The storage/network keys aren’t done yet, as they’re some of the more complex ones, specifically due to the varying behavior between drivers and between cluster and standalone environments.
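
(If anyone wants to poke at what has been converted so far, it can be pulled straight from the API; I believe the path is /1.0/metadata/configuration, though the shape may change as more keys are moved over:)

incus query /1.0/metadata/configuration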