Not understanding the "Creating an Incus cluster" tutorial in the IncusOS docs

Hello, I have a couple of Dell 3k hosts I'm using to play with IncusOS. I created the USB stick from the web configuration tool, and both systems are installed. I'm able to add them as remotes and create containers and VMs.

I'm struggling to follow the "Creating an Incus cluster" page in the IncusOS documentation.

I have two systems, incus1 (172.16.11.141) and incus2 (172.16.11.142), and I'm following along here:

```
incus config set server1 cluster.https_address=10.0.0.10:8443
incus cluster enable server1: server1

incus remote add my-cluster 10.0.0.10:8443
incus remote remove server1
```

I tried this:

```
➜ ~ incus config set incus1 cluster.https_address=172.16.11.141:8443
Error: Failed to fetch instance "incus1" in project "default": Instance not found
```

FWIW:

```
➜ ~ incus project list
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
|       NAME        | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES |      DESCRIPTION      | USED BY |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
| default (current) | YES    | YES      | YES             | YES             | YES      | YES           | Default Incus project | 4       |
+-------------------+--------+----------+-----------------+-----------------+----------+---------------+-----------------------+---------+
➜ ~ incus remote list
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
|       NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images           | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| incus1 (current) | https://172.16.11.141:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| incus2           | https://172.16.11.142:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local            | unix://                            | incus         | file access | NO     | YES    | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
```

I’m not understanding something pretty basic in that document.

Thanks!!

Alan

Oops, there's a typo in the guide; it should be:

```
incus config set incus1:cluster.https_address=172.16.11.141:8443
```

```
➜ ~ incus config set incus1:cluster.https_address=172.16.11.141:8443
Error: cannot set 'incus1:cluster.https_address' to '172.16.11.141:8443': unknown key
```

I feel like I'm missing some step:
```
➜  ~ incus config show
config:
  core.https_address: :8443
  oidc.claim: preferred_username
  oidc.client.id: "358632973244902523"
  oidc.issuer: https://sso.linuxcontainers.org
  oidc.scopes: openid,offline_access
  storage.backups_volume: local/backups
  storage.images_volume: local/images
➜  ~ incus config show incus1
Error: Failed to fetch instance "incus1" in project "default": Instance not found
```

Nah, just me being bad at that syntax :wink:

```
incus config set incus1: cluster.https_address=172.16.11.141:8443
```
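For anyone else who trips over this: the remote name (with its trailing colon) is a separate argument from the key, so there is a space between `incus1:` and `cluster.https_address`. A minimal sketch of the pattern:

```shell
# The remote prefix "incus1:" is its own argument; note the space before the key.
incus config set incus1: cluster.https_address=172.16.11.141:8443

# Read the value back on the same remote to confirm it took effect.
incus config get incus1: cluster.https_address
```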

Thank you for the help!! I have a cluster running now, on a single node. I used the same USB stick to set up all four nodes, ran into storage issues adding nodes 2-4 to the cluster, and realized that by using the same stick, I had applied default settings to nodes 2-4.

Is there a way to clean these nodes once they are remotes?

I tried this:

```
incus remote switch incus2

incus config unset storage.backups_volume
incus config unset storage.images_volume
incus storage volume delete local backups
incus storage volume delete local images
incus storage delete local
```
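A quick sanity check I'd run after a cleanup like that (assuming `incus2` is still the current remote), just to confirm the keys and pool are really gone:

```shell
# Server config should no longer list storage.backups_volume / storage.images_volume.
incus config show

# The storage pool list should now be empty on this remote.
incus storage list
```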

```
~ incus cluster join cluster:
What IP address or DNS name should be used to reach this server? [default=172.16.11.142]:
What member name should be used to identify this server in the cluster? [default=4c4c4544-0038-5110-8048-c7c04f315633]:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local": incus/local
Choose "zfs.pool_name" property for storage pool "local": incus/local
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool "local": Failed to run: zfs create -o mountpoint=legacy incus/local: exit status 1 (cannot create 'incus/local': no such pool 'incus')
```

```
mbp16➜ ~ incus storage list
+------+--------+-------------+---------+-------+
| NAME | DRIVER | DESCRIPTION | USED BY | STATE |
+------+--------+-------------+---------+-------+
mbp16➜ ~ incus remote list
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
|       NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| cluster          | https://172.16.11.141:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images           | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| incus2 (current) | https://172.16.11.142:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| incus3           | https://172.16.11.143:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| incus4           | https://172.16.11.144:8443         | incus         | oidc        | NO     | NO     | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local            | unix://                            | incus         | file access | NO     | YES    | NO     |
+------------------+------------------------------------+---------------+-------------+--------+--------+--------+
```

It might be faster to just create another USB key without the default settings applied... I have to hook up a console to each, flip over to no secure boot, PXE into an Alpine install to wipe the root, toggle secure boot back on, and install from the new key. lol. A few steps.

I really wish I had an iPXE secure boot pattern. Then I could build a secure-boot disk wipe, and that would save a bunch of steps.

Oh bother. I created a new stick without default settings applied and reimaged incus2. Added it as a remote and did:

```
incus cluster join cluster: incus2:
```

and then

```
mbp16➜ ~ incus cluster join cluster: incus2:
What IP address or DNS name should be used to reach this server? [default=172.16.11.142]:
What member name should be used to identify this server in the cluster? [default=4c4c4544-0038-5110-8048-c7c04f315633]: incus2
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local": incus/local
Choose "zfs.pool_name" property for storage pool "local": incus/local
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool "local": Failed to run: zfs create -o mountpoint=legacy incus/local: exit status 1 (cannot create 'incus/local': no such pool 'incus')
```

I tried `incus/local` as I'd read elsewhere in the forum. Did I get that right?

`local/incus`
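That matches the error above: `local` is the ZFS pool, so `incus/local` names a pool (`incus`) that doesn't exist, while `local/incus` is a dataset inside the existing pool. A sketch of the join with the prompts answered that way (the member name `incus2` is just the example from this thread):

```shell
incus cluster join cluster: incus2:
# What IP address or DNS name should be used to reach this server? [default=172.16.11.142]:
# What member name should be used to identify this server in the cluster? incus2
# All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
# Choose "source" property for storage pool "local": local/incus
# Choose "zfs.pool_name" property for storage pool "local": local/incus
```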

Thanks!!

I have a cluster!!