Hi,
What is the proper way of adding a host to the cluster?
Regards.
I tried the following, but I'm getting an error message. Can someone assist me with this?
```
indiana@tnode4:~$ sudo incus admin init
[sudo] password for indiana:
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=192.168.1.204]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6InRub2RlNCIsImZpbmdlcnByaW50IjoiYzUxNWUxNzcyNzM4MmQ0NTQ5Y2VhMGM5YWFhMmE3YjEzODdjMDczZjRmZjQyZThkMDBiYTQ0Mjg2ODIzMzlkMSIsImFkZHJlc3NlcyI6WyIxOTIuMTY4LjEuMjAxOjg0NDMiLCIxOTIuMTY4LjEuMjAyOjg0NDMiLCIxOTIuMTY4LjEuMjAzOjg0NDMiXSwic2VjcmV0IjoiZjNlMWY5MWQ1NjllZTg2NTY3NWIwOWE0OTczMDJkZjllZDM0OTA2ZDU0YjE1YWRlYWMwNGU1ODVhMTY1YWNjNCIsImV4cGlyZXNfYXQiOiIyMDI2LTAzLTI5VDE0OjU0OjE0LjMyMTY1Mzk4MyswMzowMCJ9
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "parent" property for network "macvlan":
Choose "lvm.thinpool_name" property for storage pool "lvmpool":
Choose "lvm.vg_name" property for storage pool "lvmpool":
Choose "source" property for storage pool "lvmpool":
Choose "lvm.vg_name" property for storage pool "remote-nvme":
Choose "source" property for storage pool "remote-nvme":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
storage_pools:
storage_volumes:
profiles:
projects:
certificates:
cluster_groups:
cluster:
server_name: tnode4
enabled: true
member_config:
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to update storage pool "lvmpool": Config key "lvm.thinpool_name" is cluster member specific
```
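As an aside, the join token in the transcript above is just base64-encoded JSON, so you can inspect the server name, cluster addresses and expiry before using it. A quick sketch (the token omits base64 padding, so the one-liner adds it back before decoding):

```shell
# decode an Incus join token to see what it contains
TOKEN="eyJzZXJ2ZXJfbmFtZSI6InRub2RlNCIsImZpbmdlcnByaW50IjoiYzUxNWUxNzcyNzM4MmQ0NTQ5Y2VhMGM5YWFhMmE3YjEzODdjMDczZjRmZjQyZThkMDBiYTQ0Mjg2ODIzMzlkMSIsImFkZHJlc3NlcyI6WyIxOTIuMTY4LjEuMjAxOjg0NDMiLCIxOTIuMTY4LjEuMjAyOjg0NDMiLCIxOTIuMTY4LjEuMjAzOjg0NDMiXSwic2VjcmV0IjoiZjNlMWY5MWQ1NjllZTg2NTY3NWIwOWE0OTczMDJkZjllZDM0OTA2ZDU0YjE1YWRlYWMwNGU1ODVhMTY1YWNjNCIsImV4cGlyZXNfYXQiOiIyMDI2LTAzLTI5VDE0OjU0OjE0LjMyMTY1Mzk4MyswMzowMCJ9"
# re-add the missing "=" padding, then decode the JSON payload
python3 -c "import base64,sys; t=sys.argv[1]; print(base64.b64decode(t + '=' * (-len(t) % 4)).decode())" "$TOKEN"
```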
You're supposed to provide answers to those questions if you want it to work.
You can usually find likely values by running `incus network show` or `incus storage show` with the `--target` flag against an existing server; that will show you the values the server is currently using for its server-specific configuration.
Thanks for the feedback, Graber.
I filled in the answers to the questions, but now I'm getting the following error.
```
Error: Failed to join cluster: Invalid server address "yes:8443": Couldn't resolve "yes"
```
Regards.
Sounds like you may have answered yes to:
```
What IP address or DNS name should be used to reach this server? [default=192.168.1.204]:
```
when it is clearly asking for an IP address or DNS name.
Yep, my mistake.
I entered the right values this time, but I'm still getting much the same error.
```
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to update storage pool "lvmpool": Config key "lvm.vg_name" is cluster member specific
```
Here are my storage values:
```
indiana@tnode1:~$ incus storage show lvmpool --target tnode1
config:
  lvm.thinpool_name: IncusThinPool
  lvm.vg_name: lvmpool
  source: lvmpool
  volatile.initial_source: /dev/nvme0n1
description: Local NVME Storage
name: lvmpool
driver: lvm
indiana@tnode1:~$ incus storage show remote-nvme --target tnode1
config:
  lvm.vg_name: shared_vg
  size: "4000783007744"
  source: shared_vg
  volatile.initial_source: shared_vg
description: ""
name: remote-nvme
driver: lvmcluster
```
Regards.
Here are the full answers to the questions.
```
indiana@tnode4:~$ sudo incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=192.168.1.204]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: eyJzZXJ2ZXJfbmFtZSI6InRub2RlNCIsImZpbmdlcnByaW50IjoiYzUxNWUxNzcyNzM4MmQ0NTQ5Y2VhMGM5YWFhMmE3YjEzODdjMDczZjRmZjQyZThkMDBiYTQ0Mjg2ODIzMzlkMSIsImFkZHJlc3NlcyI6WyIxOTIuMTY4LjEuMjAxOjg0NDMiLCIxOTIuMTY4LjEuMjAyOjg0NDMiLCIxOTIuMTY4LjEuMjAzOjg0NDMiXSwic2VjcmV0IjoiNDg4ZjRkYmE0NWFjNmNhZjRmYjU0YWY4NzUwZDJmZjhlMGYzMWFkNmUzNDZiMGM1NjM4ZmI3N2JkYTJiMzllMSIsImV4cGlyZXNfYXQiOiIyMDI2LTAzLTMwVDIzOjE5OjU5LjM3NzcyMTMyMiswMzowMCJ9
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "parent" property for network "macvlan": eth0
Choose "lvm.thinpool_name" property for storage pool "lvmpool": IncusThinPool
Choose "lvm.vg_name" property for storage pool "lvmpool": lvmpool
Choose "source" property for storage pool "lvmpool": lvmpool
Choose "lvm.vg_name" property for storage pool "remote-nvme": shared_vg
Choose "source" property for storage pool "remote-nvme": shared_vg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
```
That's weird. Can you check that the server that just failed to join doesn't have anything in its `incus storage list`?
If it doesn't, then `incus admin sql global "SELECT * FROM storage_pools_config"` on an existing cluster server may be useful.
Ohh, bingo.
I think this is the problem. It needs to be deleted, right?
```
indiana@tnode4:~$ incus storage list
+---------+--------+--------------------+---------+-------------+
|  NAME   | DRIVER |    DESCRIPTION     | USED BY |    STATE    |
+---------+--------+--------------------+---------+-------------+
| lvmpool | lvm    | Local NVME Storage | 0       | UNAVAILABLE |
+---------+--------+--------------------+---------+-------------+
```
Yeah, that will need to go away. Hopefully that's the only issue; otherwise you may need to wipe `/var/lib/incus` and reboot the system to get a properly clean state, with no remaining kernel state or anything else left behind.
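For reference, that cleanup might look like the sketch below. It only prints the commands unless you clear `DRYRUN`, since `rm -rf /var/lib/incus` destroys all local Incus state; the systemd unit names are assumptions, so check them on your system first.

```shell
# dry-run sketch of a full local reset; set DRYRUN="" to actually execute
DRYRUN=${DRYRUN:-echo}
$DRYRUN incus storage delete lvmpool            # drop the stale local pool
$DRYRUN sudo systemctl stop incus incus.socket  # unit names assumed; stop the daemon first
$DRYRUN sudo rm -rf /var/lib/incus              # destructive: wipes all local Incus state
$DRYRUN sudo reboot                             # clears any leftover kernel/LVM state
```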
Interesting. I deleted the storage, wiped out the `/var/lib/incus` directory, and rebooted the host, but I'm still getting an error.
```
Error: Failed to join cluster: Failed to initialize member: Failed to initialize storage pools and networks: Failed to create storage pool "lvmpool": The requested volume group "lvmpool" does not exist
```
I've now managed to add the node to the cluster. First I deleted the second storage pool (the remote LVM one), set `use_lvmlockd = 0`, and restarted the LVM monitor service. Then I ran the `incus cluster add` command, and now everything looks fine. If both the `lvm` and `lvmcluster` drivers are installed on the system, I suppose the shared one needs to be disabled or removed from the cluster first.
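The `use_lvmlockd` change might look like this sketch; the `lvm.conf` path and the `lvmlockd` service name are assumptions from a stock install, so adjust them for your distribution:

```shell
# point CONF at a copy of lvm.conf to try this safely first
CONF=${CONF:-/etc/lvm/lvm.conf}
# disable the shared-VG lock daemon so the plain lvm driver no longer conflicts
sudo sed -i.bak 's/use_lvmlockd = 1/use_lvmlockd = 0/' "$CONF"
grep use_lvmlockd "$CONF"
sudo systemctl restart lvmlockd    # service name varies by distro
```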
Regards.