Hello,
I am trying to set up a LINSTOR storage pool for a 3-node Incus cluster.
I followed the general information found on the LINSTOR Incus page, adapting the guide to my situation.
My Incus cluster is composed of 3 nodes (s31, s32, s33), which also form a LINSTOR controller-satellite cluster (COMBINED mode). In addition, I have configured three more LINSTOR satellites (s34, s35, s36) to provide the storage for an Incus shared pool.
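For completeness, the nodes were registered with the controller roughly as follows (a sketch from memory; the exact invocations may have differed slightly):

# the three Incus nodes run controller + satellite
linstor node create s31 192.168.1.231 --node-type combined   # likewise s32, s33
# the three storage-only nodes run just a satellite
linstor node create s34 192.168.1.234 --node-type satellite  # likewise s35, s36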
Here is my LINSTOR node list:
root@s31:~# linstor node list
╭────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞════════════════════════════════════════════════════════╡
┊ s31 ┊ COMBINED ┊ 192.168.1.231:3366 (PLAIN) ┊ Online ┊
┊ s32 ┊ COMBINED ┊ 192.168.1.232:3366 (PLAIN) ┊ Online ┊
┊ s33 ┊ COMBINED ┊ 192.168.1.233:3366 (PLAIN) ┊ Online ┊
┊ s34 ┊ SATELLITE ┊ 192.168.1.234:3366 (PLAIN) ┊ Online ┊
┊ s35 ┊ SATELLITE ┊ 192.168.1.235:3366 (PLAIN) ┊ Online ┊
┊ s36 ┊ SATELLITE ┊ 192.168.1.236:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────╯
and their extended information:
root@s31:~$ linstor node info
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Diskless ┊ LVM ┊ LVMThin ┊ ZFS/Thin ┊ File/Thin ┊ SPDK ┊ Remote SPDK ┊ Storage Spaces ┊ Storage Spaces/Thin ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ s31 ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ + ┊ - ┊ - ┊
┊ s32 ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ + ┊ - ┊ - ┊
┊ s33 ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ + ┊ - ┊ - ┊
┊ s34 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ - ┊
┊ s35 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ - ┊
┊ s36 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ - ┊ + ┊ - ┊ - ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────╮
┊ Node ┊ DRBD ┊ LUKS ┊ NVMe ┊ Cache ┊ BCache ┊ WriteCache ┊ Storage ┊
╞═══════════════════════════════════════════════════════════════════╡
┊ s31 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
┊ s32 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
┊ s33 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
┊ s34 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
┊ s35 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
┊ s36 ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊ + ┊
╰───────────────────────────────────────────────────────────────────╯
Then I created the storage pools on each satellite node that will contribute storage to the cluster, using LINSTOR's physical-storage command to automate the pool setup (a rough manual equivalent is sketched after the output below):
root@s31:~$ linstor physical-storage create-device-pool --storage-pool incuspool --pool-name tank zfs s34 /dev/disk/by-vdev/p440ar_d1 /dev/disk/by-vdev/p440ar_d2
SUCCESS:
(s34) ZPool 'tank' on device(s) [/dev/disk/by-vdev/p440ar_d1, /dev/disk/by-vdev/p440ar_d2] created.
SUCCESS:
Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
New storage pool 'incuspool' on node 's34' registered.
Details:
Storage pool 'incuspool' on node 's34' UUID is: 4842fcd3-55f6-429d-9010-49e770421c3d
SUCCESS:
(s34) Changes applied to storage pool 'incuspool' of node 's34'
SUCCESS:
Storage pool updated on 's34'
root@s31:~$ linstor physical-storage create-device-pool --storage-pool incuspool --pool-name tank zfs s35 /dev/disk/by-vdev/p440ar_d1 /dev/disk/by-vdev/p440ar_d2
SUCCESS:
(s35) ZPool 'tank' on device(s) [/dev/disk/by-vdev/p440ar_d1, /dev/disk/by-vdev/p440ar_d2] created.
SUCCESS:
Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
New storage pool 'incuspool' on node 's35' registered.
Details:
Storage pool 'incuspool' on node 's35' UUID is: 99845419-daae-4af3-b234-7b9378db3bf6
SUCCESS:
(s35) Changes applied to storage pool 'incuspool' of node 's35'
SUCCESS:
Storage pool updated on 's35'
root@s31:~$ linstor physical-storage create-device-pool --storage-pool incuspool --pool-name tank zfs s36 /dev/disk/by-vdev/p440ar_d1 /dev/disk/by-vdev/p440ar_d2
SUCCESS:
(s36) ZPool 'tank' on device(s) [/dev/disk/by-vdev/p440ar_d1, /dev/disk/by-vdev/p440ar_d2] created.
SUCCESS:
Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
New storage pool 'incuspool' on node 's36' registered.
Details:
Storage pool 'incuspool' on node 's36' UUID is: 41119fc0-4b2b-4ea2-a287-99d4167010b2
SUCCESS:
(s36) Changes applied to storage pool 'incuspool' of node 's36'
SUCCESS:
Storage pool updated on 's36'
root@s31:~$
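As I understand it, the above is roughly equivalent to creating the zpool on each satellite by hand and then registering it as a LINSTOR storage pool, e.g. for s34 (a sketch, not what I actually ran):

# on s34: create the zpool from the two devices
zpool create tank /dev/disk/by-vdev/p440ar_d1 /dev/disk/by-vdev/p440ar_d2
# on the controller: register the zpool as the LINSTOR storage pool "incuspool"
linstor storage-pool create zfs s34 incuspool tank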
I verified that all storage pools were created and report the expected size:
root@s31:~$ linstor storage-pool list
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ s31 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s31;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ s32 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s32;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ s33 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s33;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ s34 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s34;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ s35 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s35;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ s36 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ s36;DfltDisklessStorPool ┊
┊ incuspool ┊ s34 ┊ ZFS ┊ tank ┊ 7.12 TiB ┊ 7.25 TiB ┊ True ┊ Ok ┊ s34;incuspool ┊
┊ incuspool ┊ s35 ┊ ZFS ┊ tank ┊ 7.12 TiB ┊ 7.25 TiB ┊ True ┊ Ok ┊ s35;incuspool ┊
┊ incuspool ┊ s36 ┊ ZFS ┊ tank ┊ 7.12 TiB ┊ 7.25 TiB ┊ True ┊ Ok ┊ s36;incuspool ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Next, I configured the Incus nodes (s31, s32, s33) to communicate with the LINSTOR controller. Each Incus node is itself a controller, so I used localhost as the address for the connection between Incus and the controller (a sketch of the commands follows the config output below):
root@s31:~$ incus config show --target s31
config:
core.https_address: 0.0.0.0:8443
storage.linstor.controller_connection: http://localhost:3370
root@s31:~$ incus config show --target s32
config:
core.https_address: 0.0.0.0:8443
storage.linstor.controller_connection: http://localhost:3370
root@s31:~$ incus config show --target s33
config:
core.https_address: 0.0.0.0:8443
storage.linstor.controller_connection: http://localhost:3370
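For reference, the setting was applied per member roughly like this (a sketch; I am assuming --target is the right way to scope the key to a member, which is how I recall doing it):

incus config set storage.linstor.controller_connection=http://localhost:3370 --target s31
incus config set storage.linstor.controller_connection=http://localhost:3370 --target s32
incus config set storage.linstor.controller_connection=http://localhost:3370 --target s33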
I then created the storage pool in Incus, specifying the linstor.resource_group.storage_pool option so that LINSTOR uses the incuspool storage pool for the volumes:
binda@s31:~$ incus storage create sharedpool linstor --target s31
Storage pool sharedpool pending on member s31
binda@s31:~$ incus storage create sharedpool linstor --target s32
Storage pool sharedpool pending on member s32
binda@s31:~$ incus storage create sharedpool linstor --target s33
Storage pool sharedpool pending on member s33
binda@s31:~$ incus storage create sharedpool linstor linstor.resource_group.storage_pool=incuspool
Error: failed to notify peer 192.168.1.233:8443: 404 Not Found
binda@s31:~$
The final call failed, and the storage pool is now left in the Errored state:
root@s31:~$ incus storage show sharedpool
config:
drbd.auto_add_quorum_tiebreaker: "true"
drbd.on_no_quorum: suspend-io
linstor.resource_group.name: sharedpool
linstor.resource_group.place_count: "2"
linstor.resource_group.storage_pool: incuspool
linstor.volume.prefix: incus-volume-
volatile.pool.pristine: "true"
description: ""
name: sharedpool
driver: linstor
used_by: []
status: Errored
locations:
- s31
- s33
- s32
Incus has correctly created the resource group:
root@s31:~$ linstor resource-group list
╭──────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter ┊ VlmNrs ┊ Description ┊
╞══════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltRscGrp ┊ PlaceCount: 2 ┊ ┊ ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ sharedpool ┊ PlaceCount: 2 ┊ ┊ Resource group managed by Incus ┊
┊ ┊ StoragePool(s): incuspool ┊ ┊ ┊
╰──────────────────────────────────────────────────────────────────────────────────────╯
root@s31:~$
I have not been able to find any flaw in my configuration, and sadly this is as far as my knowledge goes.
Does anyone know what might be going wrong, or what I should check next?
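I am happy to post more diagnostics if that would help, for example:

linstor error-reports list
incus monitor --pretty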
Thanks,
Giovanni