Incus clustering with clusteredLVM

So the idea of the clustered LVM (lvmcluster) storage driver is something that I would like to play with, as it offers NVMe-oF or NVMe/TCP… but I see that when setting up Incus it asks the following:

Do you want to configure a new remote storage pool? (yes/no) [default=no]: yes
Create a new LVMCLUSTER pool? (yes/no) [default=yes]: yes
Name of the shared LVM volume group:

Now I know that my U.2 drives can use NVMe-oF and I can transmit every block device, and from what I understand about LVM, it doesn't create block devices. I don't understand how I should be sharing a volume group as the setup asks for.

Do I create the volume group on each node I'm sharing with? (i.e. I have dell-1 and dell-2; say that dell-1 is the target (storage server) and dell-2 is the initiator, so it will see all the NVMe drives that come through the network.)

Do I create the volume group first and then share that? If so, where do I find the path for this?

I'm using a mixture of Debian 10 nodes (5 of them, which would be the initiators) because I use the MOFED driver with the ConnectX-3 Pro cards, and also a Debian 12 node with a ConnectX-6 DX card.

  1. Make sure you can see the exact same disk on all your servers
  2. Set up lvmlockd and sanlock
  3. Create a shared VG using your common disk as PV
  4. Tell Incus to use that VG
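The four steps above could be sketched roughly as below; the NQN, IP address, device path, pool name, and VG name are placeholder examples, not values from this thread:

```shell
# 1. On each initiator, connect to the disk exported over NVMe-oF (RDMA here),
#    so the exact same disk is visible on all servers.
modprobe nvme-rdma
nvme connect -t rdma -a 10.0.0.1 -s 4420 -n nqn.2024-01.local:shared-nvme

# 2. Set up lvmlockd and sanlock on every node (Debian package names).
apt install lvm2-lockd sanlock
systemctl enable --now sanlock lvmlockd

# 3. On ONE node, create a shared VG using the common disk as PV,
#    then start its lockspace on the remaining nodes.
vgcreate --shared vg0 /dev/disk/by-id/nvme-YOUR-DEVICE
vgchange --lockstart vg0    # run this on the other nodes

# 4. Tell Incus to use that VG via the lvmcluster driver.
incus storage create remote lvmcluster source=vg0
```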

I understand, but where is the VG made? Would it be on each server locally?

I don't believe I can use NVMe-oF with a VG, right? I'm trying to understand exactly how to create a shared VG while leveraging NVMe-oF or NVMe/TCP.
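For context: NVMe-oF exports whole block devices (namespaces), and the shared VG is then layered on top of the imported device on the initiators. A hedged sketch of the target side on dell-1, using the in-kernel nvmet target over RDMA; the subsystem NQN, IP address, and device path are example values:

```shell
# On dell-1 (the target): export /dev/nvme0n1 via the kernel NVMe target.
modprobe nvmet
modprobe nvmet-rdma
cd /sys/kernel/config/nvmet

# Create a subsystem; allowing any host is fine for a lab, tighten it later.
mkdir subsystems/nqn.2024-01.local:shared-nvme
echo 1 > subsystems/nqn.2024-01.local:shared-nvme/attr_allow_any_host

# Attach the raw drive as namespace 1.
mkdir subsystems/nqn.2024-01.local:shared-nvme/namespaces/1
echo -n /dev/nvme0n1 > subsystems/nqn.2024-01.local:shared-nvme/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-01.local:shared-nvme/namespaces/1/enable

# Expose the subsystem on an RDMA port.
mkdir ports/1
echo 10.0.0.1 > ports/1/addr_traddr
echo rdma    > ports/1/addr_trtype
echo 4420    > ports/1/addr_trsvcid
echo ipv4    > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-01.local:shared-nvme ports/1/subsystems/
```

Once every initiator has connected to this namespace, they all see the same block device, and that is the device the shared VG is created on.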

vgcreate --shared vg0 /dev/disk/by-id/your-device

Then, assuming that /dev/disk/by-id/your-device exists on all your machines and the rest of lvmlockd/sanlock was configured correctly, you will see the same VG in the vgs output on every node.
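A quick way to verify this on each node (a sketch; assumes lvmlockd and sanlock are already running):

```shell
vgs                 # the shared VG should be listed on every node
lvmlockctl --info   # shows the lockspaces and locks lvmlockd is managing
```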

Oh OK, thank you so much for that.

I have one last question for this…

for the part that says

  • Set a unique (within your cluster) host_id value in /etc/lvm/lvmlocal.conf

Is that instructing me to set a unique host_id for every host (i.e. host-1 would have the value 1, host-2 would have the value 2, etc.)?

or

Should there be one id shared by the whole cluster?

Hi

See below

/etc/lvm/lvm.conf
global {
    use_lvmlockd = 1
    lvmlockd_lock_retries = 3
    system_id_source = "lvmlocal"
}

/etc/lvm/lvmlocal.conf must be different on each server:

server1
local {
    system_id = "server_a"
    host_id = 3
}

server2
local {
    system_id = "server_b"
    host_id = 8
}

Hope it helps


Thank you @paulo_bruck