I have two physical nodes to be clustered.
Each node has two NICs: one on 192.168.254.0/24 for regular traffic and one on 192.168.2.0/24 for back-end storage traffic.
I created a bridge for each NIC, with the IPs attached to br0 and br1.
Installed lvm2, lvm2-lockd, sanlock, open-iscsi, bridge-utils
Modified lvm.conf and lvmlocal.conf
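For reference, the minimal lvmlockd-related settings this step involves look roughly like this (the host_id value is just an example; each host needs its own unique value, 1-2000 when using sanlock):

In /etc/lvm/lvm.conf (global section):
use_lvmlockd = 1
In /etc/lvm/lvmlocal.conf (local section):
host_id = 1    (unique per host)

The lvmlockd and sanlock services also need to be running on each node.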
Provisioned one iSCSI target with two 250 GB LUNs behind it.
Set the iSCSI nodes to start automatically; both hosts see the shared LUNs as both /dev/sdc and /dev/disk/by-id/xxx.
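In case it helps anyone reproducing this, the initiator-side steps with open-iscsi would look roughly like this (the portal address is a placeholder on the storage subnet):

iscsiadm -m discovery -t sendtargets -p 192.168.2.10
iscsiadm -m node --login
iscsiadm -m node -o update -n node.startup -v automatic    (so the sessions come back after a reboot)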
Created a shared VG on each LUN, fs1 and fs2 respectively.
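A sketch of what creating and starting those shared VGs typically looks like with lvmlockd/sanlock (device paths are placeholders, using the by-id names from above):

vgcreate --shared fs1 /dev/disk/by-id/xxx
vgcreate --shared fs2 /dev/disk/by-id/yyy
vgchange --lockstart    (run on every host to start the lockspaces for the shared VGs)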
vgs command shows:
Yes to both.
I am very confused by the documentation and the components involved (lvmlockd vs sanlock). They both have commands, but it's not clear how to check the status of either.
The vgremove command seems unhappy regardless of the state of the locks. For example:
root@tc49j:/# incus storage delete fspool2
Error: failed to notify peer 192.168.254.232:8443: Failed to delete the volume group for the lvm storage pool: Failed to run: vgremove -f fs2: exit status 5 (Global lock failed: check that global lockspace is started)
root@tc49j:/# incus storage delete fspool2
Error: failed to notify peer 192.168.254.232:8443: Failed to delete the volume group for the lvm storage pool: Failed to run: vgremove -f fs2: exit status 5 (Lockspace for "fs2" not stopped on other hosts)
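If I'm reading the lvmlockd man page correctly, those two messages point at different things: the first means the global lockspace isn't started yet (the vgchange --lockstart step above, and sanlock can take a little while to actually join), while the second means the fs2 lockspace has to be stopped on the other host before the VG can be removed:

vgchange --lockstop fs2    (on the other host, before running vgremove here)

But I'm not sure how much of that Incus expects to handle on its own.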
I also can't find a command that shows the status of each host's locks or lockspaces, at least not yet.
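The closest candidates I can see in the man pages are lvmlockctl for lvmlockd's side and sanlock client for sanlock's side, though I haven't worked out how to interpret their output yet:

lvmlockctl --info    (lock state as known to lvmlockd)
sanlock client status    (lockspaces and resources as known to sanlock)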
Two-node clusters are problematic at the Incus level, where you're going to need three servers for the database to be HA; otherwise losing the "wrong" server will take down the entire cluster.
But that shouldn’t really apply to the LVM setup.
Are the VGs functional in the current state if you try to create a volume or instance through Incus?
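For example, something along these lines (volume, instance, and image names are just placeholders):

incus storage volume create fspool1 testvol
incus launch images:debian/12 test --storage fspool1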