I’m setting up an lvmcluster but I’m getting this error message.
Error: Failed to create storage pool “remote”: Failed to run: vgchange --addtag incus_pool incusvg: exit status 5 (Cannot access VG incusvg due to failed lock.)
I have already changed lvm.conf and lvmlocal.conf, and here is my VG information. Can someone assist me? Regards.
```
Reading VG incusvg without a lock.
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 incusvg lvm2 a-- 931.51g 931.26g
```
```
indiana@tnode1:~$ sudo vgchange --lockstart incusvg
  VG incusvg starting sanlock lockspace
  Starting locking.  Waiting for sanlock may take a few seconds to 3 min...
indiana@tnode1:~$ sudo vgs
  Reading VG incusvg without a lock.
  VG      #PV #LV #SN Attr   VSize   VFree
  incusvg   1   0   0 wz--ns 931.51g 931.26g
indiana@tnode1:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=192.168.1.201]:
Are you joining an existing cluster? (yes/no) [default=no]:
What member name should be used to identify this server in the cluster? [default=tnode1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no
Do you want to configure a new remote storage pool? (yes/no) [default=no]: yes
Name of the storage backend to use (truenas, lvmcluster) [default=truenas]: lvmcluster
Create a new LVMCLUSTER pool? (yes/no) [default=yes]: no
Name of the existing LVMCLUSTER pool or dataset: incusvg
Would you like to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to create storage pool "remote": Failed to run: vgchange --addtag incus_pool incusvg: exit status 5 (Cannot access VG incusvg due to failed lock.)
```
I have some problems that I’m not sure are related to lvmcluster or not. One of them is moving an instance from one node to another. For example:
incus move webserver --target=tnode2
Error: Migration operation failure: Instance move to destination failed on source: Failed migration on source: Error from migration control target: Failed creating instance on target: Volume exists in database but not on storage
What can be wrong?
Which command format is correct? I also tried it like this:
indiana@tnode2:~$ incus move webserver tnode2:
Error: The remote “tnode2” doesn’t exist
The command `incus move webserver --target tnode2` does not show the same output; it fails differently. I tested on other hosts, but the same error occurs.
indiana@tnode2:~$ incus move webserver --target tnode2
Error: Migration operation failure: Instance move to destination failed on source: Failed migration on source: Error from migration control target: Failed creating instance on target: Volume exists in database but not on storage
Yeah, so there’s something very wrong with your setup.
Clustered LVM requires all systems to be connected to the same backing storage, so all servers see the same PVs, VGs and LVs. Then clustered LVM handles locking across systems to make this all work safely.
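One quick way to check this (a sketch; the hostnames `tnode1`/`tnode2` are taken from your output, adjust as needed): if the storage is truly shared, every node must report the same PV UUID for the device backing the VG.

```shell
# On each node, print the UUID of the physical volume backing incusvg.
# With genuinely shared storage, every node reports the SAME UUID.
sudo pvs --noheadings -o pv_uuid --select vg_name=incusvg

# Or compare across all members in one go (hostnames are examples):
for host in tnode1 tnode2; do
    ssh "$host" sudo pvs --noheadings -o pv_uuid --select vg_name=incusvg
done | sort -u | wc -l
# 1  -> all nodes see the same physical volume (shared storage)
# >1 -> each node has its own local disk, which lvmcluster cannot use
```

If each node shows a different UUID, they're separate disks that merely have the same VG name, and no amount of lock configuration will make lvmcluster work.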
Your environment appears to have distinct storage on each server rather than shared storage; that can’t work with lvmcluster.
The hardware setup you have is likely better suited for Linstor or Ceph.