Storage in a cluster

I’m probably missing something, or overlooking a non-obvious nuance.
I have lvm-thin created on each machine and have allocated the necessary volume for it. On the cluster leader, my command, following the instructions, looks like this:

incus storage create h-pool lvm lvm.vg_name=vg0 lvm.thinpool_name=h4-pool --target=h04

I see that on each server the pool is in the pending state, and the last command is:

incus storage create h-pool lvm

I think there should be more to it, but I couldn’t find anything in the documentation. The last command fails with:

Error: A volume group already exists called "vg0"

What am I doing wrong?

Try replacing lvm.vg_name=vg0 with source=vg0. I would expect the source-less behavior to be Incus trying to create a new loop-file-backed pool called vg0, which would cause that error.
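A minimal sketch of that suggestion, keeping the member, VG, and thin-pool names from the original command (this would be run once per cluster member before the final untargeted create):

```shell
# Point Incus at the existing VG via source= instead of lvm.vg_name,
# so it does not try to create a new volume group called vg0.
incus storage create h-pool lvm source=vg0 lvm.thinpool_name=h4-pool --target=h04
```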

Error: Volume group “vg0” is not empty

That is what I get, unless I specify the thin pool directly, like this:

incus storage create h-pool lvm --target=h03 source=/dev/vg0/pool

in which case it fails with a vgcreate error instead.

If it is not a cluster, I just add this configuration to the profile and everything works without problems. But here the three targeted commands are accepted, and I did specify that it is a thin pool. Yet when the last command is executed, something goes wrong, even though I only specify the lvm driver.

When I joined the servers to the cluster leader, a VG called h4-pool appeared on each server, containing an LV called incusThinPool. Of course, no such VG existed on the servers before. As soon as I removed the LV incusThinPool, the VG disappeared.

Now h-pool is visible on all servers, but incus storage list says its state is ERRORED.

I don’t have an empty VG to give to Incus. My vg0 contains vg0/root and a logical volume for the LXC containers that I plan to move to Incus. I can’t specify lvm.vg.force_reuse=true when creating the pool in the cluster, and I don’t have an empty VG.

root@cluster01:~# pvs
  PV           VG  Fmt  Attr PSize   PFree   
  /dev/nvme0n1 vg0 lvm2 a--  <10.00g 1004.00m
root@cluster01:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree   
  vg0   1   2   0 wz--n- <10.00g 1004.00m
root@cluster01:~# lvs
  LV      VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  my-thin vg0 twi-a-tz-- 8.00g             0.00   10.79                           
  root    vg0 -wi-a----- 1.00g                                                    
root@cluster01:~# incus storage create foo lvm source=vg0 lvm.thinpool_name=my-thin --target cluster01
Storage pool foo pending on member cluster01
root@cluster01:~# incus storage create foo lvm source=vg0 lvm.thinpool_name=my-thin --target cluster02
Storage pool foo pending on member cluster02
root@cluster01:~# incus storage create foo lvm lvm.vg.force_reuse=true
Storage pool foo created
root@cluster01:~# pvs
  PV           VG  Fmt  Attr PSize   PFree   
  /dev/nvme0n1 vg0 lvm2 a--  <10.00g 1004.00m
root@cluster01:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree   
  vg0   1   2   0 wz--n- <10.00g 1004.00m
root@cluster01:~# lvs
  LV      VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  my-thin vg0 twi-a-tz-- 8.00g             0.00   10.79                           
  root    vg0 -wi-a----- 1.00g                                                    
root@cluster01:~# 
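To summarize the transcript above, the sequence that worked on this test cluster was:

```shell
# One pending create per member, pointing at the existing VG and thin pool:
incus storage create foo lvm source=vg0 lvm.thinpool_name=my-thin --target cluster01
incus storage create foo lvm source=vg0 lvm.thinpool_name=my-thin --target cluster02
# Final, untargeted create; lvm.vg.force_reuse=true permits the non-empty VG:
incus storage create foo lvm lvm.vg.force_reuse=true
```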

Thank you very much. I apologize for boring you in places with silly but non-obvious questions. :blush:

No worries, I agree that this particular key really should be moved to the server-specific keys.

During the day I tried to set up the storage, but even after reading the documentation I didn’t realize I needed to add the key shown in your screenshot. I tore down the cluster, and when rejoining it I was asked to specify the pool, which I did. incus storage show my-pool showed parameters that the cluster would not accept:
config:
  lvm.use_thinpool: "true"
  lvm.vg.force_reuse: "true"

Here’s another way of looking at it. Your solution is certainly simpler.