Create LVM pool from existing volume group

p10e4l ~ # wipefs /dev/dm-0 
DEVICE OFFSET TYPE UUID                                 LABEL
dm-0   0x438  ext4 0c58e64a-e4a6-4e1f-b4d2-342976be1524 
p10e4l ~ # incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=192.168.10.10]: 
Are you joining an existing cluster? (yes/no) [default=no]: 
What member name should be used to identify this server in the cluster? [default=p10e4l.luketic]: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: 
Name of the storage backend to use (dir, lvm, btrfs) [default=btrfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]: no
Name of the existing LVM pool or dataset: vgraid0
Do you want to configure a new remote storage pool? (yes/no) [default=no]: 
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
Error: Failed to create storage pool "local": Volume group "vgraid0" is not empty
p10e4l ~ # pvs
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/sda1  vgraid0 lvm2 a--  931,51g 46,57g
  /dev/sdb1  vgraid0 lvm2 a--  931,51g 46,57g
p10e4l ~ # lvs
  LV    VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0 vgraid0 -wi-a----- <1,73t                                                    
p10e4l ~ # lvscan 
  ACTIVE            '/dev/vgraid0/lvol0' [<1,73 TiB] inherit

How do I empty the VG so that Incus accepts it?

Hi,
Have you tried the vgremove command?
Regards.
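If you only want to empty the volume group rather than delete and recreate it, removing just the logical volume should be enough. A minimal sketch, wrapped in a helper function for clarity (the VG/LV names are the ones from the output above; lvremove is destructive, so inspect the LV first):

```shell
# empty_vg VG LV — remove a single logical volume so the VG becomes empty
# and can be handed to Incus. The VG itself and its PVs are left intact.
empty_vg() {
  vg="$1"; lv="$2"
  lvs "$vg" || return 1      # inspect what the VG contains before destroying anything
  lvremove -y "$vg/$lv"      # remove only the LV (-y skips the confirmation prompt)
  vgs "$vg"                  # VFree should now equal VSize, i.e. the VG is empty
}
# usage: empty_vg vgraid0 lvol0
```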

I was afraid that would destroy the RAID0 setup, but I went ahead and removed the volume group anyway:

p10e4l ~ # vgremove vgraid0 
Do you really want to remove volume group "vgraid0" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume vgraid0/lvol0? [y/n]: y
  Logical volume "lvol0" successfully removed.
  Volume group "vgraid0" successfully removed
p10e4l ~ # vgcreate raid0 /dev/sda1 /dev/sdb1 
  Volume group "raid0" successfully created
p10e4l ~ # vgs
  VG    #PV #LV #SN Attr   VSize  VFree 
  raid0   2   0   0 wz--n- <1,82t <1,82t
p10e4l ~ # pvs
  PV         VG    Fmt  Attr PSize   PFree  
  /dev/sda1  raid0 lvm2 a--  931,51g 931,51g
  /dev/sdb1  raid0 lvm2 a--  931,51g 931,51g
p10e4l ~ # lvs
p10e4l ~ # incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=192.168.10.10]: 
Are you joining an existing cluster? (yes/no) [default=no]: 
What member name should be used to identify this server in the cluster? [default=p10e4l.luketic]: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: 
Name of the storage backend to use (dir, lvm, btrfs) [default=btrfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]: no
Name of the existing LVM pool or dataset: raid0
Do you want to configure a new remote storage pool? (yes/no) [default=no]: 
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
p10e4l ~ # incus storage list
+-------+--------+-------------+---------+---------+
| NAME  | DRIVER | DESCRIPTION | USED BY |  STATE  |
+-------+--------+-------------+---------+---------+
| local | lvm    |             | 1       | CREATED |
+-------+--------+-------------+---------+---------+

For future reference, you can make Incus use an existing, non-empty VG via the config key lvm.vg.force_reuse=true, e.g.

incus storage create local lvm lvm.vg.force_reuse=true source=vgraid0

incus admin init is just a shortcut for creating the storage pool, network and default profile.
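It can also be driven non-interactively with a preseed file. A rough sketch matching the answers above (field names are from memory of the YAML that incus admin init prints when you answer yes to the final preseed question, so verify against that output before relying on it):

```yaml
config:
  core.https_address: 192.168.10.10:8443
cluster:
  server_name: p10e4l.luketic
  enabled: true
storage_pools:
- name: local
  driver: lvm
  config:
    source: raid0
profiles:
- name: default
  devices:
    root:
      path: /
      pool: local
      type: disk
```

Feed it back with something like `incus admin init --preseed < preseed.yaml` to reproduce the same setup on another machine.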
