LVM: lxc storage delete xxx removes the LVM volume group by default?

Is the default behavior for LXD to delete the volume group when a storage pool is deleted? I would assume LXD would remove the thin root volume and leave the VG alone.

The following two examples show that, regardless of whether LVM is managed by LXD's internal or external tools, removing the storage pool removes the volume group. What this means is that it is quite dangerous for LXD to share a VG with anything else.

snap set lxd lvm.external=false
systemctl reload snap.lxd.daemon
vgcreate vg /dev/md127
lvcreate -L 10G --chunksize 64k --thinpool lxd vg
vgscan (got vg)
lxc storage create default lvm lvm.vg.force_reuse=true source=vg lvm.thinpool_name=lxd
lxc storage delete default
vgscan (empty result)

snap set lxd lvm.external=true
systemctl reload snap.lxd.daemon
vgcreate vg /dev/md127
lvcreate -L 10G --chunksize 64k --thinpool lxd vg
vgscan (got vg)
lxc storage create default lvm lvm.vg.force_reuse=true source=vg lvm.thinpool_name=lxd
lxc storage delete default
vgscan (empty result)
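
If this has already happened to you, the volume group may be recoverable, assuming LXD only removed the VG metadata and left the underlying device intact (a hedged recovery sketch; any thin volumes that lived in the pool are gone for good):

pvs (check whether /dev/md127 still shows as a physical volume)
vgcreate vg /dev/md127
lvcreate -L 10G --chunksize 64k --thinpool lxd vg
vgscan (got vg)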

Hmm, yeah, that’s not ideal.

@tomp can you fix that one?

When we’re the ones creating the VG, we should delete it on storage pool delete, but when using an existing VG, we should just delete the thinpool.
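
A quick sketch of the two cases that rule distinguishes (hypothetical pool names; the same flags as used elsewhere in this thread):

Case 1: LXD creates the VG itself (e.g. a loop-backed pool), so deleting the pool removes the VG:

lxc storage create pool1 lvm size=20GB
lxc storage delete pool1 (VG removed; LXD created it)

Case 2: LXD reuses an existing VG, so under the proposed rule deleting the pool would only remove the thinpool:

vgcreate vg /dev/md127
lxc storage create pool2 lvm lvm.vg.force_reuse=true source=vg
lxc storage delete pool2 (thinpool removed, vg left in place)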

@Diegmontoya

There are already safety mechanisms in place to avoid deleting logical volumes in a volume group or thin pool that LXD doesn’t know about when it is deleting a storage pool. This logic applies irrespective of whether lvm.vg.force_reuse was used.

E.g.

An existing LVM volume group containing a logical volume, a thin pool called LXDThinPool, and a thin volume:

sudo lvs
  LV          VG  Attr       LSize  Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool lvm twi-aotz-- 16.55g                    0.00   1.58                            
  test        lvm -wi-a-----  1.00g                                                           
  test2       lvm Vwi-a-tz--  1.00g LXDThinPool        0.00         

Create a new LXD storage pool with thinpool enabled on the existing volume group:

lxc storage create lvm lvm lvm.vg.force_reuse=true source=lvm

Now delete the storage pool:

lxc storage delete lvm

Check the existing items haven’t been removed:

sudo lvs
  LV          VG  Attr       LSize  Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool lvm twi-aotz-- 16.55g                    0.00   1.58                            
  test        lvm -wi-a-----  1.00g                                                           
  test2       lvm Vwi-a-tz--  1.00g LXDThinPool        0.00                          
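
As an additional check (not part of the output above), the volume group itself should also still be listed:

sudo vgs lvm (the lvm VG is still present)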

The current logic can be seen here:

However, as the lvm.vg.force_reuse flag was added after the deletion protection logic, it was never considered whether that flag should influence how storage pool deletion works.

@stgraber so to confirm the specific new rules we should add:

When deleting an LVM storage pool:

  1. If lvm.vg.force_reuse=true and lvm.use_thinpool is enabled, then only delete the thinpool volume (if empty) and not the volume group.
  2. If lvm.vg.force_reuse=true and lvm.use_thinpool is disabled, then delete the volume group (if empty).

In the second case, I’d keep the VG around even if empty as we didn’t create it ourselves.


The lvm.vg.force_reuse setting also allows reuse of the LVM thinpool if it already exists, so would it make more sense to just skip removal of both the thinpool and the VG (as we can’t say for certain that LXD created the thinpool either)?

Yeah, probably easiest.
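
With that agreed, the expected behavior for a pool created with lvm.vg.force_reuse=true would look like this (a sketch reusing the reproduction steps from the top of the thread):

lxc storage create default lvm lvm.vg.force_reuse=true source=vg lvm.thinpool_name=lxd
lxc storage delete default
vgscan (vg still listed)
lvs vg (thinpool lxd still present)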


PR here: