Understanding the Incus lvmcluster driver

Understanding lvmcluster storage driver functionality.

Assuming we have:

  1. 3 servers: A, B, C
  2. a single 3TB iSCSI/FC LUN from some storage server/box
  3. the target is to create 6 VMs (spread across the 3 servers)

So can we:

  1. share one big 3TB iSCSI/FC LUN across the 3 servers and split it via 3 VGs, one VG per server, or
  2. do we have to split the 3TB LUN into 3 x 1TB shared LUNs, one for each server?

In other words, is the sharing done at the physical block device level or at the VG level?

Nope, what you do is make a single large PV on that 3TB block device.
You then create a shared VG on it with `vgcreate --shared vg0 /dev/XYZ` on the first server.
You then confirm that it's visible on all the others by running `vgchange --lockstart` followed by `vgs`.
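
As a rough sketch of that sequence (the device name `/dev/sdb` is just an assumption for the shared LUN, adjust to whatever it shows up as on your servers):

```
# On the first server: create the PV and the shared VG on the 3TB LUN
pvcreate /dev/sdb
vgcreate --shared vg0 /dev/sdb

# On every server: start the lockspace for the VG and confirm it's visible
vgchange --lockstart vg0
vgs
```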

This assumes that you have `use_lvmlockd` enabled in lvm.conf, that you have a unique `host_id` set in lvmlocal.conf on each server, and that you have lvmlockd and sanlock installed and running (the basic requirements for LVM on shared block storage).
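
Roughly, those prerequisites would look something like this (the `host_id` values are only examples and must be unique per server; package/service names may vary by distribution):

```
# /etc/lvm/lvm.conf on every server
global {
    use_lvmlockd = 1
}

# /etc/lvm/lvmlocal.conf - must differ on each server
local {
    host_id = 1    # e.g. 2 on server B, 3 on server C
}

# Make sure the lock daemons are running on every server
systemctl enable --now sanlock lvmlockd
```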

Once that’s all done, you create the lvmcluster pool in your cluster, pointing it at your vg0 VG, and you’ll then have that single VG shared across all 3 servers. You’ll see the same LV list on all servers in `lvs`, and this naturally allows for fast live-migration and the like, as all servers can access the exact same data.
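
For example, creating the pool could look roughly like this (the pool name `shared` and the member names are made up; as usual for clustered pools you define it per member first, then create it cluster-wide):

```
# Define the pool on each cluster member, pointing at the shared VG
incus storage create shared lvmcluster source=vg0 --target serverA
incus storage create shared lvmcluster source=vg0 --target serverB
incus storage create shared lvmcluster source=vg0 --target serverC

# Finalize the pool across the whole cluster
incus storage create shared lvmcluster
```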

Many thanks for your reply. So this means we need to mandatorily split the 3TB LUN in the storage box itself to run 2 VMs on each server. That would also mean the 2 VMs have to be live-migrated simultaneously if they share the same VG. So the guideline would be one block device to one PV to one VG to one VM for independent migration of a VM using the lvmcluster driver. Is this understanding correct?

I don’t understand what you’re trying to do.

What I would do in your case is:

  • Single PV of 3TB for the whole cluster
  • Single VG using that single PV for the whole cluster
  • One LV per VM on the VG

All servers will see all 6 LVs because they’re seeing the exact same VG and PV.
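
With that layout, creating the VMs is just a matter of picking the shared pool and a target member, for instance (image, VM and member names below are purely illustrative):

```
# Create the 6 VMs on the shared pool, two per server
incus launch images:debian/12 vm1 --vm --storage shared --target serverA
incus launch images:debian/12 vm2 --vm --storage shared --target serverA
incus launch images:debian/12 vm3 --vm --storage shared --target serverB
# ...and so on for vm4-vm6

# Each VM gets its own LV in vg0; the same list is visible from any server
lvs vg0
```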

Got it now. Just one more query: how would multiple LVs (say 2) mapped to a single VM be handled? Would live migration still work?
Also, must both LVs be from the same VG for live migration to work? Multiple VGs could come from different performance profiles of the storage system (HDD, SSD, etc.).

That’s actually been on my list to check. I suspect we currently disallow live-migration for any VM that has additional disks attached, regardless of what pool they’re from or whether the storage is shared.

In theory it should certainly be possible to handle such live migrations, and it wouldn’t matter that you have two different VGs so long as both of them are clustered. So you could totally have an HDD VG and an SSD VG.
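
For reference, that scenario would look roughly like this (pool and volume names are made up, and per the above the final move may currently be refused while the extra disk is attached):

```
# VMs need stateful migration enabled for a live move
incus config set vm1 migration.stateful=true

# Attach an extra block volume from a second clustered pool (e.g. SSD-backed)
incus storage volume create ssdpool data1 --type=block size=50GiB
incus storage volume attach ssdpool data1 vm1

# Live-migrate the running VM to another cluster member
incus move vm1 --target serverB
```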

But the fact that we also support live-migrating across non-shared storage is what makes the handling of additional disks a bit tricky, as in that situation we need to check for and handle cases like the extra disk being shared with multiple VMs, the target server potentially not having enough space, …