Nope, what you do is you make a single large PV on that 3TB block device.
You then create a shared VG on it with `vgcreate --shared vg0 /dev/XYZ` on the first server.
You then confirm that it's visible on all the others by running `vgchange --lockstart` followed by `vgs`.
This assumes that you have `use_lvmlockd` enabled in `lvm.conf`, that you have a unique `host_id` set in `lvmlocal.conf` on each server, and that you have `lvmlockd` and `sanlock` installed and running (the basic requirements for LVM on a shared block device).
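A rough sketch of those prerequisites as commands, assuming Debian/Ubuntu package names, `/dev/XYZ` as the shared device, and a sanlock `host_id` of 1 (the `host_id` must be different on every server):

```shell
# On every server: install the lock manager and LVM's locking daemon
# (package names are distro-specific; these are the Debian/Ubuntu ones)
apt install lvm2-lockd sanlock

# On every server: enable lvmlockd in /etc/lvm/lvm.conf:
#   global { use_lvmlockd = 1 }
# and give each server a unique host_id in /etc/lvm/lvmlocal.conf:
#   local { host_id = 1 }    # 2, 3, ... on the other servers
systemctl enable --now sanlock lvmlockd

# On the first server only: one big PV, one shared VG
pvcreate /dev/XYZ
vgcreate --shared vg0 /dev/XYZ

# On all the other servers: start the lockspace and check visibility
vgchange --lockstart
vgs
```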
Once that’s all done, you create the lvmclustered pool in your cluster, pointing it at your `vg0` VG, and that single VG will then be shared across all 3 servers. You’ll see the same LV list on every server in `lvs`, which naturally allows for fast live-migration and the like, as all servers can access the exact same data.
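For the pool itself, assuming an Incus cluster with members named `server1` through `server3` (the `incus` command and the member names here are assumptions; adapt them to your setup), that would look roughly like:

```shell
# Clustered pools are defined once per member, then instantiated;
# every member points at the same shared VG
incus storage create pool1 lvmcluster source=vg0 --target server1
incus storage create pool1 lvmcluster source=vg0 --target server2
incus storage create pool1 lvmcluster source=vg0 --target server3
incus storage create pool1 lvmcluster
```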
Many thanks for your reply. So this means we must split the 3 TB LUN in the storage box itself in order to run 2 VMs on each server. That would also mean the 2 VMs have to be live-migrated simultaneously if they share the same VG. So the guideline would be: one block device, to one PV, to one VG, to one VM, for independent migration of VMs using the lvmclustered driver. Is this understanding correct?
Got it now. Just one more query: how would multiple (say 2) LVs that are mapped to a single VM be handled? Would live migration work?
Also, must both LVs be from the same VG for live migration to work? Multiple VGs could arise from different performance profiles of the storage system (HDD, SSD, etc.).
That’s actually been on my list to check. I suspect we currently disallow live-migration for any VM that has additional disks attached, regardless of which pool they’re from or whether the storage is shared.
In theory it should certainly be possible to handle such live migrations, and it wouldn’t matter that you have two different VGs so long as both of them are clustered. So you could totally have an HDD VG and an SSD VG.
But the fact that we also support live-migrating across non-shared storage makes the handling of additional disks a bit tricky, as in that situation we need to detect and handle cases like the extra disk being shared with multiple VMs, the target server potentially not having enough space, …