Custom storage volumes with content type filesystem can usually be shared between multiple instances on different cluster members. However, because the LINSTOR driver “simulates” volumes with content type filesystem by putting a file system on top of a DRBD-replicated device, custom storage volumes can only be assigned to a single instance at a time.
From my understanding, Linstor provisions distributed block devices that are replicated with DRBD, and with diskless satellites it can make them available on all Incus nodes. Since the filesystem on top of that block device is not a distributed filesystem, it can only be used by one instance at a time.
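To make the limitation concrete, here is a rough sketch with hypothetical pool, volume, and instance names ("linstor-pool", "shared-data", "instance1", "instance2"):

```shell
# Create a custom filesystem volume on a LINSTOR-backed pool and attach it.
incus storage volume create linstor-pool shared-data
incus storage volume attach linstor-pool shared-data instance1 /mnt/data

# Attaching the same volume to a second instance while the first one holds
# it would be refused, because the filesystem sits on a single DRBD device.
incus storage volume attach linstor-pool shared-data instance2 /mnt/data
```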
lvmcluster relies on the lvmlockd and sanlock daemons to provide distributed locking over a shared disk or set of disks.
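For reference, this is roughly what the lvmlockd/sanlock setup looks like outside of Incus (the VG name and device path are examples):

```shell
# 1. Enable lvmlockd in /etc/lvm/lvm.conf:
#      use_lvmlockd = 1
# 2. Start the lock manager daemons (sanlock keeps its leases on the shared disk):
systemctl start sanlock lvmlockd
# 3. Create a shared volume group on the shared disk and start its lockspace:
vgcreate --shared shared_vg /dev/mapper/san-lun
vgchange --lockstart shared_vg
```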
Is it possible to have both interact within incus to create a custom storage volume which can be accessed from multiple instances at the same time? Or am I missing something here?
If this is possible and hopefully desirable for Incus, might make sense to create a feature issue for this.
EDIT: To provide information for anyone finding this post in the future, know that my assumption that lvmcluster allows for concurrent access is incorrect.
With lvmcluster, you can partition a shared block device (e.g. a single SAN LUN) into separate logical volumes, each of which is itself a block device.
There’s no need for this with Linstor: you can just create additional Linstor volumes as you need them. That will in turn allocate storage from the underlying block pools (LVM or ZFS) and configure DRBD replication as required.
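A sketch of creating an additional volume directly through the LINSTOR CLI (resource, node, and storage-pool names are examples):

```shell
# Define a new resource and its size; LINSTOR allocates the backing
# LV/zvol from the storage pool and configures DRBD replication.
linstor resource-definition create my-data
linstor volume-definition create my-data 10G
linstor resource create node1 my-data --storage-pool pool_ssd
linstor resource create node2 my-data --storage-pool pool_ssd
```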
Either way, a filesystem residing on a block device is only accessible by one client at a time (unless you are using a rather esoteric cluster filesystem like OCFS2 or GFS).
If you want to share a custom storage volume between multiple instances which are running on different nodes, use NFS.
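One way to do this, assuming an instance named "nfs-server" exporting /srv/share (names and the subnet are examples):

```shell
# On the exporting instance:
echo '/srv/share 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each consuming instance:
mount -t nfs nfs-server:/srv/share /mnt/share
```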
Btw, I was also under the impression that CephFS, besides OCFS2 and GFS, allows for concurrent access, and in that case it is already supported by Incus. Do you know? Am I wrong about CephFS as well?
CephFS is similar conceptually to NFS (a shared tree of files and directories), and yes, it can be shared by multiple clients.
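A sketch of how this looks in Incus, assuming a working Ceph cluster with a CephFS filesystem named "cephfs" (pool, volume, and instance names are examples):

```shell
# Create a cephfs-backed storage pool and a custom volume on it.
incus storage pool create shared-fs cephfs source=cephfs
incus storage volume create shared-fs common

# Unlike the LINSTOR case, the same volume can be attached to
# several instances at once, even across cluster members.
incus storage volume attach shared-fs common instance1 /mnt/common
incus storage volume attach shared-fs common instance2 /mnt/common
```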
Whilst you can cobble together a Ceph environment pretty quickly, and Ceph is very solid, it’s also very complex. When problems occur you need to know how to deal with them. I wouldn’t suggest you use it in production without a lot of testing and learning (and/or a good support contract).
Linstor on the other hand is just LVM with DRBD replication on top. Even if the Linstor controller crashes, the data path is unaffected.