Shared storage across multiple instances

Hello Incus Community,

We currently operate five bare-metal servers, each equipped with two 500GB NVMe SSDs, and are planning to transition to an Incus cluster hosting around 20 instances distributed across these servers. In our existing setup, we maintain copies of shared files on each server and use Syncthing to synchronize changes.

As we move to a production environment, we're looking for the most streamlined and efficient way to set up a shared, redundant storage system that gives all Incus instances across the cluster both read and write access.

While exploring options, we became concerned about Ceph's latency and the significant network resources it requires, given our limited networking infrastructure. We would greatly appreciate any recommendations or best practices for configuring shared storage within an Incus cluster that ensures high availability and good performance without the overhead associated with Ceph.

Best regards,
Patrik

Currently the options are basically:

  • Ceph (requires local disks to be exported, 3 servers minimum, good networking)
  • lvmcluster (requires a shared block device, can be FC, iSCSI, NVMe/TCP, …)

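If you can present a shared LUN to all servers, setting up an lvmcluster pool follows the usual Incus cluster pattern: stage member-specific configuration with `--target` on each member, then run the final create to instantiate the pool cluster-wide. A rough sketch, where the server names and the `/dev/sdb` device path are purely illustrative (your shared device may appear under a different path on each member):

```shell
# Stage the pool on each cluster member, pointing at the shared block device
# (/dev/sdb is an illustrative path to the shared FC/iSCSI/NVMe-over-TCP LUN)
incus storage create shared lvmcluster source=/dev/sdb --target server1
incus storage create shared lvmcluster source=/dev/sdb --target server2
incus storage create shared lvmcluster source=/dev/sdb --target server3

# Finalize: instantiate the pool across the whole cluster
incus storage create shared lvmcluster
```

Note that lvmcluster relies on lvmlockd to coordinate access to the shared volume group, so the instances themselves can live on the pool and be moved between members without copying data.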
We have an issue open to investigate adding Linstor as a storage option as that may provide a good in-between option, but it’s not there yet.

I suspect it also depends on whether you only care about shared files that can be accessed from all instances, or whether you also want to put the instances themselves on the shared storage. lvmcluster, for example, doesn't have a distributed filesystem layer; I'm not sure about Linstor.
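For the shared-files case specifically, a distributed filesystem layer such as CephFS is what lets a single custom volume be mounted read-write by several instances at once (it does mean running Ceph, which the original post was hoping to avoid). A hedged sketch of what that looks like in Incus, assuming an existing CephFS filesystem named `cephfs` and instances named `web1` and `web2` (all names illustrative):

```shell
# Create an Incus storage pool backed by an existing CephFS filesystem
incus storage create shared-fs cephfs source=cephfs

# Create a custom filesystem volume on that pool
incus storage volume create shared-fs data

# Attach the same volume to multiple instances, mounted at /mnt/shared
incus storage volume attach shared-fs data web1 /mnt/shared
incus storage volume attach shared-fs data web2 /mnt/shared
```

On a block-based driver like lvmcluster, by contrast, a filesystem volume can generally only be mounted in one place at a time, which is why the distributed-filesystem question matters for the shared-files use case.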