Incus Cluster - Quorum only members (no workload hosting / "witness mode")?

Hi folks - I'm currently evaluating a cluster strategy with two cluster members that host workloads, plus a third member that acts purely as a database and API host. The goal is to take advantage of cluster features like VM and container migration, unified configuration, and ideally automatic healing, without needing 3x nodes' worth of compute and RAM. I've supported this case with pretty decent success in Proxmox by running a Corosync QDevice on something like a Raspberry Pi - a system that isn't a member of the compute cluster but acts as a cluster witness. My hope is that something similar is feasible with Incus.

I've been looking through the clustering documentation a bit. It looks pretty straightforward to use the scheduling configuration options to prevent workloads from being scheduled on a "witness" member, whether through the use of cluster groups or through a placement scriptlet. Beyond that, though, the answers start getting a little more opaque.
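To make it concrete, this is roughly what I have in mind based on my reading of the docs (the member names `node1`, `node2`, and `witness` are placeholders, and I'm assuming `scheduler.instance` is the right knob here):

```shell
# Group the two compute members (names are hypothetical)
incus cluster group create compute
incus cluster group assign node1 compute
incus cluster group assign node2 compute

# Keep the witness member out of automatic instance placement
incus cluster set witness scheduler.instance manual
```

With `scheduler.instance=manual`, my understanding is the witness would only ever receive an instance if explicitly targeted, which should never happen in practice.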

By default, however, clustering seems to expect identical or at least similar hardware configurations across members. In the standard setup where every member hosts both the database and workloads, that makes a lot of sense, but I'm not certain it's viable for my case. Would a minimum viable disk and network layout shared by all members, with expanded options only on the compute members, work here? Or must all storage pools and networks be present on every cluster member?
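For context, my understanding (which could be wrong) is that clustered storage pools are defined per member with `--target` before the cluster-wide create, so the backing devices could differ per member as long as the pool name exists everywhere. Something like this sketch (device paths and sizes are made up):

```shell
# Per-member pool definitions can point at different backing storage
incus storage create local zfs source=/dev/nvme0n1 --target node1
incus storage create local zfs source=/dev/sdb --target node2
incus storage create local zfs size=5GiB --target witness   # tiny loop-backed pool

# Then instantiate the pool cluster-wide
incus storage create local zfs
```

If that's accurate, maybe a token-sized pool on the witness is enough to satisfy the "same pools everywhere" requirement - but I'd love confirmation.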

I also noticed there's an expectation that things like images are distributed between cluster members. There's a tunable here, but it only seems to let me set the replication count, not specify a group of members or exclude specific hosts. Is there an advanced option - say, excluding the quorum member as an image destination - that could work around this?
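The tunable I'm referring to is, I believe, `cluster.images_minimal_replica`. Lowering it to match the number of compute members might sidestep the problem, though it wouldn't guarantee the witness is never chosen as a replica target:

```shell
# Replicate images to 2 members instead of the default 3
# (-1 would mean replicate to all members)
incus config set cluster.images_minimal_replica 2
```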

I'm sure there are other footguns I haven't spotted, so any thoughts on how to achieve this kind of cluster configuration would be appreciated. If the use case isn't feasible today, I'd also be interested to know whether there's interest in supporting it. Thanks!