Hello, this is quite an old subject, but I'm facing a little problem that's somewhat related:
For testing purposes only, I'm running an 'extra-stretched' cluster:
- Two nodes are located in the south of France
- One node is located in Germany
- One node is located in French Guiana
API and UI response times have become quite bad!
I'm using Incus over WireGuard (the tunnel hub is a VPS located in Paris) for the TLS connections between nodes.
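For context, each node's tunnel is a plain hub-and-spoke setup along these lines; the keys, addresses and hostname below are placeholders, not my actual config:

```
# /etc/wireguard/wg0.conf on one cluster node (placeholder values)
[Interface]
PrivateKey = <node-private-key>
Address = 10.99.0.2/24

[Peer]
# Paris VPS acting as the hub; all node-to-node traffic is relayed through it
PublicKey = <vps-public-key>
Endpoint = vps.example.net:51820
AllowedIPs = 10.99.0.0/24
PersistentKeepalive = 25
```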
When the database-leader role is on one of the EU servers, everything works fine. With around 30 ms of latency it's not as responsive as a LAN setup, but it works pretty well.
When the database-leader role is on the overseas server, any command issued at cluster level is sloooooow!
- `incus cluster list` takes almost 30 seconds to return
- `incus list` shows instances located on other nodes in "error" status most of the time
I believe this isn't the intended use case for Incus; stretched-cluster requirements on other hypervisors call for 1 Gb/10 Gb L2 node-to-node links and <10 ms latency to stay responsive… so it's not a big deal, and probably not a problem at all.
Still, since the API stays responsive, or is at least "more responsive", when the database-leader is on a European host (which is close to most of the other hosts), would it be possible to mark a host so that it is excluded from the database process entirely?
If not, would it be possible (or does it already exist) to have some kind of priority in the database election process, so that a given node is avoided as database leader whenever enough other hosts remain to maintain a quorum?
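For what it's worth, I vaguely remember LXD's failure domains influencing which members get promoted to database roles; if Incus inherited that behaviour (an assumption on my part, I haven't verified it), something like this might already bias elections away from the overseas node:

```
# Assumption: failure domains bias database-role placement, as in LXD.
# Put the overseas node in its own failure domain via the member's YAML:
incus cluster edit guyana-node      # set "failure_domain: overseas"

# Keep the voter count at 3 so the three EU nodes can fill every voter slot:
incus config set cluster.max_voters 3
```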
It's not a big problem; if nothing can be done about this little issue, I'll just evacuate that node and use it as a standalone server alongside the 'euro cluster'. As long as I can move and export instances between them, I'll be OK.
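In case it helps to see what I mean, that fallback plan would look roughly like this (member and remote names are placeholders, and I'm glossing over the cleanup needed on the removed node itself):

```
# Drain the overseas member and drop it from the cluster
incus cluster evacuate guyana-node
incus cluster remove guyana-node

# Once it runs standalone again, register it as a remote from the euro
# cluster and move instances between the two as needed
incus remote add guyana https://guyana.example.net:8443
incus move myinstance guyana:myinstance
```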
Thanks in advance for any help provided,