I have a three-member Incus cluster with an OVN network on top of an uplink network, pretty much constructed by following the documentation, and for the most part it works great. Unfortunately, my bare-metal cloud provider requires me to either specify which IP addresses are going to be used on each server (static routes are used on their side), or establish BGP sessions to their route reflectors and export the prefixes in use.
I modified the Incus and uplink network settings for BGP accordingly, and the sessions are established and working, but I can't seem to find how I could export the prefixes used for the child OVN networks' routers (anything that's in ipv4.ovn.ranges). Is there even a way to accomplish this in Incus itself?
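For reference, here's roughly what I've configured; the listener address, ASNs, and peer name below are placeholders rather than my real values:

```
# Enable the built-in BGP listener (set per cluster member,
# e.g. with --target; 203.0.113.10 is a placeholder address)
incus config set core.bgp_address=203.0.113.10:179
incus config set core.bgp_asn=64512
incus config set core.bgp_routerid=203.0.113.10

# Peer with the provider's route reflector from the uplink network
incus network set UPLINK bgp.peers.provider1.address=203.0.113.1
incus network set UPLINK bgp.peers.provider1.asn=64513
```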
Hmm, I’m a bit confused by what’s going on here.
Basically you have an uplink subnet, say 10.100.100.0/24, with 10.100.100.1 being some kind of external router. That uplink network is available on each host as a network interface or VLAN, let's say eth0.100 (VLAN 100 on eth0).
You then pass that to Incus, creating an uplink network using eth0.100 on each server, then configure ipv4.ovn.ranges to, say, 10.100.100.100-10.100.100.254 so OVN can allocate up to 155 virtual routers there.
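In command form, that would look something like the following; the server names and the `UPLINK` network name are placeholders:

```
# Create the uplink as a pending network on each cluster member,
# each backed by VLAN 100 on eth0
incus network create UPLINK --type=physical parent=eth0.100 --target=server1
incus network create UPLINK --type=physical parent=eth0.100 --target=server2
incus network create UPLINK --type=physical parent=eth0.100 --target=server3

# Instantiate it, setting the external gateway and the range OVN
# can draw virtual router addresses from
incus network create UPLINK --type=physical \
    ipv4.gateway=10.100.100.1/24 \
    ipv4.ovn.ranges=10.100.100.100-10.100.100.254
```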
Then you can create a bunch of OVN networks using that uplink. If you disable NAT on those, you'll need their subnets to be routed to the corresponding 10.100.100.X address by the external gateway. That's where our BGP support helps, as it will advertise your 10.x.y.0/24 network with a next-hop of 10.100.100.X so the traffic can make its way into Incus.
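As a sketch, with an arbitrary subnet for the OVN network:

```
# OVN network with NAT disabled; its virtual router picks up an
# address from ipv4.ovn.ranges on the uplink, e.g. 10.100.100.100
incus network create my-ovn --type=ovn network=UPLINK \
    ipv4.address=10.150.0.1/24 ipv4.nat=false

# Incus BGP then advertises the subnet with the router as next-hop,
# roughly: 10.150.0.0/24 via 10.100.100.100
```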
I’m unclear about what else you need to advertise to your provider, unless the provider isn’t the one in charge of the 10.100.100.1 gateway in this example, but if they’re not, how’s your uplink network working?
I have to admit I misinterpreted the symptoms. It turns out that until addresses are specified in the cloud provider's portal (or BGP support is activated), the traffic is blocked at the access switch level.