BGP inside OVN network

Hello,

I have an OVN network managed by Incus. This network is connected to a physical UPLINK. This physical network is configured to send routes via BGP to my router.

You can find the configuration below:

incusovn:

config:
  bridge.mtu: "1500"
  ipv4.address: 10.100.1.1/24
  ipv4.nat: "false"
  ipv6.address: fd12:3456:7890:1::1/64
  ipv6.nat: "false"
  network: UPLINK

UPLINK

config:
  bgp.peers.router.address: 10.10.10.1
  bgp.peers.router.asn: "65000"
  dns.nameservers: 10.10.10.1
  ipv4.gateway: 10.10.10.1/24
  ipv4.ovn.ranges: 10.10.10.100-10.10.10.100
  ipv4.routes: 10.100.1.0/24,10.100.254.0/24,10.100.200.0/2
  ovn.ingress_mode: routed
  parent: br1010
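
For reference, a setup along these lines can be created with the `incus network create` command. This is a sketch reconstructed from the configs above, not the exact commands used; double-check the options against your own environment:

```shell
# Uplink network bound to the physical bridge br1010
# (key/value pairs taken from the UPLINK config shown above)
incus network create UPLINK --type=physical \
    parent=br1010 \
    ipv4.gateway=10.10.10.1/24 \
    ipv4.ovn.ranges=10.10.10.100-10.10.10.100 \
    ovn.ingress_mode=routed \
    dns.nameservers=10.10.10.1 \
    bgp.peers.router.address=10.10.10.1 \
    bgp.peers.router.asn=65000

# OVN network using UPLINK as its uplink, NAT disabled so the
# 10.100.1.0/24 subnet is routed rather than masqueraded
incus network create incusovn --type=ovn \
    network=UPLINK \
    ipv4.address=10.100.1.1/24 \
    ipv4.nat=false
```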

With this configuration, instances are launched in the 10.100.1.0/24 CIDR.

I run a Kubernetes cluster inside VMs, and this cluster exposes load balancers on the 10.200.1.0/24 CIDR. I tried to have my cluster peer over BGP with my router, but it does not work because there is no direct connection between them (the OVN network sits in between). What I did instead was declare the 10.200 CIDR in ipv4.routes so that the route is announced, and on the Kubernetes side I use a virtual IP to respond correctly.
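
Concretely, that workaround amounts to appending the load balancer CIDR to the uplink's announced routes, something like the following (a sketch based on the CIDRs in this thread):

```shell
# Append the Kubernetes load balancer CIDR to the routes already set on
# UPLINK, so Incus's BGP server advertises it to the router as well
incus network set UPLINK \
    ipv4.routes="$(incus network get UPLINK ipv4.routes),10.200.1.0/24"
```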

My question is: is there a way to do BGP between the VMs and the OVN router, and then have the OVN router send the routes to Incus, which propagates them to my router?

Not really, as there is no such thing as an OVN router; it's all just flow rules being distributed to a bunch of Open vSwitch switches. You can do quite a bit with those rules (like the fake DNS and DHCP that OVN provides), but something as complex as BGP, not so much.

Incus' BGP server can't really handle that either, as Incus itself doesn't have an address directly in your virtual network, nor would it be the gateway for it anyway.

I think your best bet may be for Kubernetes in your VMs to establish a BGP session directly with your router, using eBGP multi-hop to allow that session. Assuming a plausible next-hop is advertised over that session, it should lead to your router sending that traffic towards the OVN router.
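
On the router side, if it runs FRR (an assumption; your router may use different software with equivalent knobs), allowing that multi-hop session could look roughly like this. The peer address 10.100.1.50 and AS 65100 are made-up placeholders for a Kubernetes node inside the OVN network:

```shell
# Hypothetical FRR config: accept an eBGP session from a Kubernetes node
# that is not directly connected (one extra hop through the OVN network)
vtysh <<'EOF'
configure terminal
router bgp 65000
 neighbor 10.100.1.50 remote-as 65100
 neighbor 10.100.1.50 ebgp-multihop 2
EOF
```

The `ebgp-multihop 2` raises the TTL so the session survives the extra hop; the Kubernetes side (Cilium, MetalLB, etc.) needs the matching multi-hop setting as well.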

The problem then will be for the OVN router to know where to send that traffic: without an ipv4.routes entry in place, it won't really know what to do with it.

So if I manage to do eBGP between my cluster and my router, I still need ipv4.routes entries that will be announced by Incus, if I understand correctly.

So I probably need to filter out the 10.200 announcement from Incus on my router too.

Sorry to dredge this up… What did you end up doing, @guillomep?

I am faced with the same challenge.

I didn't have time to look into this further, since it was not easy to do.

I guess I am leaning more towards just extending multiple VLANs into my hosts and doing this on my Nexus, leaving OVN for traditional VM or system container workloads. My main goal was that if I could establish the BGP relationship between OVN and the cluster at creation time, then everything else would be in GitOps. That doesn't seem to work: OVN has to be managed, and the Nexus still needs to peer with Cilium.

If I'm doing that, I might as well bypass the negative side effects of OVN, since my tenant boundary is leaking anyway.