Bridged networking best practices

I’ve got an architectural question and a related config management question: how are folks handling setups with lots of VLANs?

I found this topic, but the main difference here is that I might be able to use SR-IOV, and it's unclear how that would interact with my LACP port channels (or whether I even need LACP). These are also the only servers I have with SR-IOV support, and I'm worried that enabling it will break my only existing uplink to them. Possibly this topic is asking the same question; in general, "what should I do?" I'm surprised this isn't a bigger question. I did figure out how to get things working, but I'm concerned I might be using the wrong approach.

I think I need to step back here. I'm realizing OVN's routing isn't quite good enough to work with my downstream BGP peers and clusters, so I'm going to have to start bridging more VLANs directly into VMs. I'm considering going the SR-IOV route with NIC VFs, because this netplan config is turning into spaghetti. For context: VLAN 113 is my OVN uplink and VLAN 111 is the OVN underlay, which basically isn't going to get used. (I've obscured my IPv6 prefix and TLD.)

network:
  version: 2
  ethernets:
    enp59s0f0np0: {}
    enp59s0f1np1: {}
  bonds:
    bond0:
      interfaces:
      - enp59s0f0np0
      - enp59s0f1np1
      parameters:
        mode: "802.3ad"
        lacp-rate: "slow"
        transmit-hash-policy: "layer3+4"
  bridges:
    br200:
      interfaces:
        - bond0.200
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
      dhcp6: no
      accept-ra: no
    br112:
      interfaces:
        - bond0.112
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
      dhcp6: no
      accept-ra: no
    br202:
      interfaces:
        - bond0.202
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
      dhcp6: no
      accept-ra: no
    br110:
      interfaces:
        - bond0.110
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
      dhcp6: no
      accept-ra: no
  vlans:
    bond0.101:
      addresses:
      - "10.36.101.11/24"
      id: 101
      link: "bond0"
    bond0.202:
      id: 202
      link: "bond0"
    bond0.112:
      id: 112
      link: "bond0"
    bond0.200:
      id: 200
      link: "bond0"
    bond0.113:
      id: 113
      link: "bond0"
      accept-ra: no
      dhcp6: no
      link-local: []
    bond0.110:
      id: 110
      link: "bond0"
      accept-ra: no
      dhcp6: no
      link-local: []
    bond0.111:
      addresses:
      - "10.36.111.11/24"
      id: 111
      link: "bond0"
    bond0.201:
      addresses:
      - "10.36.201.11/24"
      - "2001:db8:0:c002::11/64"
      nameservers:
        addresses:
        - 2001:db8:0:2::1
        search:
        - example.com
      routes:
      - to: "default"
        via: "10.36.201.1"
      - to: "::/0"
        via: "2001:db8:0:c002::1"
      id: 201
      link: "bond0"
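
If I do go the VF route, netplan itself has some SR-IOV support that might replace part of the bridge sprawl above. A minimal sketch, reusing the port names from my config; the VF count and the VF interface name are made-up assumptions on my part, and I haven't tested this on these NICs:

network:
  version: 2
  ethernets:
    enp59s0f0np0:
      # hypothetical count; netplan creates this many VFs on the PF at apply time
      virtual-function-count: 4
    enp59s0f0v0:
      # as I read the netplan docs, an ethernet whose "link" points at a PF
      # is treated as one of that PF's virtual functions
      link: enp59s0f0np0

The open question for me is still how (or whether) this coexists with the 802.3ad bond, since each VF hangs off a single physical port.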

Am I going insane here? For a "traditional" networking workflow, is there a better way to add VLANs to the trunks hitting each server than running an Ansible playbook that regenerates the netplan config, plus creating a profile for each VLAN? Here's how I attach a VM to a specific VLAN today:

name: bridged110
description: ''
devices:
  eth0:
    nictype: bridged
    parent: br110
    type: nic
config: {}
access_entitlements:
  - can_delete
  - can_edit
project: default

I’m open to virtual functions or any other tips.
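
For comparison, here's a hedged sketch of what the same VLAN 110 attachment might look like as a VF passthrough using the sriov nictype (the profile name and parent are my assumptions; note the parent is a single physical port rather than bond0, so a VM on a VF loses the LACP redundancy unless it bonds inside the guest):

name: sriov110
description: ''
devices:
  eth0:
    nictype: sriov
    # parent must be the SR-IOV-capable PF, not the bond
    parent: enp59s0f0np0
    # tags the VF's traffic with VLAN 110 in the NIC's embedded switch
    vlan: "110"
    type: nic
config: {}
project: default

The upside would be that the whole per-VLAN bridge section of the netplan goes away; the trade-off is that VF traffic bypasses bond0 entirely.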

Obviously it would be fantastic if OVN could peer with my downstream BGP peers, advertise dynamic routes to the networks it can reach, and install those in its routing table. But for now OVN isn’t viable, and I want to make sure I have a solid plan before going all-in on bridging to my VLAN networks.

Thanks in advance!