Best practice to attach LXD containers in a BGP EVPN VXLAN and VRF environment


We have a running BGP EVPN environment with VXLAN and VRFs, using symmetric routing on the hosts. I am curious what the best option would be to attach LXD containers directly to the L3 VNIs / VRFs, to keep the network setup as simple as possible, which makes life a lot easier when problems occur.

We distribute EVPN type-5 routes, and for normal interfaces we simply place them in the VRF and everything works; placing an interface with an IP in the VRF, for example, makes it reachable from the other hosts.

Since networking in LXD has evolved a lot since we started using it with version 2.0, it would be interesting to know which options or best practices are recommended for attaching LXD containers to a network like ours. We would prefer a directly attached approach, if possible.

Thanks for any answers in advance and best regards,



I’m not very familiar with BGP and VRF; however, you may have some success using either the routed or p2p NIC types.

Both of these create a veth pair between the container and the host, and in the case of the routed NIC, LXD will also add a single-address static route on the host side in the routing table of your choice.
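As a hedged sketch of the routed NIC approach: the container name, address, and table number below are assumptions, and `ipv4.host_table` is the routed NIC option (as I understand it) that controls which host routing table receives the static route.

```shell
# Assumed names: container "c1", VRF routing table 100, example address 192.0.2.10.
# LXD creates the veth pair and installs the /32 host-side route in table 100.
lxc config device add c1 eth0 nic nictype=routed \
    ipv4.address=192.0.2.10 \
    ipv4.host_table=100
```

If your VRF device is associated with that table, the route should land directly in the VRF.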

The p2p NIC just creates an unconfigured veth pair between container and host, so you can then do with that what you need.
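With p2p, the VRF attachment is entirely manual. A rough sketch, where the container name, VRF device name, host-side veth name, and address are all assumptions for illustration:

```shell
# Assumed names: container "c1", VRF device "vrf-red", example address 192.0.2.10.
lxc config device add c1 eth0 nic nictype=p2p
# LXD creates a host-side veth; its actual name can be found via
# "lxc config show c1 --expanded". "vethXYZ" below is a placeholder.
ip link set vethXYZ master vrf-red
ip route add 192.0.2.10/32 dev vethXYZ vrf vrf-red
```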


Interesting all this!

I’ve not done this in production, but as a thought: it should be possible to extend EVPN right up to the LXD host server(s) by running FRR (Free Range Routing) on the LXD hosts, so that they participate in the EVPN VXLAN overlay themselves. FRR would peer iBGP with the loopbacks of the upstream EVPN fabric (the network devices), learn/advertise the MAC/IP routes from those devices, and also advertise whatever the host has on its local Linux bridges (i.e. VMs, LXD containers, etc.).
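A hedged sketch of that FRR peering, using an assumed private ASN (65001) and an assumed fabric loopback (10.255.0.1); real deployments would adapt addressing and likely use a peer-group:

```shell
# iBGP to the fabric loopback with the l2vpn evpn address-family enabled,
# advertising the locally configured VNIs.
vtysh \
  -c 'configure terminal' \
  -c 'router bgp 65001' \
  -c 'neighbor 10.255.0.1 remote-as 65001' \
  -c 'neighbor 10.255.0.1 update-source lo' \
  -c 'address-family l2vpn evpn' \
  -c 'neighbor 10.255.0.1 activate' \
  -c 'advertise-all-vni' \
  -c 'end'
```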

The VXLAN can be done with a Linux bridge: basically bridge an LXD bridge to the VXLAN interface, and it should be part of the overlay.
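A minimal sketch of that wiring, with assumed names throughout (VNI 100, VTEP source 10.255.0.2, bridge "br100", container "c1"):

```shell
# Create the VXLAN interface for VNI 100; "nolearning" because EVPN/FRR
# populates the FDB via BGP rather than flood-and-learn.
ip link add vxlan100 type vxlan id 100 local 10.255.0.2 dstport 4789 nolearning
ip link add br100 type bridge
ip link set vxlan100 master br100 up
ip link set br100 up
# Attach the container to that bridge so it sits in the overlay:
lxc config device add c1 eth0 nic nictype=bridged parent=br100
```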

This is all theory on real hardware, but I do have a really basic EVPN setup running between server hosts at Hetzner: my HAProxy LBs sit in the EVPN overlay network, peering with anycast gateways (Linux bridges with the same MACs), so it is essentially the same network stretched between the servers. All of this should apply to real physical hardware as well, since FRR can simply peer with outside devices if needed.

The whole idea of EVPN is that it’s an open standard.

The more boring/standard way would be to just plumb the LXD host into a VLAN on your ToR that is part of the EVPN, so the traffic gets VTEP’d and bridged into the EVPN overlay by the network devices.

It might also be worth having a look at how things like OpenStack do this; I’m pretty sure Contrail Cloud / Tungsten Fabric extend the EVPN fabric up to the OpenStack hosts, so it’s the same principle.

Awesome, definitely going to have a closer look and test.
Thank you very much!

Now that LXD 4.18 has native support for BGP, is there any info or are there best practices for BGP-to-the-host and EVPN-to-the-host use cases? Usually this is done with FRR already configured on the host.

Anyone using this?

I don’t personally have a use for EVPN, so I have only used LXD’s BGP feature to dynamically push routes to my home gateway and to my top-of-rack switch/router in the datacenter, with one environment using normal LXD bridging and the other using OVN.

But for EVPN, I suspect you could have LXD serve BGP on a loopback address, run FRR on the host, have that peer with LXD to get all the routes, and then peer with your other peers as part of the EVPN setup.
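A hedged sketch of the LXD side of that, using the `core.bgp_*` server options; the loopback address, port, and ASN below are assumptions:

```shell
# Assumed loopback 10.255.0.2 and private ASN 65001. If FRR's bgpd already
# binds port 179 on this host, pick a different port for LXD's listener
# (e.g. 10.255.0.2:1790) and point the FRR neighbor at that port.
lxc config set core.bgp_address 10.255.0.2:179
lxc config set core.bgp_asn 65001
lxc config set core.bgp_routerid 10.255.0.2
```

FRR would then be configured with a neighbor pointing at that address to pick up the routes LXD announces.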