Router inside a VM or on the host?

I am thinking of having one of my servers also act as a router using BGP in the network, but I wonder what people do in such cases. The router will provide the routes for the VMs and containers on this machine, but also for machines around it. Are there any good practices for this? What about having the router itself in a VM using SR-IOV for routing vs. having it on the host?

Any feedback is welcome :slight_smile:

I’ve been doing the BGP-router-in-a-container thing for a while and it’s been working well for me, but I also have the extra fun that this is an Incus cluster and I want high availability, so I’m actually running a router on each of the three servers and using VRRP to make the subnets HA, while having each of them establish BGP sessions with my main transit.
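The VRRP half of that setup can be sketched with keepalived. This is just a minimal example of one VRRP instance, not the poster's actual config; the interface name, VRID, priority, and virtual address are all placeholders:

```shell
# Sketch: keepalived VRRP instance on one of the three router containers.
# All names and addresses below are made-up placeholders.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance uplink_v4 {
    state BACKUP          # let priority decide who is master
    interface eth0        # interface facing the shared subnet
    virtual_router_id 51  # must match on all three routers
    priority 100          # give each router a different priority
    virtual_ipaddress {
        192.0.2.254/24    # the HA gateway address for the subnet
    }
}
EOF
systemctl restart keepalived
```

With one instance like this per subnet, shutting down a router just moves the virtual gateway address to one of the surviving routers.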

This helps a lot because I can actually shut down one or two of the routers and not lose access to my hosts. When doing this on a single server, you’re going to want some alternative way to get to the host which doesn’t involve that router as otherwise you’re likely going to be in a lot of pain should anything go wrong.

Running the routing software directly on the host is also an option for sure, especially as the likes of FRR are pretty self-contained, so it’s not like getting it installed and keeping track of the configuration is particularly difficult. It does however mean that it’ll be running on whatever OS you run on your host, and it’s one more thing to be careful about when upgrading your host.
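For reference, getting FRR speaking BGP on a Debian-ish host really is only a few steps. The ASNs and neighbor address here are invented for the example:

```shell
# Sketch: minimal FRR BGP setup on the host (ASNs/addresses are placeholders)
apt install frr

# Enable the BGP daemon, which ships disabled by default
sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frr

# Announce the host's connected subnets to an upstream transit
vtysh <<'EOF'
configure terminal
router bgp 65001
 neighbor 192.0.2.1 remote-as 65000
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
end
write memory
EOF
```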

That’s interesting. Not sure I can claim to get 3 sessions with my transits, but I can certainly do something like that. Any reason why you’re using containers instead of SR-IOV or passing your cards through directly?

I’m currently using normal VLANs and bridging for my routing containers, mostly because it works fine and is reasonably low overhead for the max 20Gb/s that it’s dealing with.
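The bridged-container approach can be sketched with two Incus commands. The instance name, image, and bridge name are assumptions for illustration:

```shell
# Sketch: a routing container attached to a VLAN bridge on the host.
# "router1" and "br-vlan100" are placeholder names.
incus launch images:debian/12 router1
incus config device add router1 eth1 nic \
    nictype=bridged parent=br-vlan100 name=eth1
```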

If I was dealing with 40Gb/s+ transits, I’d probably do SR-IOV but still with containers as that’s lower overhead than VMs.
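Switching that same container from a bridge to SR-IOV is mostly a change of `nictype`; Incus then allocates a VF from the parent NIC. The parent interface name is a placeholder:

```shell
# Sketch: hand the container an SR-IOV VF instead of a bridged interface.
# "enp5s0f0" is a placeholder for the physical NIC with SR-IOV enabled.
incus config device add router1 eth1 nic \
    nictype=sriov parent=enp5s0f0 name=eth1
```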

For me the main downside of SR-IOV is that I’m doing LACP across two ports to my ToR switch. So with SR-IOV I’d need to either give up on redundancy and pass a VF from only one of the two cards, or do old-school passive bonding using a VF from each card.
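The fallback bonding option (which I read as Linux active-backup mode, i.e. failover without LACP) over one VF from each card would look roughly like this inside the router guest. The VF interface names are placeholders:

```shell
# Sketch: active-backup bond over one VF from each physical card,
# as a non-LACP redundancy fallback. Interface names are placeholders.
ip link add bond0 type bond mode active-backup miimon 100
ip link set enp5s0f0v0 down
ip link set enp5s0f1v0 down
ip link set enp5s0f0v0 master bond0
ip link set enp5s0f1v0 master bond0
ip link set bond0 up
```

Only one VF carries traffic at a time, so you keep redundancy across the two cards but lose the aggregated bandwidth that LACP gives you.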

And yeah, I’m aware that some of the newer cards have a switching chip inside, so they can handle LACP for you on the two external ports and give you a redundant VF. The downside of that, other than needing one of those fancier cards, is that you’ve now made your card the SPOF, as you can’t do that across two physical cards.

This makes sense. Part of it on my side is that I will use VPP, so I figured that SR-IOV would fit this use case. I will try it and report back. My favorite feature of Incus is actually the ability to dynamically add a port to a VM…
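That hot-add is a one-liner against a running VM; Incus hotplugs the NIC without a reboot. The instance and network names below are assumptions:

```shell
# Sketch: hot-adding a NIC to a running Incus VM.
# "router-vm" and "incusbr0" are placeholder names.
incus config device add router-vm eth1 nic network=incusbr0
```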