Interface Aggregation (bonding) on Host vs Guest?

Hello everyone,

I'm wondering whether I should set up a bond on the host device or inside the LXC container.

Here is what I want to achieve:

I want to run a container with nftables as the default router/firewall for my network.
The host itself should not act as the router/firewall; all routing and filtering should happen inside the router container.

So my idea is the following:

[WAN] eth0 (host) → eth0 (container)
[LAN] eth1, eth2 (host) → eth1, eth2 (container) → bond0
[OOB] eth3 (host) → Out-of-band Management of the host

Does that make sense, and will it work? Or should I stick to the following setup:

[wan] eth0 (host) → eth0 (container)
[lan] eth1, eth2 (host) → bond0 (host) → eth1 (container)
[OOB] eth3 (host) → Out-of-band Management of the host

If you pass a physical interface to the container, it gets removed from the host namespace and is therefore no longer seen by the host, so your diagram would look something like this:

[WAN]->eth0 (container)
[LAN]->bond0{eth1+eth2} (container)
[OOB]->eth3 (host)
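For reference, passing the physical NICs into the container could look roughly like this with LXD's `physical` NIC type (a sketch; the container name `router`, the bond mode, and the interface names are assumptions, and the syntax differs for plain LXC container configs):

```shell
# Pass the physical WAN and LAN NICs into the container "router" (name assumed).
# With nictype=physical the interface disappears from the host namespace
# while the container is running.
lxc config device add router eth0 nic nictype=physical parent=eth0 name=eth0
lxc config device add router eth1 nic nictype=physical parent=eth1 name=eth1
lxc config device add router eth2 nic nictype=physical parent=eth2 name=eth2

# Then, inside the container, build the bond from the two LAN interfaces
# (active-backup chosen only as an example; use the mode your switch supports):
ip link add bond0 type bond mode active-backup
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up
```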

Notice, however, that in this configuration the host will not have network connectivity. To solve this you can create a bridge on the host and a veth pair (or just the veth pair if you don't want a bridge) connecting the container and the host, so that the host's connection gets routed through the container, which is your firewall/router.

                  ->veth0 (container) <-> veth_0 (host)
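Creating that veth pair by hand might look like this (a sketch: the names `veth0`/`veth_0` come from the diagram above, while the container name `router`, the PID lookup, and the addressing are assumptions you'd adapt):

```shell
# Create the veth pair on the host
ip link add veth0 type veth peer name veth_0

# Move one end into the container's network namespace
# (PID lookup shown for a plain LXC container named "router"; with LXD you
# would instead attach veth0 to the container as a nic device)
PID=$(lxc-info -n router -p -H)
ip link set veth0 netns "$PID"

# Host side: address the remaining end and default-route through the
# container firewall (10.0.0.0/24 is purely an example)
ip addr add 10.0.0.2/24 dev veth_0
ip link set veth_0 up
ip route add default via 10.0.0.1   # container side of the pair
```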

First, thanks for your reply, that explains a lot!

So I will try the solution you described and let you know what I finally settle on.
Both scenarios seem fine; I prefer the one with the fewest moving parts.

Are there any major differences or security concerns between the two scenarios?

  1. Pass all interfaces and do the whole interface configuration inside the container.
  2. Pass only the WAN interface; create the bond and the bridge on the host.

Well, if you don’t let the firewall (container) directly handle the interface(s), you create a problem right there: the firewall can’t make any changes to the interface(s). For example, you can’t directly apply traffic queuing disciplines (qdiscs), e.g. fq_codel or CAKE, to the interface and instead have to do it manually on the host, which seems a bit counterproductive to me.
There is probably some (but not much) overhead to be expected from creating more virtual interfaces on top of your physical interfaces.
As for security issues, I can’t think of any right off the bat, but don’t quote me on that.
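To illustrate the qdisc point: once the container owns the interface, shaping is a one-liner inside it (a sketch; the interface name and bandwidth value are assumptions you'd set to your own WAN device and uplink rate):

```shell
# Inside the router container: shape egress on the WAN interface with CAKE
tc qdisc replace dev eth0 root cake bandwidth 100mbit

# Or fall back to fq_codel if CAKE isn't available in your kernel
tc qdisc replace dev eth0 root fq_codel
```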


Ah, I hadn’t thought about that before.
So I will skip the second scenario and adopt your suggestion.
