Simultaneous connection to dual network (1 standard, 1 192.168.x.x VPN) from containers

Hello,

We have built a working overlay L2 network using PeerVPN (https://peervpn.net/). This mesh VPN is deployed over a few bare metal servers running LXD.

We would like containers to see 2 NICs upon creation, the goal being for the container to be able to access both

  • the internet, to expose services to the outside world
  • the VPN, to securely communicate with containers hosted on other LXD servers via PeerVPN.

On each bare metal server, LXD is running the default lxdbr0 bridge in NAT mode, in addition to the tap0 virtual interface managed by PeerVPN. The tap0 interface is allocated a private IP address (192.168.X.X/32) via PeerVPN.

Each host is able to ping its peers via their public IPs or via their private VPN IPs. Routes look normal: the default route goes through eth0, and traffic for the 192.168 networks goes through tap0.
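For reference, the checks on each host look roughly like this (192.168.0.0/24 is just a stand-in for our actual PeerVPN subnet):

    ip addr show tap0        # PeerVPN-assigned 192.168.X.X address
    ip route show            # default via <gateway> dev eth0
                             # 192.168.0.0/24 dev tap0
    ping -c 1 192.168.0.2    # VPN address of another bare metal server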

Each container is able to access the internet and publish information to the outside world thanks to lxdbr0 and iptables filtering.

What won’t work despite various tweaks (“brctl addif lxdbr0 tap0”, iptables mangling and route tweaking) is the following:

  • no container can ping the local tap0 IP address
  • no container can ping the tap0 IP address of another bare metal server.

We could not go any further and deploy a DHCP container on the VPN, as containers are unable to request or allocate IP addresses. Allocating a fixed 192.168.X.X/32 IP address to eth1 from within a container and adjusting routes does not solve the problem either.

Ideally we would need each container to have 2 NICs eth0 and eth1:

  • eth0 attached to lxdbr0 and NATed out through the bare metal eth0, which seems to be the default LXD behaviour
  • eth1 bridged onto tap0, to access the VPN, over which a DHCP server would be activated (see the sketch after this list).
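As a sketch of how we imagine wiring eth1, assuming a dedicated vpnbr0 bridge on the host with tap0 enslaved to it (vpnbr0 and the container name c1 are placeholders):

    # on the host: create a plain bridge for the VPN side and attach tap0 to it
    ip link add vpnbr0 type bridge
    ip link set vpnbr0 up
    ip link set tap0 master vpnbr0

    # give an existing container a second NIC bridged onto vpnbr0
    lxc config device add c1 eth1 nic nictype=bridged parent=vpnbr0 name=eth1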

A “fallback” would be to have only 1 NIC within each container, with the routing between VPN and internet happening on the host, but that would mean extra NATing for internal VPN communications, which seems pointless.
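For illustration, the host-side plumbing for that fallback would presumably look roughly like this (10.0.3.0/24 stands in for the lxdbr0 subnet, 192.168.0.0/24 for the PeerVPN subnet), which is exactly the extra NAT hop we would rather avoid:

    # route container traffic destined for the VPN out through tap0 on the host
    ip route add 192.168.0.0/24 dev tap0

    # masquerade the lxdbr0 subnet behind the host's VPN address
    iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o tap0 -j MASQUERADE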

I understand this is only borderline related to LXD. Previous discussions on similar topics, such as “How to add a network interface in lxc?”, did not provide a working answer, so any help is appreciated.

Regards,
D.

Don’t know if you are still looking at this, but I did this quite a while ago and wrote up a post here; I think the general idea would work the same for LXD as for LXC:


Also have a look at the new Slack Nebula; it seems quite feature-rich for building overlays, with more features on the way. It is similar to PeerVPN.

If you ran it in every container, you would have an overlay stretched to every container, and they would all be on the same layer 2 network.

With regard to the two interfaces: if you used Nebula, you would end up with eth0 as the default interface connecting to the outside world, and nebula0 as the new interface connected to the overlay.

In the case of something like Zerotier, you would have eth0 as your default interface and an interface called something along the lines of ztks5yas3r as the overlay interface.

No need to bridge any interfaces in the two cases above (Nebula / ZeroTier).
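For example, with ZeroTier the agent just runs inside each container and joins the network (the network ID below is a made-up placeholder):

    # inside each container
    apt install zerotier-one                # or the install script from zerotier.com
    zerotier-cli join 1234567890abcdef      # hypothetical 16-hex-digit network ID
    zerotier-cli listnetworks               # shows the zt* interface and its assigned IP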

Alternatively, look into EVPN: BGP between the hosts, VXLAN, and bridging lxdbr0 to the vxlan1 interface. EVPN is built into Free Range Routing (FRR) and relies on open-source building blocks (BGP, VXLAN, etc.). It is a bit more tricky to set up.
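The VXLAN side of that on each host would be roughly along these lines (the VNI, port and local address are illustrative; the FRR BGP EVPN configuration that distributes MAC/IP routes between hosts is left out):

    # create a VXLAN interface and attach it to the existing LXD bridge
    ip link add vxlan1 type vxlan id 100 dstport 4789 local 203.0.113.10 nolearning
    ip link set vxlan1 up
    ip link set vxlan1 master lxdbr0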

Hi

Please can you send the output of lxc network show lxdbr0 and the output of lxc config show c1 --expanded, where c1 is one of your containers.

One problem I can see straight away is that you won’t be able to have an IP on the tap0 interface if you’re connecting it to the lxdbr0 bridge. Instead you’d need to make sure that lxdbr0’s network settings in LXD specify an IP in the same subnet as the PeerVPN config.
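As a rough sketch of what I mean (the subnet is a placeholder, and each host’s DHCP range would need to be kept from overlapping with the others):

    # give lxdbr0 an address inside the PeerVPN subnet instead of putting one on tap0
    lxc network set lxdbr0 ipv4.address 192.168.100.1/24
    lxc network set lxdbr0 ipv4.dhcp.ranges 192.168.100.10-192.168.100.50

    # attach the (now address-less) tap0 interface to the bridge
    ip link set tap0 master lxdbr0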

In short, are you trying to span the same lxdbr0 subnet across peervpn at the layer2 level?

Thanks
Tom