Is there a "best" (recommended) method for routing or forwarding VPN TUN IP traffic to lxdbr0 (or a custom LXD bridge device)?

Two relatively recent open-source mesh overlay VPNs (WireGuard and Slack's Nebula) operate only at Layer 3 (a TUN device).

lxdbr0, as we all know, is a bridge operating at Layer 2.

My use-case is to create the VPN on the host/server and either connect the VPN's TEP (Tunnel End Point) to, or forward its traffic to, the lxdbr0 bridge (or a custom LXD container bridge interface).

Using a different VPN that does support a TAP (Layer 2) TEP is easy: the TAP device the VPN creates is simply attached to lxdbr0 (or an alternative LXD container bridge that I might create and use) as just another Ethernet interface.
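
For example, a minimal sketch of that Layer 2 case, assuming the VPN created a TAP interface named tap0 and the target bridge is the default lxdbr0 (both just placeholder names):

    ip link set tap0 up                 # bring the VPN's TAP device up
    ip link set tap0 master lxdbr0      # enslave it to the LXD bridge like any other Ethernet port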

For VPNs that create a TUN (Layer 3) TEP, I've so far searched for and found a number of "suggested" methods for routing or forwarding the Layer 3 traffic to the LXD container bridge.

Can anyone describe what might be considered a "best" method for installing a Layer 3 VPN on the host/server and then either routing or forwarding traffic between that VPN's TUN device and the LXD container bridge (lxdbr0 or a custom bridge)?
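
For concreteness, here is a minimal sketch of the plain kernel-routing variant of those suggestions, assuming a WireGuard interface wg0, a local lxdbr0 subnet of 10.100.10.0/24, and a remote peer whose containers sit on 10.100.20.0/24; all names and addresses are placeholders:

    # enable IPv4 forwarding on the host so packets can cross between wg0 and lxdbr0
    sysctl -w net.ipv4.ip_forward=1

    # route the remote container subnet over the tunnel
    # (with WireGuard, the peer's AllowedIPs must also include 10.100.20.0/24)
    ip route add 10.100.20.0/24 dev wg0

    # the local lxdbr0 subnet (10.100.10.0/24) is already a connected route on this host,
    # so traffic arriving on wg0 for local containers is forwarded onto the bridge;
    # the remote host needs the mirror-image route pointing back through its tunnel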

Defining “best” probably helps. In my use-case I’d consider “best” to include:

  • ease of configuration
  • best traffic performance

I've got to say I'm surprised no one else has had any input on this topic!

VPNs, mesh or not, are a big topic with lots of choices:

TUN or TAP

socat, or iptables, or maybe a pseudowire-style pair of veth interfaces between lxdbr0 and a second bridge.
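
A minimal sketch of that veth-pair option, assuming a second bridge named br-vpn already exists alongside lxdbr0 (the interface names are placeholders):

    # create a veth pair acting as a virtual patch cable between the two bridges
    ip link add veth-lxd type veth peer name veth-vpn

    # plug one end into each bridge and bring both ends up
    ip link set veth-lxd master lxdbr0 up
    ip link set veth-vpn master br-vpn up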

Or maybe install Quagga or FRR and just go with a routing solution between the VPN TUN/TAP end-points and lxdbr0?
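
A rough sketch of that FRR idea, assuming ospfd is enabled and the local container subnet is 10.100.10.0/24 (illustrative values only):

    # advertise the local lxdbr0 subnet into OSPF so other nodes learn a route to these containers
    vtysh -c 'configure terminal' \
          -c 'router ospf' \
          -c 'network 10.100.10.0/24 area 0'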

Maybe… don't even use a VPN and implement VXLAN with Open vSwitch (OVS), but then have to add IPsec for encryption?

Maybe go with VXLAN and OVS over a mesh VPN (the VPN providing the encryption)?
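
For example, a minimal sketch of the plain kernel VXLAN variant (without OVS) riding over the mesh VPN and attached to lxdbr0, assuming tunnel addresses 10.200.0.1 (local) and 10.200.0.2 (remote peer) on a wg0 interface; all values are placeholders:

    # create a unicast VXLAN interface that rides on the VPN tunnel
    ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.200.0.1 dev wg0 nolearning

    # flood unknown/broadcast traffic to the remote peer's tunnel address
    bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.200.0.2

    # attach the VXLAN interface to the LXD bridge; the container MTU would need
    # lowering to leave room for the VXLAN plus VPN encapsulation overhead
    ip link set vxlan100 master lxdbr0 up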

This is exactly why I made the original post!

It's a complicated subject with many possible solutions, some more complicated in implementation or ongoing operation than others.

Anyone even thinking about a multi-tenant, multi-node, multi-cloud/hybrid environment with LXD for educational organizations or business enterprises at scale has to come up with something!

I've spent the last couple of months trying to understand the positives and negatives of implementing each of the above, as well as looking at good candidates for full-mesh, auto-learning VPN solutions such as:

  • TINC
  • SoftEther
  • ZeroTier
  • WireGuard
  • VpnCloud

I also looked at Linux-based DMVPN, EVPN, DMEVPN and similar solutions.

So I thought I'd pose my original question to the larger LXD community to get your thoughts and ideas, and perhaps hear what some of you chose and why, and to generate some discussion.

Regarding this topic and LXD, there are bits and pieces of information out there, but it's spread all over the place, from blogs to email archives to social forums like Reddit.

Worse yet, LXD itself has been evolving so rapidly that much of the information on this topic found online is more or less outdated in one way or another, or could be implemented better today than it was 3-5 years ago.

For such an important LXD topic, I just thought it would be useful to everyone to gather some thoughts, ideas and experiences in one place.

I’ve gathered a ton of information related to this and would be happy to share.

It's not greatly organized, except into general high-level topic areas.

I may try to create a GitHub repository as a place to put & share all of that info.

Brian


The forthcoming routed nic device type in LXD 3.19 may help with this.

It avoids the need for using a bridge entirely, and can optionally be used without specifying a parent interface.

You assign IPs statically to the device, and LXD then sets up static routes on the host pointing at the host-side interface of the veth pair.

If you were running a routing daemon on the host, you could then redistribute those static routes to other nodes.
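
For illustration, a sketch of adding such a device, where the container name c1 and the address 192.0.2.10 are placeholder values:

    # give container c1 a routed NIC with a statically assigned address;
    # LXD adds the matching /32 route on the host pointing at the veth pair
    lxc config device add c1 eth0 nic nictype=routed ipv4.address=192.0.2.10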


Hi Brian,

I've found FRR/EVPN works well if you want plain old Layer 2 "stretched" between hosts.

But the other option is advertising the container's loopback address via OSPF (or BGP), which gives you a kind of L3 IP mobility: if you move the container to another host, it re-advertises itself and becomes routable again.

This is where the new routed interface type may come in handy: routing to the container and advertising its loopback into the network.
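
As a rough sketch, assuming FRR with bgpd enabled, a private AS number of 64512, and that the host-side /32 routes created by the routed NIC appear to FRR as kernel routes (all illustrative values):

    # redistribute the per-container /32 host routes into BGP so other nodes learn where each container lives
    vtysh -c 'configure terminal' \
          -c 'router bgp 64512' \
          -c 'address-family ipv4 unicast' \
          -c 'redistribute kernel'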

Also, none of this is automated, so to automate it you would have to write it in Ansible/Salt or some other automation framework.

If you look at OpenNebula, they have some of the EVPN setup automated, but I haven't tested it. It's aimed more at VMs, sitting somewhere between OpenStack and Proxmox, but it can also look after your containers.

Cheers,
Jon.


Thanks Jon… it's been crazy. Once I started looking for a solution, there were just so many. But trying to balance each one's positives and negatives against the others has been complicated.

Your recommendation is hard for ordinary people to understand. I would advise installing a standard VPN service. I would advise reading this article: https://webguidevpn.com/best-vpn-for-spain/


If you want to build an overlay to every container, so they are all on the same big segment, Slack's Nebula seems the easiest so far. I'm not sure how well it scales, since you would run it in every container, but the firewalling features are cool and easy to get your head around. I guess it's similar to how some of the networking stacks in Kubernetes build overlays between pods.

It also seems very actively updated at the moment, which is a good sign.


That is an awesome example. Thank you.
