Unable to create Slack/Nebula bridge to use as overlay network for LXD containers


Disclaimer: this topic is more related to Slack Nebula than LXD.

I am currently running a few baremetal servers, with LXD on top.
Each baremetal is connected to the others via a peervpn (https://peervpn.net/) overlay network.
The overlay network interface is accessible to containers through a bridge: each container has 2 interfaces, eth0 connected to lxdbr0 and eth1 connected to vpnbr0.
vpnbr0 bridge is connected to the interface provided by peervpn.
A dedicated container acts as a DHCP server, allocating IP addresses in the overlay network subnet to any container sending a DHCP request through eth1.

As a result, each container has a “standard” eth0 interface fed by lxdbr0, and is able to access any container on the overlay network via eth1, regardless of the baremetal server location.
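For context, the current setup can be sketched roughly as follows (`peervpn0` is an assumption for whatever tap interface name peervpn creates on the host; the `lxc` device options are standard bridged-NIC options):

```shell
# On the host: create the overlay bridge and enslave the peervpn tap to it
# (peervpn0 is a placeholder for the actual tap interface name)
ip link add vpnbr0 type bridge
ip link set vpnbr0 up
ip link set peervpn0 master vpnbr0

# Give a container a second NIC plugged into that bridge
lxc config device add container1 eth1 nic nictype=bridged parent=vpnbr0 name=eth1
```

This is a configuration fragment, not a complete setup; the container then DHCPs over eth1 as described above.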

I would like to migrate from peervpn to Slack Nebula using a similar setup but am unable to create a “nebulabr0” bridge that could replace the peervpn bridge:

           +---------------------+     +--------------------+
           |     Container 1     |     |   Container 2      |
           |                     |     |                    |
           |                     |     |                    |
           |                     |     |                    |
           |    eth0  eth1       |     |  eth0     eth1     |
           |       +     +       |     |    +        +      |
           +---------------------+     +--------------------+
                   |     |                  |        |
lxdbr0       +-----+------------------------+-----------------+
                         |                           |
                         |                           |
nebulabr0    +-----------+---------------------------+--------+

I understand this may be linked to the fact that peervpn and Nebula operate on different layers (2 vs 3). I also understand the “routed nic” mode may be relevant for this use case, but I am unable to find relevant documentation on the topic, and previous discussions (https://github.com/slackhq/nebula/issues/54 or Is there a "best" (recommended) method Routing -or- Forwarding a VPN TUN IP traffic to LXDBR0 (or custom LXD Bridge device)) leave open questions on the actual recommended technical setup.
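For what it is worth, the “routed nic” mode I mention would look roughly like this in LXD (the address is a placeholder inside the Nebula subnet; whether this actually plays well with Nebula's tun interface is exactly my open question):

```shell
# Routed NIC: LXD installs a static route and proxy-ARP entry on the host
# instead of bridging. 192.168.100.11 is a placeholder overlay address.
lxc config device add container1 eth1 nic nictype=routed \
    ipv4.address=192.168.100.11 name=eth1
```

With a routed NIC there is no bridge at all; the host forwards the traffic, which is why it is sometimes suggested for layer 3 tunnels.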
A solution would be to run a Nebula client in each container, which I do not wish to do.
As mentioned above, the baremetal server should provide access to the overlay network to the containers by exposing its nebula1 interface, if this is feasible, in a similar mode to tinc/peervpn.

Has anyone managed to implement a Slack Nebula overlay network used by LXD containers?
If yes, can the setup be shared with the rest of the world?


I am not familiar with slack nebula, however I have used PeerVPN in the past.

PeerVPN provided a layer2 ‘switch’ behaviour and used ARP/NDP to ‘route’ packets.

Can you describe your current PeerVPN setup, and the issue you are having with slack nebula?

I’ve tested slack nebula; I don’t think it’s designed to have its interface bridged like that (as far as I’m aware).

You might be better placed using ZeroTier One, plugging the ZTxxxxx interface into a bridge and plugging the containers’ eth1 into that same bridge. ZT is designed to operate at layer 2 or layer 3 and supports multicast etc.
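To illustrate, bridging a ZeroTier interface is just the standard bridge dance, because ZT exposes a tap-like layer 2 device (the interface name below is a placeholder; ZT names them ztXXXXXXXXXX):

```shell
# Enslave the ZeroTier interface to the same bridge the containers use
# (ztabcdef1234 is a placeholder for the real ZT interface name)
ip link set ztabcdef1234 master vpnbr0
```

Note that you also have to enable Ethernet bridging for that member on the ZeroTier controller side, otherwise frames from foreign MACs are dropped.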


I think the idea for slack nebula is that every node runs it, so even the containers would have to run it to belong to the overlay. Unless they’ve changed it in the last 2 or 3 months; I’ve not checked since and it’s being regularly updated.


Current PeerVPN setup is the following:

  • on the host, 2 bridges (lxdbr0 aka the standard LXD bridge allowing containers to access the internet and an additional vpnbr0 bridge connected to the peervpn tap)
  • on each container, 2 interfaces (eth0 connected to lxdbr0 bridge and eth1 connected to vpnbr0, allowing access to the overlay network by each container)
    A container on the overlay network acts as a DHCP server, providing IP addresses to any container connected to the overlay network and sending DHCP requests. This allows for dynamic IP address allocation and inter-container communication regardless of their physical location.
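The DHCP container just runs a stock DHCP server bound to eth1; a minimal dnsmasq sketch (subnet and range are placeholders, not my actual addressing):

```shell
# Inside the DHCP container: serve leases on eth1 only
# (10.99.0.0/24 is a placeholder for the real overlay subnet)
cat > /etc/dnsmasq.d/overlay.conf <<'EOF'
interface=eth1
bind-interfaces
dhcp-range=10.99.0.50,10.99.0.200,12h
EOF
systemctl restart dnsmasq
```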

The same setup fails when adding the nebula1 interface to a bridge: “brctl addif” won’t work.
My understanding is that this is because peervpn provides layer 2 networking (and therefore allows ARP communication and dynamic IP allocation), whereas Slack Nebula provides layer 3 networking, which won’t allow the same features to be used.
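That matches what the kernel enforces: a bridge port must be an Ethernet (tap) device, while Nebula creates a tun device with no Ethernet header. A quick way to check, using the interface names from my setup:

```shell
# The peervpn interface is a tap (layer 2), so enslaving it works
ip -d link show peervpn0 | grep 'tun type tap'
ip link set peervpn0 master vpnbr0      # succeeds

# The nebula interface is a tun (layer 3), so the kernel refuses it
ip -d link show nebula1 | grep 'tun type tun'
ip link set nebula1 master nebulabr0    # fails (non-Ethernet device cannot be a bridge port)
```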

So in the end it seems the setup will stick with n2n/tinc/peervpn rather than the new hotness Nebula…

In Nebula, if it is a layer 3 system, how do they distribute the routes for each node? Is there a way to assign each node a subnet that containers can use?
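In case it is useful: Nebula’s configuration does have an `unsafe_routes` section under `tun:` for advertising a subnet that lives behind a node, which may be the closest thing to per-node subnets. A sketch of the fragment the *other* nodes would carry (all addresses are placeholders):

```shell
# Fragment appended to /etc/nebula/config.yml on peer nodes (placeholders):
# route a container subnet via the Nebula address of the host that owns it
cat >> /etc/nebula/config.yml <<'EOF'
tun:
  unsafe_routes:
    - route: 10.42.1.0/24    # container subnet behind node 1 (placeholder)
      via: 192.168.100.1     # node 1's Nebula address (placeholder)
EOF
```

This still routes at layer 3, so it would not restore the DHCP-over-the-overlay behaviour described earlier; each host would have to own a static subnet.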

Hello Tom,

I am by no means an expert in Slack Nebula and would not know how to answer these questions.
My understanding is that Nebula cannot be used to support an overlay network without explicitly declaring each node, which does not fit my use case.
If someone manages to prove me wrong I would be more than happy to try to migrate to Nebula to host the overlay network.

Hi again. I will say that ZeroTier One will do exactly what you want; I’m not sure how it compares to peervpn, but I’ve used it for 3 years and it’s been the best (most flexible) overlay networking tool I’ve used, and that’s coming from a network engineer. You can even run MPLS over it.

Nebula is not designed to be bridged as far as I’m aware, but you would be better asking on their github issues list for a clear answer.