LXD cluster with fan network and wireguard

Hello.

I have set up three nodes on VPSes for playing around, and I want them to form an LXD cluster. I am using Ubuntu 18.04 with LXD 3.5. I have set up a VPN using WireGuard, so I have a private network on wg0 and can reach all of my nodes over it.
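For anyone reproducing this, the wg0 setup is nothing special; roughly a wg-quick style config like this on each node (keys and public endpoints are placeholders, 51820 is just the default WireGuard port, and the addresses match the outputs I paste further down):

[Interface]
Address = 10.0.1.1/16
PrivateKey = <lxd-1 private key>
ListenPort = 51820

[Peer]
# lxd-2; one such block per other node
PublicKey = <lxd-2 public key>
AllowedIPs = 10.0.1.2/32
Endpoint = <lxd-2 public IP>:51820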

I created a cluster, made my nodes join it, and finally added a fan bridge network (fanbr0), which I have attached to the containers as eth0. I can create containers and they get scheduled correctly across all three nodes. I can exec into each of them and ping containers on any of the nodes.
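Roughly, the steps, in case anyone wants to reproduce this (I attached the bridge through the default profile; container names are just examples):

lxc cluster list                                   # all nodes show ONLINE
lxc network create fanbr0 bridge.mode=fan fan.underlay_subnet=10.0.0.0/16   # exact command repeated further down
lxc network attach-profile fanbr0 default eth0     # containers get fanbr0 as eth0
lxc launch ubuntu:18.04 c1                         # repeated a few times; the cluster spreads them over the nodes
lxc list                                           # shows each container's 240.x.y.z address and its LOCATION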

However, that’s where my luck ends. Any other networking I try fails completely when the containers are on different nodes.

  • I can’t use the .lxd names (i.e. I can’t ping container-name.lxd, which I understand should work since 3.4).
  • I set up an HTTP server in a container on one host and tried an HTTP GET from a container on another host. It just hangs (quick checks for this are sketched right after this list).
  • I can’t traceroute between them, but traceroute -T works.
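For concreteness, these are the kinds of tests I mean (container names and the 240.x address are placeholders for whatever lxc list shows on your side):

# on lxd-2: serve something simple from a container
lxc exec c2 -- python3 -m http.server 8080

# on lxd-1: ping works, but the TCP request just hangs
lxc exec c1 -- ping -c 3 240.1.2.10
lxc exec c1 -- curl -v --max-time 5 http://240.1.2.10:8080/

# on the hosts, watch whether the traffic actually crosses the tunnel
tcpdump -ni wg0
tcpdump -ni fanbr0 host 240.1.2.10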

I thought I was having trouble with my firewall (ufw), so on all hosts I allowed incoming traffic on fanbr0. That didn’t help.
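Concretely, the rules I mean are along these lines (ufw syntax; adjust the port if you moved WireGuard off the default 51820):

ufw allow in on fanbr0 to any   # overlay traffic arriving on the fan bridge
ufw allow in on wg0 to any      # traffic arriving over the wireguard tunnel
ufw allow 51820/udp             # the wireguard tunnel itself on the public interface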

I feel I am close to success (being able to contact my containers on all nodes) but I am missing something (potentially simple). Can anyone give me a hint?

I think @stgraber is in the best position to answer here, given he’s the one that added fan networking support.

PS: Re “that’s where my luck ends”: it’s actually nice to see that you got this far without any hassles. Stephane, perhaps there’s some feature/logic we might add to make this last mile even easier?

Can you show the output of ip -4 addr show and ip -4 route show on all servers?

My guess is that the fan got configured to use your main network interface (by default it looks for the interface holding the default gateway) and so isn’t using your WireGuard interface for host-to-host communication.
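A quick way to check which underlay the fan ended up on is the network config itself:

lxc network show fanbr0
# the config section shows fan.underlay_subnet; if it is not the wireguard
# subnet (10.0.0.0/16 here), the fan traffic is going over the public
# interface rather than wg0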

Hello,

Thanks for the answers.

I have reduced my cluster to 2 nodes.

lxd-1:

root@lxd-1 ~ # ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet PUBLIC_IP_CENSORED/32 brd PUBLIC_IP_CENSORED scope global eth0
valid_lft forever preferred_lft forever
6: fanbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1370 qdisc noqueue state UP group default qlen 1000
inet 240.1.1.1/8 scope global fanbr0
valid_lft forever preferred_lft forever
13: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
inet 10.0.1.1/16 scope global wg0
valid_lft forever preferred_lft forever

root@lxd-1 ~ # ip -4 route
default via 172.31.1.1 dev eth0
10.0.0.0/16 dev wg0 proto kernel scope link src 10.0.1.1
172.31.1.1 dev eth0 scope link
240.0.0.0/8 dev fanbr0 proto kernel scope link src 240.1.1.1


lxd-2:

root@lxd-2 ~ # ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet PUBLIC_IP_CENSORED/32 brd PUBLIC_IP_CENSORED scope global eth0
valid_lft forever preferred_lft forever
12: fanbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1370 qdisc noqueue state UP group default qlen 1000
inet 240.1.2.1/8 scope global fanbr0
valid_lft forever preferred_lft forever
22: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
inet 10.0.1.2/16 scope global wg0
valid_lft forever preferred_lft forever

root@lxd-2 ~ # ip -4 route show
default via 172.31.1.1 dev eth0
10.0.0.0/16 dev wg0 proto kernel scope link src 10.0.1.2
172.31.1.1 dev eth0 scope link
240.0.0.0/8 dev fanbr0 proto kernel scope link src 240.1.2.1

I created my fanbr0 by doing:

lxc network create fanbr0 bridge.mode=fan fan.underlay_subnet=10.0.0.0/16
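For what it’s worth, that underlay matches the addresses above: with a /16 underlay and the default 240.0.0.0/8 overlay, the fan turns the host part of each underlay address into a per-host /24, i.e.:

10.0.1.1 (wg0 on lxd-1)  ->  overlay subnet 240.1.1.0/24, bridge at 240.1.1.1
10.0.1.2 (wg0 on lxd-2)  ->  overlay subnet 240.1.2.0/24, bridge at 240.1.2.1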

I just tested this with WireGuard: I can ping the other node over the fan, but there is no other connectivity. However, trying the same thing with a ZeroTier-backed underlay works fine for me.

Does the fan network use VXLAN multicast? If so, maybe something is being blocked here. Multicast works with ZeroTier out of the box, but I’m not sure it works with WireGuard, since WireGuard is just a point-to-point tunnel. I guess you could manually set the VXLAN tunnel endpoints, though. I haven’t had much experience with WireGuard; I only started messing with it on Friday.

https://lists.zx2c4.com/pipermail/wireguard/2016-December/000814.html
https://lists.zx2c4.com/pipermail/wireguard/2017-December/002169.html
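If you want to check whether multicast is in play, the vxlan device LXD creates for the fan should show it (I’m not sure what LXD names it, hence the generic filter), and manually pointing it at unicast endpoints would look something like the untested sketch below:

ip -d link show type vxlan
# look for a "group <multicast address>" (multicast flooding) versus explicit
# local/remote unicast addresses in the details

# untested: flood unknown/broadcast traffic to a specific peer instead of a group
bridge fdb append 00:00:00:00:00:00 dev <vxlan device> dst <peer underlay IP>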

I can confirm it works with ZeroTier as well, which might support bodleytunes’ multicast theory. That would be a shame, though. Is there anything we can do? Perhaps we are just missing some sort of routing rule or daemon (bird, quagga?).

Are there alternatives to ZeroTier that we know work? I would like to avoid heavyweight setups (keeping things simple is a plus of both WireGuard and ZeroTier), but I am much less comfortable relying on ZeroTier.


For anyone stumbling over this looking for a solution:

Add each host’s fan overlay IP address to the AllowedIPs list of that host’s peer in your WireGuard config. This works even without multicast.

For example:

[Peer]
PublicKey = <pubkey>
AllowedIPs = <wireguard_ip>/32, <fan_addr_ip>/32
Endpoint = <lxd1>:51820

Here, the WireGuard IP is the underlay IP and the fan addr IP is the overlay IP (that host’s address on fanbr0).
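Filled in with the addresses from earlier in the thread, the peer section on lxd-1 would be something like this (mirror it on lxd-2 with 10.0.1.1/32 and 240.1.1.1/32, then reload the interface, e.g. wg-quick down wg0 && wg-quick up wg0, so the new AllowedIPs take effect):

[Peer]
# lxd-2
PublicKey = <lxd-2 pubkey>
AllowedIPs = 10.0.1.2/32, 240.1.2.1/32
Endpoint = <lxd-2 public IP>:51820

If container-to-container traffic still gets dropped, widening the fan entry to the host’s whole /24 (240.1.2.0/24 here) is the obvious next thing to try, though that part is just a guess.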
