Help for LXD cluster network

When running the following command I get this error:
lxc network create fanbr0 bridge.mode=fan fan.underlay_subnet=10.8.8.0/24
Error: Failed to run: ip link set dev fanbr0-fan mtu 1450 up: RTNETLINK answers: Invalid argument

Are you using private networking on your host? Or using Zerotier?

No, not using Zerotier, just a private network.

Can you show your private network configuration? ifconfig -a or something?

See the details below:

ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.8.8.4  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::4001:aff:fe08:804  prefixlen 64  scopeid 0x20<link>
        ether 42:01:0a:08:08:04  txqueuelen 1000  (Ethernet)
        RX packets 57681  bytes 110100133 (110.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51188  bytes 3813673 (3.8 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 502  bytes 43357 (43.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 502  bytes 43357 (43.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Your netmask doesn’t look correct. The Fan network won’t work this way because your netmask is 255.255.255.255 (/32); you need to change it to 255.255.0.0, which is CIDR /16, or 255.255.255.0 for CIDR /24.
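A quick way to confirm the prefix length on the host (a diagnostic sketch; `ens4` is the interface name from the output above):

```shell
# Show the IPv4 address and prefix length on the underlay interface.
# A fan underlay needs a real subnet (/16 or /24); on this host the
# output should show a /32, matching the 255.255.255.255 netmask above.
ip -o -4 addr show dev ens4

# Once the interface carries a proper /24, the original command should work:
lxc network create fanbr0 bridge.mode=fan fan.underlay_subnet=10.8.8.0/24
```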

If you can’t change it, it might be that your cloud provider doesn’t support keeping your machines in the same subnet.

You are right, the netmask is /32, since cloud providers generally only hand out unicast /32 addresses even within a virtual private cloud. This is at least the only option on Google Cloud.
That is why I am not auto-creating the fan and am instead running the command manually, to supply the IP address when creating the fan-mode bridge.

I suggest you setup some kind of private networking.

I’ve had great luck with Zerotier: it was just a matter of installing Zerotier, joining the network, and using its subnet as the fan underlay setting. It took me 5 minutes to set up.
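For reference, that route looks roughly like this (a sketch, not the exact commands used above; `<network-id>` is your own 16-digit Zerotier network ID, and `10.147.17.0/24` is an assumed Zerotier-managed subnet — substitute whatever your network assigns):

```shell
# Install Zerotier via the official install script and join your network.
curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join <network-id>
sudo zerotier-cli listnetworks    # confirm the assigned address and subnet

# Then use the Zerotier subnet as the fan underlay instead of the /32 VPC address.
lxc network create fanbr0 bridge.mode=fan fan.underlay_subnet=10.147.17.0/24
```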

I am using Google Cloud’s networking stack to create a virtual private network, but if you have worked with it you know there are limitations: it does not support GRE, and it only allocates /32 IP addresses via its DHCP. I have set up my own overlay network on top of it, but the response time went from 0.210 ms to 0.570 ms.
That is more than a twofold impact on network latency, so I was trying to see if I could work with their networking stack directly, without running my own overlay.
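Since GCP’s fabric drops GRE but does forward unicast UDP, one common way to build that kind of overlay is unicast VXLAN between the instances. A sketch under that assumption (not necessarily the setup described above; `10.8.8.5` stands in for a hypothetical peer’s VPC address):

```shell
# On host A (10.8.8.4): create a VXLAN device that tunnels over UDP port 4789,
# which the VPC forwards even though it drops GRE.
sudo ip link add vxlan0 type vxlan id 42 dev ens4 dstport 4789 local 10.8.8.4

# Point the all-zeros FDB entry at the peer so broadcast/unknown-unicast
# traffic is replicated to it via unicast (no multicast needed).
sudo bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 10.8.8.5

# Give the overlay its own subnet and bring it up.
sudo ip addr add 192.168.100.1/24 dev vxlan0
sudo ip link set vxlan0 up
# Mirror this on host B with local/dst swapped and 192.168.100.2/24.
```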

@zacksiri

I use Zerotier, but for spanning the WAN / internet (similar to DMVPN, as in a routed mesh). It has encryption overhead by default (ChaCha/Poly1305), whereas VXLAN will not be encrypted.

Performance of Zerotier is comparable to IPsec, albeit slightly slower since it is not a kernel module but runs in userspace; still, it is pretty fast.

I’m not sure whether it would be suitable for cross-datacentre bridging/routing, as you may need more performance.

You can turn encryption off in Zerotier if you want, though I’ve never tried that.

Cheers!
Jon.

Hey

I’m pretty happy with its performance out of the box: I’m getting sub-1 ms ping times, and my web service mostly responds within 22 ms through the routing mesh. Very happy with the setup.