I have a 3-host LXD cluster (snap LXD 3.10) with no container networking and a default profile.
I want to create a network where all containers can talk/ping each other by name or ip address.
stev@de-db01:~$ sudo lxc network list
+--------+----------+---------+-------------+---------+-------+
| NAME   | TYPE     | MANAGED | DESCRIPTION | USED BY | STATE |
+--------+----------+---------+-------------+---------+-------+
| ens160 | physical | NO      |             | 0       |       |
+--------+----------+---------+-------------+---------+-------+
I issued the commands to prepare the three hosts:
stev@de-db01:~$ sudo lxc network create fan0 --target=de-db01
Network fan0 pending on member de-db01
stev@de-db01:~$ sudo lxc network create fan0 --target=de-db02
Network fan0 pending on member de-db02
stev@de-db01:~$ sudo lxc network create fan0 --target=de-db03
Network fan0 pending on member de-db03
stev@de-db01:~$ sudo lxc network create fan0 bridge.mode=fan
Network fan0 created
stev@de-db01:~$ sudo lxc network attach-profile fan0 default
stev@de-db01:~$ sudo lxc network list
+--------+----------+---------+-------------+---------+---------+
| NAME   | TYPE     | MANAGED | DESCRIPTION | USED BY | STATE   |
+--------+----------+---------+-------------+---------+---------+
| ens160 | physical | NO      |             | 0       |         |
+--------+----------+---------+-------------+---------+---------+
| fan0   | bridge   | YES     |             | 0       | CREATED |
+--------+----------+---------+-------------+---------+---------+
Now, even before DNS and the container IP addresses get sorted out (reboots), the hosts themselves should be able to communicate on the fan network at 240.X.0.1, where X is the last octet of each server's address on the "real" network?
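To make that mapping concrete, here's a small shell sketch of the addressing (assuming the default 240.0.0.0/8 fan over our /24 underlay, so the host's last underlay octet becomes the second fan octet; `fan_gateway` is just an illustrative helper name, not an LXD command):

```shell
# Derive a host's fan gateway address from its underlay IP, assuming the
# default 240.0.0.0/8 fan mapped over a /24 underlay subnet.
fan_gateway() {
  # ${1##*.} strips everything up to the last dot, leaving the last octet
  echo "240.${1##*.}.0.1"
}

fan_gateway 192.168.81.4   # de-db01 -> 240.4.0.1
```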
ip a s shows three new fan network adaptors (fan0, fan0-mtu and fan0-fan), and the routes have been added to the routing table:
stev@de-db01:~$ ip r s
default via 192.168.81.254 dev ens160 proto static
192.168.81.0/24 dev ens160 proto kernel scope link src 192.168.81.4
240.0.0.0/8 dev fan0 proto kernel scope link src 240.4.0.1
Pings to de-db02's fan address work… good:
stev@de-db01:~$ ping 240.9.0.1
PING 240.9.0.1 (240.9.0.1) 56(84) bytes of data.
64 bytes from 240.9.0.1: icmp_seq=1 ttl=64 time=0.551 ms
64 bytes from 240.9.0.1: icmp_seq=2 ttl=64 time=0.321 ms
64 bytes from 240.9.0.1: icmp_seq=3 ttl=64 time=0.333 ms
But pings to de-db03 fail:
stev@de-db01:~$ ping 240.22.0.1
PING 240.22.0.1 (240.22.0.1) 56(84) bytes of data.
From 240.4.0.1 icmp_seq=1 Destination Host Unreachable
From 240.4.0.1 icmp_seq=2 Destination Host Unreachable
From 240.4.0.1 icmp_seq=3 Destination Host Unreachable
From 240.4.0.1 icmp_seq=4 Destination Host Unreachable
stev@de-db01:~$ arp
Address                  HWtype  HWaddress           Flags Mask   Iface
240.9.0.1                ether   7e:10:67:39:68:7a   C            fan0
240.22.0.1                       (incomplete)                     fan0
…
Looking at the other cluster members:
de-db02 works fine to both de-db01 and de-db03, and its ARP table is fully populated.
But de-db03 is in the same state as de-db01: its network adaptors are all set up, but it can't ping de-db01 and doesn't have a MAC address for de-db01 in its ARP table.
If I reboot everything, the containers get IP addresses, and the containers on de-db02 can ping the containers on de-db01, de-db03 and itself, but containers on de-db01 can't ping containers on de-db03 and vice-versa, of course.
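One way I thought of narrowing this down (a sketch; the underlay addresses of de-db02/de-db03 are only inferred from their fan addresses, and `fan_to_underlay` is a made-up helper, not an LXD command) is to map the failing fan address back to its underlay host and then capture on the physical NIC while pinging from another terminal, to see whether encapsulated fan traffic leaves the host at all:

```shell
# Reverse of the fan mapping, assuming the default /8-over-/24 fan on the
# 192.168.81.0/24 underlay: the second fan octet is the host's last octet.
fan_to_underlay() {
  # cut -d. -f2 extracts the second dotted octet of the fan address
  echo "192.168.81.$(echo "$1" | cut -d. -f2)"
}

fan_to_underlay 240.22.0.1   # -> 192.168.81.22 (presumably de-db03)

# Then, on de-db01, watch the underlay while pinging 240.22.0.1 elsewhere:
#   sudo tcpdump -ni ens160 host "$(fan_to_underlay 240.22.0.1)"
```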
So is this me doing it wrong, or is it a bug? Is there anything further I should do/try?