External network inaccessible from container after 4.7 upgrade

Everything worked fine, but as soon as the node was upgraded to 4.7, no container can make outgoing connections through the external interface.

The cluster was recently upgraded to 4.7, and the external network that connects to the internet is now inaccessible from containers. The cluster uses fan networking and runs on DigitalOcean, which provides two network interfaces per droplet: one internal and one external. From a container, all internal addresses are reachable, but the outside network is not.
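For anyone reproducing this, the standard LXD CLI can confirm the basics first (a quick sketch; lxdfan0 is the fan network name on this cluster, as seen in the interface listing below):

$ lxd --version      # confirm the daemon is now on 4.7
$ lxc network list   # the fan bridge should be listed, here as lxdfan0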

# from container
$ ip route
default via 240.8.2.1 dev eth0 proto dhcp src 240.8.2.98 metric 100
240.0.0.0/8 dev eth0 proto kernel scope link src 240.8.2.98
240.8.2.1 dev eth0 proto dhcp scope link src 240.8.2.98 metric 10

$ ping 209.97.160.1
PING 209.97.160.1 (209.97.160.1) 56(84) bytes of data.
^C
--- 209.97.160.1 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7158ms

$ ping 209.97.166.50
PING 209.97.166.50 (209.97.166.50) 56(84) bytes of data.
64 bytes from 209.97.166.50: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 209.97.166.50: icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from 209.97.166.50: icmp_seq=3 ttl=64 time=0.057 ms
64 bytes from 209.97.166.50: icmp_seq=4 ttl=64 time=0.040 ms

$ ping 10.88.8.2
PING 10.88.8.2 (10.88.8.2) 56(84) bytes of data.
64 bytes from 10.88.8.2: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 10.88.8.2: icmp_seq=2 ttl=64 time=0.046 ms



# from droplet
$ ip route show
default via 209.97.160.1 dev eth0 proto static
10.15.0.0/16 dev eth0 proto kernel scope link src 10.15.0.5
10.88.8.0/24 dev eth1 proto kernel scope link src 10.88.8.2
209.97.160.0/20 dev eth0 proto kernel scope link src 209.97.166.50
240.0.0.0/8 dev lxdfan0 proto kernel scope link src 240.8.2.1

$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:1d:68:bf:34:3a brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 76:3d:d3:f3:ca:e4 brd ff:ff:ff:ff:ff:ff
4: lxdfan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:33:75:aa brd ff:ff:ff:ff:ff:ff
5: lxdfan0-mtu: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 2e:ce:01:e2:5c:7a brd ff:ff:ff:ff:ff:ff
6: lxdfan0-fan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 22:fd:ed:a3:a9:a5 brd ff:ff:ff:ff:ff:ff
8: vethbf98208f@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP mode DEFAULT group default qlen 1000
    link/ether 16:5a:09:91:4f:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth4d211209@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP mode DEFAULT group default qlen 1000
    link/ether 96:10:ad:66:83:10 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: veth64e6b16b@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP mode DEFAULT group default qlen 1000
    link/ether 62:2d:bf:a7:b2:7e brd ff:ff:ff:ff:ff:ff link-netnsid 3
14: veth84af0c1c@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP mode DEFAULT group default qlen 1000
    link/ether de:8c:08:1d:e4:76 brd ff:ff:ff:ff:ff:ff link-netnsid 4
16: vethd6ee42e0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP mode DEFAULT group default qlen 1000
    link/ether b2:5a:96:64:d6:8c brd ff:ff:ff:ff:ff:ff link-netnsid 2
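Since internal addresses respond but external ones do not, I suspect NAT for the fan subnet is missing rather than a link problem. A check that can be run on the droplet (a sketch; LXD manages its firewall rules via iptables or nftables depending on the host, so the exact rule text may differ):

# if NAT were active, a MASQUERADE rule covering 240.0.0.0/8 going out
# of eth0 should show up in one of these
$ sudo iptables -t nat -S | grep -i masquerade
$ sudo nft list ruleset | grep -i masquerade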

Can you give the output of lxc network show <fan network name>, please, and check whether ipv4.nat is set?

You may be impacted by this fix:

If it's not set, try running:

lxc network set <fan network name> ipv4.nat true
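To verify the key took effect afterwards (standard LXD CLI; the value should read back as true):

$ lxc network get <fan network name> ipv4.nat
true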

Thanks, that fixed the problem. My network was indeed impacted by this fix.
