LXD Container connected to bridge cannot ping gateway

Hi,

I need some help with configuring an LXD container (172.32.0.24) so that it can ping the gateway address (172.32.0.1) and eventually access the internet. The following are my config files:

$lxc --version
3.0.3
#####################
File1- /etc/network/interfaces.d/50-cloud-init.cfg
#####################
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
########################
File2- /etc/network/interfaces.d/55-eth1.cfg
#######################
auto eth1
iface eth1 inet manual
#######################
File 3- /etc/network/interfaces.d/60-lxdbr1.cfg
######################
auto lxdbr1
iface lxdbr1 inet static
address 172.32.11.73
netmask 255.255.240.0
broadcast 172.32.15.255
bridge_ports eth1
bridge_ifaces eth1
up ip link set eth1 up
bridge_stp off
bridge_fd 0
bridge_hello 2
bridge_maxage 0
up ip route add default via 172.32.0.1 dev lxdbr1 table 10001
up ip route add default via 172.32.0.1 dev lxdbr1 metric 10001
up ip route add 172.32.0.0/20 dev lxdbr1 proto kernel scope link src 172.32.11.73 table 10001
up ip rule add from 172.32.11.73 lookup 10001
up ip route add 172.32.0.24 dev lxdbr1 table 10001
up ip rule add to 172.32.0.24 lookup 10001
up ip rule add from 172.32.0.24 lookup 10001
##########################
$ip r
########################
default via 172.31.0.1 dev eth0
default via 172.32.0.1 dev lxdbr1 metric 10001
172.31.0.0/20 dev eth0 proto kernel scope link src 172.31.2.20
172.32.0.0/20 dev lxdbr1 proto kernel scope link src 172.32.11.73
######################
$ip a
####################
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 172.31.2.20/20 brd 172.31.15.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master lxdbr1 state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet6 scope link
valid_lft forever preferred_lft forever
4: lxdbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 172.32.11.73/20 brd 172.32.15.255 scope global lxdbr1
valid_lft forever preferred_lft forever
inet6 scope link
valid_lft forever preferred_lft forever
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet6 scope link
valid_lft forever preferred_lft forever
7: veth5O8A42@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 scope link
valid_lft forever preferred_lft forever
9: veth26SDUM@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 scope link
valid_lft forever preferred_lft forever
###########################

From the host (lxdbr1) I can ping the container.
From the host (lxdbr1) I can ping the gateway.
From the host (lxdbr1) I can ping the internet.
From inside the container (172.32.0.24), I can ping the host.
From inside the container (172.32.0.24), I CANNOT ping the gateway!

Appreciate your help.

Thanks

Are you attempting to join the containers to the same LAN as the host’s network, or are you expecting to have outbound packets NATted to the host’s IP on the LAN?

I am attempting to join the containers to the same LAN as the host’s network.

OK, so your routing config looks a bit more complicated than what you need.

Here is a minimal example of how to set up a host bridge interface called br0.
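Something along these lines (a minimal sketch in the same ifupdown style as your config files; interface names are placeholders for your system):

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0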

Then you just need to ensure your container has a bridged NIC device attached to it, with its parent set to the bridge interface.
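For example (a sketch; “c1” stands in for your container’s name):

lxc config device add c1 eth0 nic nictype=bridged parent=br0 name=eth0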

Hi Tomp,

Thanks for the feedback.

The routing is like that because I have two interfaces on two different networks:
eth0 = 172.31.2.20/20
lxdbr1 = 172.32.11.73/20

The cloud provider where I host does not allow me to create a bridge from eth0, so I have to use another interface, eth1.

This creates routing issues that I need to resolve, so the minimal setup is not sufficient.

Any ideas…

Hi, I’m not sure; I don’t fully understand why you have set up your routing config with routing tables and routing rules the way you have.

However, ignoring that for now, your best bet is to perform some basic network diagnosis so you can frame the problem.

I would suggest setting up a ping from the container to the gateway and then running tcpdump: first on lxdbr1, checking that the ping packets (or, more likely, the ARP resolution packets) are coming from the container to the host. If you can see the packets arriving but no ARP reply packets coming back, then move the tcpdump test to eth1.
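For example (interface names taken from your configs; -n skips name resolution so the capture stays readable):

# On the host, capture ARP and ICMP on the bridge first:
tcpdump -ni lxdbr1 arp or icmp
# If the packets show up there, repeat the capture on the physical interface:
tcpdump -ni eth1 arp or icmp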

If you can see ARP resolution packets going out of eth1 but are getting no reply, then you can be confident the issue is upstream of your host. Does your provider not allow multiple MAC addresses on a single port, perhaps?

If you cannot see the ICMP or ARP resolution packets leaving eth1, then it points to a routing problem on your host, at which point you should try to see which interface (if any) those packets are going out of.

Having established the following:
From the host (lxdbr1) I can ping the container.
From the host (lxdbr1) I can ping the gateway.
From the host (lxdbr1) I can ping the internet.
From inside the container, I can ping the host IP on lxdbr1.
From inside the container, I CANNOT ping the gateway!
The tcpdump results are as follows:
tcpdump -i lxdbr1 arp
ARP, Request who-has 172.32.0.1 tell 172.32.2.47, length 28
ARP, Reply 172.32.0.1 is-at 02:d5:51:72:8d:e6 (oui Unknown), length 42
ARP, Request who-has 172.32.2.47 tell 172.32.0.1, length 42
ARP, Reply 172.32.2.47 is-at 02:96:bf:44:9a:7a (oui Unknown), length 28
ARP, Request who-has 172.32.0.1 tell 172.32.2.48, length 28
ARP, Request who-has 172.32.0.1 tell 172.32.2.48, length 28
ARP, Request who-has 172.32.0.1 tell 172.32.2.48, length 28
ARP, Request who-has 172.32.0.1 tell 172.32.2.48, length 28
ARP, Request who-has 172.32.0.1 tell 172.32.2.48, length 28

172.32.0.1 = Gateway
172.32.2.48 = Container
172.32.2.47 = Host

As can be seen, there is no ARP reply when the container requests the MAC address of the gateway.
ARP requests and replies between the container and the host, on the other hand, seem to work.
On the host, “arp -a” returns:
(172.32.0.1) at 02:d5:51:72:8d:e6 [ether] on lxdbr1
(172.31.0.2) at 02:48:48:40:92:c8 [ether] on eth0
(172.31.0.1) at 02:48:48:40:92:c8 [ether] on eth0

On the container, “arp -a” returns:
(172.32.2.47) at 02:96:bf:44:9a:7a [ether] on eth0
(172.32.0.1) at <incomplete> on eth0

This leads me to think your provider is not allowing multiple MAC addresses to appear on a single port.

Have you tried IPVLAN instead?
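Something like this (a sketch; “c1” stands in for your container’s name, and the address is taken from your first post):

lxc config device add c1 eth0 nic nictype=ipvlan parent=eth1 ipv4.address=172.32.0.24

Note that with IPVLAN the address is assigned statically from the host side, so the container should not run DHCP on that interface.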

Thanks I will try IPVLAN

In the next LXD release, 3.19, we will also have a “routed” NIC type that shares the host’s MAC address but still allows your containers to communicate with the host. That may be a better fit.
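Attaching one is expected to look something like this (a sketch; the exact device keys may change until 3.19 ships, and “c1” stands in for your container’s name):

lxc config device add c1 eth0 nic nictype=routed parent=eth1 ipv4.address=172.32.0.24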

Sounds very good. Thanks! Can you refer me to any documentation on the “routed” NIC type?

Look out for the release notes for LXD 3.19.

Hey tomp,

From my search I can only see release notes for up to LXD 3.18.

Maybe I am doing something wrong?

LXD 3.19 has not been released yet; there will be an announcement for it here on the forum.
It should be released very soon.

Thanks