Network bridge - apparently not working

my scenario

bare metal LXD host: ubuntu 14.04 server
LXD version: 2.16 (running from the snap package)
Container: ubuntu 16.04 (with nginx installed, so I assume it should serve the nginx start page on port 80)

expected behaviour

  • the container gets a v4 IP address from the 172.20.14.xxx range assigned
  • I can see the nginx startpage in a browser from any client in my local network (172.20.14.xxx)
  • I can ping the nginx server from any client and likewise can ping any client from inside the container

actual behaviour

  • the container shows no v4 IP address assigned to it with the br0 profile
  • the container shows v4 IP address 10.146.203.174 (eth0) assigned to it with the default profile

current network settings

inside the container

:~# cat /etc/network/interfaces.d/50-cloud-init.cfg

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
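
To see what the container actually got, something like this on the host should do (the container name nginx is the one used below):

lxc list nginx
lxc exec nginx -- ip -4 addr show eth0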

on my LXD host

cat /etc/network/interfaces
auto p5p1
iface p5p1 inet manual

auto br0
iface br0 inet static
address 172.20.14.21
gateway 172.20.14.1
netmask 255.255.255.0
network 172.20.14.0
broadcast 172.20.14.255

bridge_ports p5p1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
dns-nameservers 172.20.14.1
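
As a quick sanity check (assuming the interface names above), you can confirm on the host that the bridge exists, has p5p1 enslaved, and carries the static address:

# list bridges and their enslaved ports, then the bridge's v4 address
brctl show
ip -4 addr show br0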

the LXD profile assigned to my nginx container

~$ lxc profile show br0
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: test-storage
    type: disk
name: br0
used_by:
- /1.0/containers/nginx
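
(For reference, a container picks up a profile like this at launch time with something along these lines; the image alias is an assumption, the container and profile names are the ones above:

lxc launch ubuntu:16.04 nginx -p br0
)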

any pointers on what is going wrong here?

  1. Do you have a DHCP server running on that network?
  2. Can you post "lxc config show --expanded" for that container?

  1. yes, there is a DHCP server running on my router @172.20.14.1; I'll try a static IP in the container and report back whether that helps
  2. here you go

:~$ lxc config show --expanded nginx
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20170803)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20170803"
  image.version: "16.04"
  volatile.base_image: e18a3ad7d8b1978670bfa01d03ca4736d3a221a4dd38dab406062cdb09245381
  volatile.eth0.hwaddr: 00:16:3e:0a:ac:5f
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: test-storage
    type: disk
ephemeral: false
profiles:
- br0
stateful: false
description: ""

can it be that this is a bug in the snap package?

with a static IP in the container …

auto eth0
iface eth0 inet static
address 172.20.14.111
netmask 255.255.255.0
gateway 172.20.14.1
network 172.20.14.0
dns-nameservers 172.20.14.1

… I get the IP address listed in lxc list, but I can neither see nginx nor ping into or out of the container

I can, however, ping my LXD host (but I think that’s a given)

Are you running on VMWare by any chance?

A lot of what you're seeing could be explained by MAC filtering on your host's network interface, like what VMware does on all its networks (but it's not the only such case).

Oh, you said bare metal, so I guess not.

no, it’s a bare metal ubuntu 14.04

So with a static IP you can ping your host and your host can ping the container?
Can your host access nginx in the container too?

  • host can ping container, container can ping host

  • container can not ping local network, local network can not ping container

  • local network can ping host, and vice versa

actually I had a pretty similar problem (and could not solve it) when testing this in a VirtualBox VM
I was thinking it was a VBox problem, but now I have my doubts about that

The behavior above exactly matches what you'd get if there was some kind of MAC filtering going on somewhere… What's your network infrastructure like outside of your host? Could it be that the switch your host is connected to is doing MAC filtering?

I think you’re now at the stage where running tcpdump against p5p1 is probably the best way to figure out what’s going on. See whether any of the incoming traffic from your other machines even reaches the host and see if the traffic from the container is making it out of the host.
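
Something along these lines should show both directions (interface names are the ones above; the filters are just a sketch, adjust to taste):

# on the host: watch ARP and ICMP on the physical NIC and on the bridge
sudo tcpdump -ni p5p1 arp or icmp
sudo tcpdump -ni br0 arp or icmp

# while DHCP is still in play, this shows whether the container's
# DHCPDISCOVER ever makes it out of the host
sudo tcpdump -ni p5p1 port 67 or port 68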

Oh, I forgot. Bridge firewalling on Linux could also explain this weird behavior.

sudo iptables -L -n -v

May be useful here to see if there’s any such firewalling going on.

(yes, that’s firewalling on the same L2 subnet which is weird, but Linux does let you do that and on some kernels will have that enabled by default, so your rules may affect more than what you think they will)
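
One way to check whether that's in play (a sketch, assuming the bridge/br_netfilter module is loaded so the sysctl exists):

# 1 means frames bridged on the same L2 segment also traverse the
# iptables FORWARD chain
cat /proc/sys/net/bridge/bridge-nf-call-iptables

# temporarily turn it off for a test
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0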

that's a lot to digest (another reminder of how I have only scratched the surface of networking so far) … will poke around and see what I can come up with.

My network is managed by an Asus RT-N66U router, which connects to a modem provided by my ISP that I have no access to.

In the so-called "Network Map" of the router I can see the static IP of my container. So to some extent the router is aware of its existence, but it can't ping it.

as far as I can tell there is no MAC filtering going on in the aforementioned router. The only place where I can even find such an option is in the wireless settings, so even if that were enabled it should not be able to affect the problematic server (which is wired to my network)

~$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 4947K packets, 5879M bytes)
 pkts bytes target prot opt in     out    source     destination
    0     0 ACCEPT tcp  --  lxdbr0 *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53 /* generated for LXD network lxdbr0 */
  629 41678 ACCEPT udp  --  lxdbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:53 /* generated for LXD network lxdbr0 */
  127 41656 ACCEPT udp  --  lxdbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:67 /* generated for LXD network lxdbr0 */
    0     0 ACCEPT udp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT tcp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT udp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT tcp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67

Chain FORWARD (policy DROP 145K packets, 45M bytes)
 pkts bytes target            prot opt in      out      source            destination
 288K  434M ACCEPT            all  --  *       lxdbr0   0.0.0.0/0         0.0.0.0/0         /* generated for LXD network lxdbr0 */
 214K   13M ACCEPT            all  --  lxdbr0  *        0.0.0.0/0         0.0.0.0/0         /* generated for LXD network lxdbr0 */
    0     0 ACCEPT            all  --  *       virbr0   0.0.0.0/0         192.168.122.0/24  ctstate RELATED,ESTABLISHED
    0     0 ACCEPT            all  --  virbr0  *        192.168.122.0/24  0.0.0.0/0
    0     0 ACCEPT            all  --  virbr0  virbr0   0.0.0.0/0         0.0.0.0/0
    0     0 REJECT            all  --  *       virbr0   0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT            all  --  virbr0  *        0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
2201K  614M DOCKER-ISOLATION  all  --  *       *        0.0.0.0/0         0.0.0.0/0
    0     0 ACCEPT            all  --  *       docker0  0.0.0.0/0         0.0.0.0/0         ctstate RELATED,ESTABLISHED
    0     0 DOCKER            all  --  *       docker0  0.0.0.0/0         0.0.0.0/0
    0     0 ACCEPT            all  --  docker0 !docker0 0.0.0.0/0         0.0.0.0/0
    0     0 ACCEPT            all  --  docker0 docker0  0.0.0.0/0         0.0.0.0/0

Chain OUTPUT (policy ACCEPT 4304K packets, 5602M bytes)
 pkts bytes target prot opt in  out     source     destination
    0     0 ACCEPT tcp  --  *   lxdbr0  0.0.0.0/0  0.0.0.0/0   tcp spt:53 /* generated for LXD network lxdbr0 */
  611 69120 ACCEPT udp  --  *   lxdbr0  0.0.0.0/0  0.0.0.0/0   udp spt:53 /* generated for LXD network lxdbr0 */
  126 42593 ACCEPT udp  --  *   lxdbr0  0.0.0.0/0  0.0.0.0/0   udp spt:67 /* generated for LXD network lxdbr0 */
    0     0 ACCEPT udp  --  *   virbr0  0.0.0.0/0  0.0.0.0/0   udp dpt:68

Chain DOCKER (1 references)
 pkts bytes target prot opt in  out  source     destination

Chain DOCKER-ISOLATION (1 references)
 pkts bytes target prot opt in  out  source     destination
2201K  614M RETURN all  --  *   *    0.0.0.0/0  0.0.0.0/0

the only thing that strikes me here is that my br0 does not appear

Your FORWARD chain defaults to a policy of DROP without any rule explicitly accepting the traffic from your containers, so that's going to drop anything going in or out of your containers.

The default policy is ACCEPT, so something on your system (my guess would be whatever set up the DOCKER chains) changed this to DROP and is now blocking all other routed traffic.

iptables -P FORWARD ACCEPT

Will fix the issue by setting the default policy back to ACCEPT, though that won’t survive a reboot, so you should get to the bottom of whatever made that change on your system.
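
If you'd rather keep the DROP policy, an explicit accept for the bridge should do it too (a sketch, assuming bridge firewalling is what's pulling your br0 traffic into FORWARD, as described above):

# with bridge-nf-call-iptables enabled, bridged frames hit FORWARD with
# br0 as both the input and the output interface
sudo iptables -I FORWARD -i br0 -o br0 -j ACCEPT

Either way, the rule or policy needs to be persisted to survive a reboot, e.g. with something like the iptables-persistent package.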

that DOES actually work. thanks. I guess I'll be diving into iptables then.

Thanks, this actually helped me as well. I was banging my head against the wall, and the strange thing is that there were indeed FORWARD rules present for the lxdbr0 interface… Very odd. Pinging the default gw worked; it just wouldn't forward the traffic like it should.

Anyway, fixing it like this lets me postpone digging deeper into it for a while, since I don’t reboot very often. :joy: Again, thanks!