Container with routed NIC can't ping its "neighbour" IP Address, since it's also its own broadcast address?

Sorry if this has been asked before, but I couldn’t find anything…

I’m running multiple containers on a server. Since the ISP doesn’t allow me to use multiple MAC addresses, I’m using a routed setup. Each container has its own profile like this:

   nictype: routed
   type: nic

This works perfectly well for almost everything I need, but there is one issue:
Some of the containers are in the same subnet and therefore have consecutive IP addresses. Say,,,
While e.g. .4 and .7 can communicate just fine, .4 and .5 can’t, .5 and .6 can’t, etc.
I think this is because .5 seems to be the broadcast address of .4.

If I am inside and run ping, I get an error like this:

Do you want to ping broadcast? Then -b. If not, check your local firewall rules.

And when checking with ip addr, indeed this is the broadcast address:

inet brd scope global eth0

Is there any simple way to fix this? Right now, sadly, two Discourse instances can’t access my mail server to send emails because they are its IP neighbours…

Note: There are also containers on the host with completely different IPs. But some of them are consecutive.


That suggests whatever is configuring the IP inside the container isn’t getting it quite right, as on my setup with LXC it is indeed possible to reach each adjacent IP from each one, see:

inet brd scope global eth0

Hmm, interesting. I thought LXD had sole responsibility for configuring this network interface?
At least I didn’t configure it anywhere inside the container.

All containers are Debian Buster systems and were imported via lxd-p2c.
NetworkManager was uninstalled after importing, and /etc/network/interfaces is mostly empty:

auto lo
iface lo inet loopback

Funnily enough, I just noticed that not all containers have this issue.
Some list their own IP as broadcast, some list their own IP+1 as broadcast (none of them lists though). Those that list their own IP don’t seem to have this issue.

What else could be responsible for configuring the broadcast address in this way?

Interesting, I’ll try to recreate it. It may be that somehow LXD/liblxc isn’t setting the explicit broadcast address, which then leaves the OS (incorrectly) guessing what it should be.

Either that or it may be a regression in liblxc that LXD uses.

What version of LXD are you using?

This is LXD 4.16 installed from the snap package. Let me know if you need more info (e.g. systemctl status in a wrongly configured host, or something to see what’s running).

Actually, this looks like a regression in liblxc (cc @brauner).

In LXC 4.0.6 (the package that is in Ubuntu Focal) the routed veth interface gets configured as:

inet brd scope global eth0

And in current main branch (and the one bundled with LXD 4.16 it seems), it gets configured as:

inet brd scope global eth0

OK, so I tracked it down to this commit @brauner

Before that commit the broadcast is set to and after it it’s set to (this particular value changes based on the IP address set and isn’t always the same as the address set; sometimes it is the IP after the address that is specified). Either way it is incorrect.


Specifying a zero broadcast address in the liblxc config (or setting it to ) seems to work.
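For anyone hitting this before the fix lands, the equivalent plain-LXC workaround would look something like the fragment below (the address is illustrative, and LXD manages its own containers’ `lxc.net.*` keys, so this sketch applies to raw LXC container configs only):

```
# Plain LXC container config sketch (illustrative values).
# The optional broadcast after the address is forced to zero here. = eth0 =
```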

But the change in the calculated result between the commits seems concerning.

This should fix it:

Awesome, thank you!
Is there a simple way to get these patches into my snap install? Or alternatively: how often are the snaps updated/when can I expect this to reach the snap?

I’m sure @stgraber will be able to push it to the latest/stable channel shortly after it’s merged.

Sorry to bother again - I noticed that the latest/stable channel was updated last week, so I immediately installed that new version and restarted snap.lxd.daemon.service. However, the containers still seem to get these wrong broadcast addresses? Is the patch not included in the new latest/stable version? Do I have to switch to latest/edge to get that?

Yes, it doesn’t look like @stgraber has cherry-picked this one into stable yet. It will be included in 4.18 at least.