Launching a published container: they have different MACs but equal IPs

My LXD is configured with the default lxdbr0 bridge and this is the only network interface listed in the default profile.

I’m doing

lxc snapshot alpine snapshot
lxc publish alpine/snapshot
lxc launch sha alpine1

And repeating that last line a few times with subsequent numbers. I’m surprised to find that all these alpine containers are configured for DHCP and all have different MAC addresses, but they all use the same IP address as the original container.
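To confirm what each clone actually got, you can compare the volatile MAC keys against the addresses LXD reports. A quick check (the container names here assume the clones launched above):

```shell
# Show the random MAC LXD assigned to each clone
for c in alpine1 alpine2 alpine3; do
  echo -n "$c: "
  lxc config get "$c" volatile.eth0.hwaddr
done

# Name and IPv4 columns only
lxc list -c n4
```

If the MACs differ but the IPv4 column repeats, the duplicate address is coming from inside the guests, not from LXD.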

Is that the expected behavior?

Well this isn’t what happens on 3.10.

So the next question is: how do I patch LXD’s DHCP behavior to give these containers proper IP addresses, short of paving my box and reinstalling or upgrading LXD?

So, what’s the version of your current LXD instance?

LXD runs dnsmasq on its managed network interfaces, for both DNS and DHCP.
Each container should and does get a new random MAC address, which is what dnsmasq needs in order to hand out a fresh IP address.
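Since dnsmasq is handing out the leases, its state can be inspected directly. A quick check, assuming a deb-installed LXD 3.x (snap installs keep this under /var/snap/lxd/common/lxd/networks/ instead):

```shell
# One line per active lease: expiry, MAC, IP, hostname, client-id
cat /var/lib/lxd/networks/lxdbr0/dnsmasq.leases
```

If two containers share an IP in this file, dnsmasq itself handed out a duplicate; if not, the duplicate address was configured inside the containers.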

The only thing I can imagine is that the snapshot somehow retained info from the old DHCP lease and the new instances do not ask for a new one.
It is not clear how different versions of LXD would affect this.
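One way to test the stale-lease theory would be to force the interface to renegotiate inside one of the clones. On Alpine, ifupdown drives udhcpc, so bouncing eth0 should request a fresh lease (alpine1 here is assumed to be one of the clones launched earlier):

```shell
lxc exec alpine1 -- sh -c 'ifdown eth0 && ifup eth0'
lxc list alpine1 -c n4
```

If the IP changes after this, the clones were reusing a stale lease; if it stays the same, the address is coming from somewhere else.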

lxc --version gives me 3.0.3.

Sorry I didn’t check these containers individually. In fact if I do

lxc launch images:alpine/3.8 newalpine
lxc console newalpine
cat /etc/network/interfaces

this container uses DHCP. But looking at the original alpine container, it turns out it somehow got a static address.

Next question is, how? I snapshotted it and then published it. I also exported it and then imported it back in because I wanted to get rid of the loop pool that was created by default.

What had this unexpected side effect? I can copy and publish/launch with eth0 remaining dhcp, but I’m curious where the static configuration comes from – I don’t like it at all.

What do you mean by "it somehow got a static address"? A static IP address is an address that you specify in the configuration file. Is the line iface eth0 inet dhcp really missing from the configuration file?

Yes, lxc file edit alpine/etc/network/interfaces gives me

auto eth0
iface eth0 inet static

That alpine1 and so on show the exact same configuration is not surprising, but that the original container ended up with this after moving it back and forth in a few ways is.
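For reference, this is what the file looks like on a freshly launched Alpine image while DHCP is still in effect, so editing it back to this inside the affected containers should restore the expected behavior:

```
auto eth0
iface eth0 inet dhcp
```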

This looks like an Alpine issue. As if the Alpine container image took the DHCP lease and converted it into a static network configuration.

If you can put together a complete set of instructions that reproduces this issue on LXD 3.0.x, it would be useful for figuring out what to do.

I will probably have to re-install on the default loop btrfs pool to reproduce this, if it can be reproduced at all. One difference I notice when exporting and importing into the same pool now is that I get only a single .tar.gz file. When I exported before I would get two files, one of them I believe a .squashfs. Not sure how or why, or whether it even matters.

If I were to reinstall, I would try hard to configure proper ZFS or btrfs right away. But it would be great if LXD could make some additions to facilitate moving images or image trees between pools without resorting to hacks while the daemon is stopped and hoping it might start again afterwards. I read it could be done by directly editing the SQLite database, for example, among other things.

This is how I get a container with the same MAC address as another (volatile.eth0.hwaddr: 00:16:3e:fc:7e:f3) and the same IP address. They’re both on DHCP, though, so this isn’t exactly a repro:

lxc launch images:alpine/3.8 repro
lxc launch images:alpine/3.8 repro-restored
lxc stop repro && lxc stop repro-restored
lxc snapshot repro snapshot
lxc restore repro-restored repro/snapshot
lxc start repro-restored && lxc exec repro-restored -- ifconfig
lxc start repro && lxc exec repro -- ifconfig

You could argue this isn’t quite right either, but since the repro container will at least be off when the repro-restored container starts, the latter wasn’t immediately in error when it came up with the MAC address it had before the snapshot.
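As a side note, if the goal is just to get the restored copy back onto its own address, clearing the volatile MAC key should do it: LXD generates a fresh random MAC on the next start, and dnsmasq then hands out a new lease (assuming the guest really is on DHCP):

```shell
lxc stop repro-restored
lxc config unset repro-restored volatile.eth0.hwaddr
lxc start repro-restored
```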

Another issue is that ping repro gets responses, since that name is listed in hosts, but the responding host’s name matches the new container name repro-restored. However, pinging repro-restored gets 100% loss, to an IP address that isn’t listed at all by lxc list.

Also, the original repro container no longer lists any IPv6 address. If I start them in the opposite order, it is the repro-restored container that lists no IPv6 address. So the IPv6 client handles the conflict better than the IPv4 one, I guess.