LXD containers lost their internal IPs

Hi,

We have 10 running containers on our server. At some point today, all of them lost their internal IPs (both IPv4 and IPv6). I'm not sure exactly when, but they were running fine with IPs assigned when I last checked yesterday. No one was connected to the server or making any configuration changes in the meantime.

I've now restarted the daemon itself with sudo systemctl reload snap.lxd.daemon, and after a couple of minutes they all got their IPs back. I can't say for sure that the restart fixed it, but I hope it did.

Looking through lxd.log, I didn't find anything suspicious.

Let me know if you need any logs or config extracted so you can look into it. What might have caused this behaviour?

Thanks

Most likely dnsmasq crashed or was killed by something on your system. Reloading LXD restarts dnsmasq, which fixes the issue.
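For anyone else hitting this, a quick way to check whether LXD's per-bridge dnsmasq is still alive, and to reload LXD if it isn't, is something like the sketch below (the bridge name lxdbr0 and the snap packaging are assumptions; substitute your own):

```shell
#!/bin/sh
# Sketch, assuming a snap-packaged LXD and a bridge named "lxdbr0":
# check whether LXD's dnsmasq for the bridge is running, and if not,
# reload LXD so it respawns dnsmasq (containers keep running).
BRIDGE="${1:-lxdbr0}"

if pgrep -f "dnsmasq.*${BRIDGE}" >/dev/null 2>&1; then
    echo "dnsmasq for ${BRIDGE} is running"
else
    echo "dnsmasq for ${BRIDGE} is not running; reloading LXD to respawn it"
    sudo systemctl reload snap.lxd.daemon || echo "reload failed (is LXD installed from the snap?)"
fi
```

After the reload, lxc list should show the container IPs again within a minute or two.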

After taking a closer look at syslog around that time, I'm seeing an eth0 interface name change, which I believe caused the dnsmasq process to crash. No idea what might have caused such behaviour. This is an EC2 instance we're running LXD on.

Feb 12 15:08:21 ip-10-4-18-6 systemd-networkd[881]: eth0: Interface name change detected, eth0 has been renamed to veth505f87a6.
Feb 12 15:08:21 ip-10-4-18-6 kernel: [1294991.637020] lxdbr0: port 3(veth448d3d73) entered disabled state
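For anyone debugging similar symptoms, grepping the logs for these two event types is a quick first check. A minimal sketch (the /var/log/syslog path is the Ubuntu default and an assumption here; adjust for your distro, or use journalctl):

```shell
# Look for interface renames and bridge ports being disabled, the two
# symptoms seen above. Falls back to a no-op if syslog isn't readable.
LOG=/var/log/syslog
[ -r "$LOG" ] || LOG=/dev/null
grep -E "Interface name change detected|entered disabled state" "$LOG" || true
```

Any "Interface name change detected" hit on the interface dnsmasq is bound to is a strong hint that dnsmasq lost its interface and died.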