OCI Containers Not Getting Dnsmasq Entry

Hi-

I have the standard lxdbr0 bridge from an Incus conversion and observed that OCI containers get a 10.34… address but are not being added to the /var/lib/incus/networks/lxdbr0/dnsmasq.leases file. As a result, I cannot ping the OCI containers on the lxdbr0 bridge by container name, but I can ping them using the 10.34… address.

The only entries appear to be for standard Incus containers. I do not see the IPv6 address for the Incus containers either, but I am not sure that I am reading the file correctly.

Is this correct behavior?

dnsmasq.leases file:

1726486681 00:16:3e:c0:f9:a0 10.34.66.49 storage-demo 01:00:16:3e:c0:f9:a0
1726487291 00:16:3e:c8:f0:b3 10.34.66.48 test 01:00:16:3e:c8:f0:b3
duid 00:01:00:01:2b:36:b2:9d:9c:7b:ef:3d:cd:c6

What Incus version is that on?

I had a similar issue: I solved it by running udhcpc in all my OCI containers.

/sbin/udhcpc -b -R -p /var/run/udhcpc.eth0.pid -i eth0 -x hostname:$HOSTNAME -x lease:300

Please note that my containers are all based on Alpine Linux. You will have to use the DHCP client for your container OS.
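For a Debian-based image, for example, the equivalent would be something along these lines (just a sketch; it assumes the isc-dhcp-client package is installed, the interface is eth0, and the pid file path is arbitrary):

/sbin/dhclient -pf /var/run/dhclient.eth0.pid eth0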

Hi Stephane-

I am using Incus 6.4 on an Ubuntu 22.04 host converted from LXD to Incus. I see I received a response suggesting the issue may be specific to Alpine-based OCI images. My images are indeed based on Alpine, which as you know is very popular.

By the way, the Uptime Kuma OCI image is a Debian 10 image and it also has the same issue.

I would appreciate your thoughts.

The solution posted by alex14641, which I have not tried, does not seem ideal because I assume the change will be lost when the OCI images get refreshed.

Hi Alex14641- Thanks for your response. I have not tried your solution. My images are Alpine-based as well.

What happens when the OCI image gets refreshed? I assume the change will be lost. I have asked Stephane for his thoughts.

This is more of a quick fix: you will also have to restart the udhcpc client whenever the container is restarted. I’m working on adding it to the startup scripts for the container; see the sketch below.
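Something like this is what I have in mind for Alpine (an untested sketch; it assumes the container image actually boots OpenRC with the local service enabled, which is not a given for OCI images):

#!/bin/sh
# /etc/local.d/udhcpc.start -- run by OpenRC's "local" service at boot
# Request a DHCP lease in the background so dnsmasq learns the hostname
/sbin/udhcpc -b -R -p /var/run/udhcpc.eth0.pid -i eth0 -x hostname:$HOSTNAME -x lease:300

The script needs to be executable (chmod +x /etc/local.d/udhcpc.start) and the local service enabled (rc-update add local default).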

Thanks- This issue appears to be more related to the lxdbr0 bridge and dnsmasq process and does not seem specific to Alpine OCI containers. I am running an Uptime Kuma OCI container and had the same DNS entry problem with this Debian 10-based image.

Also, interestingly, I am running an OpenWrt container on another bridge interface under a 10.50… network, and there the OCI containers do get a DNS entry in the OpenWrt router.

I am hoping that Stephane could opine on whether this is a bug with the lxdbr0 bridge and dnsmasq process or expected behavior.

thanks

Incus 6.5 sends the host name to dnsmasq, so the OCI containers should have DNS records.

Nothing will happen to the existing containers unless you run incus rebuild to have them reset to the new image.
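For example, something like this (the remote and container names here are illustrative):

incus rebuild docker:alpine:latest my-oci-container

where docker is an OCI remote added with incus remote add docker https://docker.io --protocol=oci.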

Hi- I just upgraded to 6.5 and can confirm it worked. It looks like the entire dnsmasq.leases file was refreshed, and I can see both the IPv4 and IPv6 DNS entries.

My use case was setting up a Traefik reverse proxy, and I did not want to hardcode the IPs for the container services. I assume this should work fine if Traefik is also on the lxdbr0 bridge.
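For reference, a quick way to check name resolution against the bridge's dnsmasq is something like the following (a sketch assuming the lxdbr0 gateway is 10.34.66.1 and the default DNS domain of incus; incus network get lxdbr0 ipv4.address and incus network get lxdbr0 dns.domain show the actual values on a given host):

dig @10.34.66.1 uptime-kuma.incus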

You are awesome, and I have been amazed at how much progress you have made on the Incus project in such a short time. Thanks for all your efforts.

By the way, I did not rebuild any of the OCI images on the lxdbr0 bridge, and all of them now have DNS entries. The entire dnsmasq.leases file appears to have been rebuilt with the upgrade to 6.5.

Maybe I declared victory too soon on this issue. I know that when I upgraded to 6.5, all the OCI containers were in the dnsmasq.leases file. Now it seems I only see the native Incus containers and the one Debian-based OCI container (Uptime Kuma). All the other OCI containers, which by the way are Alpine Linux-based, are now missing from the file.

Maybe the other poster in this thread was correct and Alpine images are an issue. I did run this command, which appeared to resolve the issue in one of the containers:

/sbin/udhcpc -b -R -p /var/run/udhcpc.eth0.pid -i eth0 -x hostname:$HOSTNAME -x lease:300

Alex14641- were you using Incus version 6.5 and having the issue with DNS entries for Alpine OCI containers?

I have this issue with both 6.4 and 6.5. I noticed it while setting up Incus DNS: the containers would initially have IPv4 addresses (A records), then after a period of time the addresses would be gone. I suspect that any Docker container would have this problem.

I deleted the dnsmasq.leases file and rebooted the server. All the lxdbr0 DNS entries were back. I then deleted and added some containers, and everything was correctly being added and deleted. I checked a few hours later, and all the OCI DNS addresses were gone.

I rebooted the server again, this time leaving the dnsmasq.leases file alone, and the DNS addresses appeared again. I think this is impacting all OCI images and not just Alpine.

Thanks

Thanks for confirming the issue, as I was testing changing the lease expiry key. For now, it seems a simple workaround is to set the key to a high value to avoid the lease being renewed, as below:

  • Set the ipv4.dhcp.expiry config option of the network to a high value like 8765h.
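In other words, something like this (assuming the bridge is named lxdbr0):

incus network set lxdbr0 ipv4.dhcp.expiry=8765h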