App Containers (OCI) Not Getting IPv4

Hello, I recently updated my system, and Incus along with it. However, after updating, my app containers are no longer getting an IPv4 address (they do seem to get an IPv6 address, though).

My normal system containers are still getting an IPv4 address, though, so I'm not sure it's my firewall.

I found a somewhat related topic: OCI Containers Not Getting Dnsmasq Entry - #3 by alex14641

The command given in that comment did fix my issue temporarily (until the container was restarted), but only for Alpine-based images…
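If I remember right, the workaround amounted to manually running a DHCP client inside the container, something along the lines of the following (my paraphrase rather than the exact command from that post; udhcpc is the BusyBox DHCP client that Alpine-based images ship with):

incus exec <container> -- udhcpc -i eth0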

Has something changed recently that could cause this? I am on Incus 6.12 (previously 6.11).

I have a container with a static IP set, and even that one isn't getting an IP or internet access.
I checked /var/lib/incus/networks/incusbr0/dnsmasq.leases, and the containers set to use DHCP do have a lease and an IP there, but they still have no internet access and no IP shows up in Incus.
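For what it's worth, this is how I am comparing the two, the leases file on one side and what Incus reports on the other (the column flags just select name, state, IPv4 and type):

cat /var/lib/incus/networks/incusbr0/dnsmasq.leases
incus list -c ns4t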

Thanks

I believe this issue was fixed in version 6.7.
Please post one of the container logs and the Incus server log.
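For example, something like this (assuming a systemd-based install; adjust the unit name if yours differs):

incus info --show-log <instance-name>
journalctl -u incus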

There are no LXC logs for any of the containers from what I can see, and not much in the Incus server log either, from what I can tell:


May 12 15:58:10 server systemd[1]: Started Incus Container and Virtual Machine Management Daemon.

May 12 16:03:04 server incusd[55754]: time="2025-05-12T16:03:04-04:00" level=warning msg="Failed getting exec control websocket reader, killing command" PID=59334 err="websocket: close 1005 (no status)" instance=AdGuard interac>

May 12 16:20:47 server incusd[55754]: time="2025-05-12T16:20:47-04:00" level=error msg="Failed starting instance" action=start created="2025-03-10 01:54:46.561858066 +0000 UTC" ephemeral=false instance=LLM instanceType=containe>

May 12 16:20:47 server incusd[55754]: time="2025-05-12T16:20:47-04:00" level=error msg="Failed starting instance" action=start created="2025-02-23 22:51:45.447484251 +0000 UTC" ephemeral=false instance=Container2 instanceType=cont>

May 12 16:21:08 server incusd[55754]: time="2025-05-12T16:21:08-04:00" level=error msg="Failed starting instance" action=start created="2025-02-23 22:51:45.447484251 +0000 UTC" ephemeral=false instance=Container2 instanceType=cont>

May 12 16:23:02 server incusd[55754]: time="2025-05-12T16:23:02-04:00" level=error msg="Failed starting instance" action=start created="2025-03-10 01:54:46.561858066 +0000 UTC" ephemeral=false instance=LLM instanceType=containe>

May 12 16:50:14 server dnsmasq-dhcp[55842]: router advertisement on fd42:ef42:57f1:5ba2::, old prefix for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[55842]: DHCPv6 stateless on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[55842]: DHCPv4-derived IPv6 names on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[55842]: router advertisement on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq[66197]: started, version 2.91 cachesize 150

May 12 16:50:14 server dnsmasq[66197]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset nftset auth DNSSEC loop-detect inotify dumpfile

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCP, IP range 10.243.146.2 -- 10.243.146.254, lease time 1h

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCPv6 stateless on incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCPv4-derived IPv6 names on incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: router advertisement on incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCPv6 stateless on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCPv4-derived IPv6 names on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: router advertisement on fd42:ef42:57f1:5ba2::, constructed for incusbr0

May 12 16:50:14 server dnsmasq-dhcp[66197]: IPv6 router advertisement enabled

May 12 16:50:14 server dnsmasq-dhcp[66197]: DHCP, sockets bound exclusively to interface incusbr0

May 12 16:50:14 server dnsmasq[66197]: using only locally-known addresses for incus

May 12 16:50:14 server dnsmasq[66197]: reading /etc/resolv.conf

May 12 16:50:14 server dnsmasq[66197]: using nameserver 1.1.1.1#53

May 12 16:50:14 server dnsmasq[66197]: using nameserver 1.1.1.2#53

May 12 16:50:14 server dnsmasq[66197]: using only locally-known addresses for incus

May 12 16:50:14 server dnsmasq[66197]: read /etc/hosts - 3 names

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/AdGuard_AdGuard.eth--1

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/ddns.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/frigate.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/Container2.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/LLM.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_jellyfin.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_jellyseerr.eth--0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_prowlarr.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_qbittorrent.eth--1

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_radarr.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/media-server_sonarr.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/rpi.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/test2.eth0

May 12 16:50:14 server dnsmasq-dhcp[66197]: read /var/lib/incus/networks/incusbr0/dnsmasq.hosts/Test.eth0

May 12 16:53:02 server incusd[55754]: time="2025-05-12T16:53:02-04:00" level=error msg="Failed starting instance" action=start created="2025-02-23 22:51:45.447484251 +0000 UTC" ephemeral=false instance=Container2 instanceType=cont>

May 12 16:53:02 server incusd[55754]: time="2025-05-12T16:53:02-04:00" level=error msg="Failed starting instance" action=start created="2025-03-10 01:54:46.561858066 +0000 UTC" ephemeral=false instance=LLM instanceType=containe>

May 12 16:59:03 server dnsmasq-dhcp[55939]: router advertisement on fd42:c90:f661:b18f::, old prefix for media-server

May 12 16:59:03 server dnsmasq-dhcp[55939]: DHCPv6 stateless on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq-dhcp[55939]: DHCPv4-derived IPv6 names on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq-dhcp[55939]: router advertisement on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq[71884]: started, version 2.91 cachesize 150

May 12 16:59:03 server dnsmasq[71884]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset nftset auth DNSSEC loop-detect inotify dumpfile

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCP, IP range 10.193.213.2 -- 10.193.213.254, lease time 1h

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCPv6 stateless on media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCPv4-derived IPv6 names on media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: router advertisement on media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCPv6 stateless on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCPv4-derived IPv6 names on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: router advertisement on fd42:c90:f661:b18f::, constructed for media-server

May 12 16:59:03 server dnsmasq-dhcp[71884]: IPv6 router advertisement enabled

May 12 16:59:03 server dnsmasq-dhcp[71884]: DHCP, sockets bound exclusively to interface media-server

May 12 16:59:03 server dnsmasq[71884]: using only locally-known addresses for incus

May 12 16:59:03 server dnsmasq[71884]: reading /etc/resolv.conf

May 12 16:59:03 server dnsmasq[71884]: using nameserver 1.1.1.1#53

May 12 16:59:03 server dnsmasq[71884]: using nameserver 1.1.1.2#53

May 12 16:59:03 server dnsmasq[71884]: using only locally-known addresses for incus

May 12 16:59:03 server dnsmasq[71884]: read /etc/hosts - 3 names

What’s the output of

incus version

Client version: 6.12
Server version: 6.12

I was looking through the changelog for Incus 6.12 and noticed this PR: OCI improvements by stgraber · Pull Request #1873 · lxc/incus · GitHub

Just out of curiosity, I made a patch to revert that PR and applied it, and to my surprise all of my OCI containers are getting an IP address now. I'm not familiar enough with the Incus code base to say what actually caused this, however.
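In case someone wants to try the same thing, reverting and rebuilding is essentially the following (these are the generic steps, not my exact session; the commit hash placeholder is whatever the merge commit of PR #1873 is, and the build follows the usual upstream instructions):

git clone https://github.com/lxc/incus
cd incus
git revert -m 1 <merge-commit-of-PR-1873>
make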

On 6.12, do you see anything useful in the forkdns log that should now be present in /var/log/incus/INSTANCE-NAME/ ?

There is no forkdns log. Only

console.log                         forknet-dhcp.log                    lxc.log                             proxy.docker-port-0.0.0.0-9696.log  
forkexec.log                        forkstart.log                       lxc.log.old                         

What’s in forknet-dhcp.log?

time="2025-05-12T13:22:01+08:00" level=info msg="running dhcp" interface=eth0
time="2025-05-12T13:22:01+08:00" level=error msg="Giving up on DHCP, couldn't bring up interface" interface=eth0

It seems to be like that for all containers.

This would normally suggest that eth0 didn't exist in the container.

Can you show the output of incus config show --expanded NAME for one of the affected containers?
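In the meantime, a quick way to check whether eth0 actually exists inside the container (the first form assumes the image ships iproute2 or BusyBox; the second only needs /sys):

incus exec NAME -- ip link show eth0
incus exec NAME -- cat /sys/class/net/eth0/operstate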

❯ incus config show --expanded tdfggdf
architecture: x86_64
config:
  environment.HOME: /root
  environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  environment.TERM: xterm
  environment.TZ: Asia/Shanghai
  image.architecture: x86_64
  image.description: docker.io/jeessy/ddns-go (OCI)
  image.id: jeessy/ddns-go
  image.type: oci
  oci.cwd: /app
  oci.entrypoint: /app/ddns-go -l :9876 -f 300
  oci.gid: "0"
  oci.uid: "0"
  volatile.base_image: 88bf5ce3da586fea710af0bfd0f40a779987b605da6c0cdd94fb3bbc04152585
  volatile.cloud-init.instance-id: d0cdd584-d0ad-4d1a-a305-b243b16af676
  volatile.container.oci: "true"
  volatile.eth0.hwaddr: 10:66:6a:15:cd:fe
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 036f992a-7366-42b2-b5bb-e0ffde6d0108
  volatile.uuid.generation: 036f992a-7366-42b2-b5bb-e0ffde6d0108
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

I believe the error log might be misleading. The actual issue seems to occur here, where the code attempts to execute:

ip link set dev l.Name up

Since the interface is likely already up, it may return an unexpected value instead of nil, potentially causing unintended behavior. The relevant function can be seen in this part of the code.

This is just an initial observation—I haven’t tested or debugged it yet, as I’m not entirely sure how to approach debugging in this case.
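To make the suspicion a bit more concrete, this is the kind of pattern I would expect around that call. It is only a minimal sketch using the vishvananda/netlink package, not the actual Incus code, and the function name is mine; the idea is simply to treat an interface that is already up as success rather than an error:

// Hypothetical sketch, not the Incus implementation: bring an interface up,
// treating "already up" as success instead of a fatal error.
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func ensureLinkUp(name string) error {
	link, err := netlink.LinkByName(name)
	if err != nil {
		return fmt.Errorf("failed to find interface %q: %w", name, err)
	}

	// If the interface already has the UP flag, there is nothing to do.
	if link.Attrs().Flags&net.FlagUp != 0 {
		return nil
	}

	if err := netlink.LinkSetUp(link); err != nil {
		return fmt.Errorf("failed to bring up interface %q: %w", name, err)
	}

	return nil
}

func main() {
	if err := ensureLinkUp("eth0"); err != nil {
		log.Fatal(err)
	}
}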

I am also seeing this on:

Client version: 6.12
Server version: 6.12

I am sure it was working for a few months this year. I too am running Incus on a self-managed bridge, br0.

The only workaround I found, in another container, is to run a DHCP client in the Incus init before the container init. This container does not have a DHCP client as far as I can see, so I'm not even sure how it normally gets an address. Some systemd magic, maybe.
Anyway, what I did was set a reserved lease in Technitium and use Caddy to forward a name to that IP, since the IP can't be reused. That seems to work.
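Roughly what that looks like on the Caddy side (the hostname, IP and port here are made up for the example; the real IP is the reserved lease from Technitium):

viseron.home.lan {
    reverse_proxy 10.0.0.50:8888
}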

incus config show --expanded viseron


architecture: x86_64
config:
  environment.DEBIAN_FRONTEND: noninteractive
  environment.HOME: /root
  environment.LD_LIBRARY_PATH: :/usr/local/lib
  environment.OPENCV_OPENCL_CACHE_ENABLE: "false"
  environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/abc/bin
  environment.PG_COLOR: always
  environment.PGDATA: /config/postgresql
  environment.PYTHONPATH: :/usr/local/lib/python3.10/site-packages
  environment.S6_KEEP_ENV: "1"
  environment.S6_KILL_FINISH_MAXTIME: "30000"
  environment.S6_KILL_GRACETIME: "30000"
  environment.S6_SERVICES_GRACETIME: "30000"
  environment.TERM: xterm
  environment.VISERON_GIT_COMMIT: 965cde7522a237ebcf3053b0cef385b154bb85fe
  environment.VISERON_VERSION: 3.1.2
  image.architecture: x86_64
  image.description: docker.io/roflcoopter/viseron (OCI)
  image.id: roflcoopter/viseron:latest
  image.type: oci
  oci.cwd: /src
  oci.entrypoint: /init
  oci.gid: "0"
  oci.uid: "0"
  volatile.base_image: 849b9948d58e6e78bec304267503585390cab3a74b31fa8e5b1fdae2dd6055c4
  volatile.cloud-init.instance-id: ad9eb706-60af-4080-ac88-a5d09d1287b3
  volatile.container.oci: "true"
  volatile.eth0.host_name: vethbbd30dfa
  volatile.eth0.hwaddr: 10:66:6a:06:e8:53
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: d955426a-fa5d-4628-ae60-5373100cffa7
  volatile.uuid.generation: d955426a-fa5d-4628-ae60-5373100cffa7
devices:
  card0:
    path: /dev/dri/card0
    source: /dev/dri/card0
    type: unix-char
  config:
    path: /config
    source: /zdata/viseron/config
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  event_clips:
    path: /event_clips
    source: /zdata/viseron/event_clips
    type: disk
  intel_gpu:
    gputype: physical
    pci: "0000:00:02.0"
    type: gpu
  renderD128:
    path: /dev/dri/renderD128
    source: /dev/dri/renderD128
    type: unix-char
  root:
    path: /
    pool: local
    type: disk
  segments:
    path: /segments
    source: /zdata/viseron/segments
    type: disk
  shared:
    path: /storage
    source: /zdata/shared
    type: disk
  snapshots:
    path: /snapshots
    source: /zdata/viseron/snapshots
    type: disk
  thumbnails:
    path: /thumbnails
    source: /zdata/viseron/thumbnails
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

I also notice this Incus process using a lot of CPU:

3170553 root        20   0 7024M 50200 21180 S  37.9  0.1 44:51.57 /proc/592578/exe forknet dhcp /var/lib/incus/containers/viseron/network /var/log/incus/viseron/forknet-dhcp.log

and the log shows it failing:

root@mega:/var/log/incus/viseron# vi forknet-dhcp.log
time="2025-05-29T10:32:11+10:00" level=info msg="running dhcp" interface=eth0
time="2025-05-29T10:32:48+10:00" level=error msg="Giving up on DHCPv6, error during DHCPv6 Solicit" error="no matching response packet received"
time="2025-05-29T10:32:48+10:00" level=error msg="DHCP client failed" error="no matching response packet received"

This Incus server has 30 other homelab (non-app) containers on the bridge br0 that have had zero problems. I have run LXD and then Incus for years. It's awesome.

Upgraded to 6.13
Still seeing this use a whole core:
/proc/3378545/exe forknet dhcp /var/lib/incus/containers/viseron/network /var/log/incus/viseron/forknet-dhcp.log