No IPv4 for container on Fedora even when firewall is down

Hello,

I can’t get any container on Fedora 43 to reach the network (incus installed from dnf). I’ve put the bridge in the ‘trusted’ zone and enabled masquerading for that zone:

sudo firewall-cmd --zone=trusted --change-interface=incusbr0 --permanent
sudo firewall-cmd --reload
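For completeness, this is roughly how I checked that the zone change took effect (standard firewall-cmd queries, so hopefully I’m reading the output right):

```shell
# Confirm incusbr0 really landed in the trusted zone
sudo firewall-cmd --get-zone-of-interface=incusbr0

# Show everything the trusted zone allows; "masquerade: yes"
# should appear here if masquerading is actually enabled
sudo firewall-cmd --zone=trusted --list-all

# Compare with the permanent config in case runtime and
# permanent settings have drifted apart
sudo firewall-cmd --permanent --zone=trusted --list-all
```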

I tried issuing a DHCP request from the container directly. The request gets sent to the host but no reply comes back. dnsmasq is up and listening on the host (I can see the packets arriving with tcpdump). In case it was somehow still the firewall, I stopped firewalld on the host and restarted the container; still no success.
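In case it helps, this is roughly how I ran that test (interface and container names match my setup; adjust as needed):

```shell
# On the host: watch DHCP traffic crossing the Incus bridge
# (DHCP uses UDP ports 67 and 68)
sudo tcpdump -ni incusbr0 port 67 or port 68

# In another terminal: force a DHCP request from inside the
# container; dhclient may not exist in every image, so fall back
# to udhcpc or "networkctl renew eth0" if it is missing
incus exec test-dhcp -- dhclient -v eth0
```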

(I should say that I don’t understand networking very well and a lot of this followed from AI guidance, which may well be all wrong!)

How could I go about diagnosing this problem? Can you see anything wrong in the setup below?

I really love incus when it does work, and I would appreciate any help putting it back in order. (I had stopped using it for a while and this problem appeared out of nowhere.)

Here is some information about my setup.

> incus version
Client version: 6.19.1
Server version: 6.19.1
> incus profile show default 
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/test-dhcp
project: default

> incus config show test-dhcp 
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20260225_19:21)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20260225_19:21"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 3d6afd29d3e67d81ca1851427c612728d026ed669705cbcd4eebb4a1e5944763
  volatile.cloud-init.instance-id: 5ef12620-54a2-468b-a20b-2d1bea053793
  volatile.eth0.host_name: vethdca01321
  volatile.eth0.hwaddr: 10:66:6a:5a:f5:fe
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: e041a21a-c44a-4c75-ac21-c6af2def624f
  volatile.uuid.generation: e041a21a-c44a-4c75-ac21-c6af2def624f
devices: {}
> incus info test-dhcp
Name: test-dhcp
Description: 
Status: RUNNING
Type: container
Architecture: x86_64
PID: 15250
Created: 2026/02/26 18:51 GMT
Last Used: 2026/02/26 18:55 GMT
Started: 2026/02/26 18:55 GMT

Resources:
  Processes: 13
  CPU usage:
    CPU usage (in seconds): 2
  Memory usage:
    Memory (current): 54.55MiB
    Swap (current): 4.00KiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vethdca01321
      MAC address: 10:66:6a:5a:f5:fe
      MTU: 1500
      Bytes received: 0B
      Bytes sent: 8.33kB
      Packets received: 0
      Packets sent: 37
      IP addresses:
        inet6: fe80::1266:6aff:fe5a:f5fe/64 (link)


Do you have anything else that may be interfering with the firewall? Docker is the other most common source of conflicts.

Your comment made me realize that I had podman installed. I removed podman using dnf and rebooted, but the same issue persists. I wonder, though, whether removing it is sufficient to get rid of the shenanigans that either podman or docker introduce.

It should be sufficient and I’m not aware of podman causing the same kind of mess as Docker.

You’ll probably want to post your iptables -L -n -v and/or nft list ruleset output so we can see what may be getting in the way.
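That is, something along these lines (nft shows the native nftables rules, which is what firewalld and Incus both program on current Fedora; iptables covers the legacy layer in case some other tool still injects rules there):

```shell
# Full nftables ruleset as seen by the kernel
sudo nft list ruleset

# Legacy iptables view, in case anything still uses it
sudo iptables -L -n -v

# A quick way to spot leftover tables from container tooling
# (podman/netavark, docker, etc.)
sudo nft list tables
```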