Incus Container no longer routing when using br0

Greetings:

Sorry for the ambiguous subject, but I am not sure how better to describe it. Basically, I run the following setup on my host (Debian 12):

/etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface enp42s0 inet manual

# Bridge
auto br0
iface br0 inet static
    bridge_ports enp42s0.100 dhcp
    address 10.100.0.198/26
    broadcast 10.100.0.255
    gateway 10.100.0.193
    dns-nameservers 10.0.0.2
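
For completeness, bridge membership can be confirmed with iproute2 (commands only; enp42s0.100 should list br0 as its master):

# the VLAN subinterface should exist and show br0 as its master
ip -d link show enp42s0.100
# br0 should list enp42s0.100 as one of its ports
bridge link show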

My container is attached to br0 as follows:
incus config device add graynode-1 eth0 nic nictype=bridged parent=br0
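
The attached device can be double-checked afterwards, e.g.:

# show the eth0 device entry that was just added
incus config device show graynode-1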

Previously, this setup worked. The DNS server can “see” the container: it knows its MAC and provides an IP. However, while the container does get an IP, it can no longer reach the outside world/LAN, and the outside world/LAN cannot reach the container.

Any ideas?

Oh, here is the container info:

$ incus info graynode-1
Name: graynode-1
Description: 
Status: RUNNING
Type: container
Architecture: x86_64
PID: 3251820
Created: 2024/12/01 17:45 EST
Last Used: 2025/01/11 20:12 EST
Started: 2025/01/11 20:12 EST

Resources:
  Processes: 66
  Disk usage:
    root: 185.59GiB
  CPU usage:
    CPU usage (in seconds): 16
  Memory usage:
    Memory (current): 137.02MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vethe2c85841
      MAC address: 00:16:3e:4d:b4:de
      MTU: 1500
      Bytes received: 1.18kB
      Bytes sent: 12.54kB
      Packets received: 4
      Packets sent: 64
      IP addresses:
        inet:  10.100.0.232/26 (global)
        inet6: fe80::f406:46bc:58b0:7cd5/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 233.04kB
      Bytes sent: 233.04kB
      Packets received: 2384
      Packets sent: 2384
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

I’d check firewalling. Depending on system configuration, firewalling can apply to bridges.
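
For example, if the br_netfilter module is loaded (Docker, libvirt and similar tools pull it in), bridged frames get pushed through the iptables/nftables forward hooks. A quick check, assuming those sysctls exist (they only appear once the module is loaded):

# is br_netfilter loaded at all?
lsmod | grep br_netfilter
# if set to 1, bridged traffic is filtered by iptables/nftables
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables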

Do you mean UFW? It’s disabled. If you mean my firewall device, no rules/policies would prevent access (especially to intra-VLAN IPs).

It really feels like it’s a routing issue: lxdbr0, when used instead, routes fine.

Maybe check iptables -L -n -v and nft list ruleset just to be sure.
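
Also, since it feels routing-related, it might be worth looking at things from inside the container as well. Something along these lines (gateway address taken from your bridge config above):

# address and default route as the container sees them
incus exec graynode-1 -- ip addr show eth0
incus exec graynode-1 -- ip route
# can it at least reach the VLAN gateway?
incus exec graynode-1 -- ping -c 3 10.100.0.193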

But otherwise, something that may be worth trying is starting two instances and seeing if they can communicate with each other. If they can, and only the outside is unreachable, it could mean that your physical network is enforcing something like a single-MAC policy on the physical switch port, or something along those lines.
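
A rough sketch of that test (the image name is just an example, anything will do):

# throwaway second instance attached to the same bridge
incus init images:debian/12 brtest
incus config device add brtest eth0 nic nictype=bridged parent=br0
incus start brtest
# once it has an address (incus list brtest), ping it from the existing container
incus exec graynode-1 -- ping -c 3 <brtest address>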

Output from the above commands:

root@ryzen7-3700x:~# nft list ruleset
table inet incus {
	chain pstrt.incusbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.166.123.0/24 ip daddr != 10.166.123.0/24 masquerade
		ip6 saddr fd42:327b:ab78:6ec0::/64 ip6 daddr != fd42:327b:ab78:6ec0::/64 masquerade
	}

	chain fwd.incusbr0 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "incusbr0" accept
		ip version 4 iifname "incusbr0" accept
		ip6 version 6 oifname "incusbr0" accept
		ip6 version 6 iifname "incusbr0" accept
	}

	chain in.incusbr0 {
		type filter hook input priority filter; policy accept;
		iifname "incusbr0" tcp dport 53 accept
		iifname "incusbr0" udp dport 53 accept
		iifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "incusbr0" udp dport 67 accept
		iifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "incusbr0" udp dport 547 accept
	}

	chain out.incusbr0 {
		type filter hook output priority filter; policy accept;
		oifname "incusbr0" tcp sport 53 accept
		oifname "incusbr0" udp sport 53 accept
		oifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		oifname "incusbr0" udp sport 67 accept
		oifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		oifname "incusbr0" udp sport 547 accept
	}
}
root@ryzen7-3700x:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
root@ryzen7-3700x:~#

The only thing that changed was that the host system was upgraded to Debian 12.9 (which is a big “only”; sorry I forgot to include it earlier).

There are no reject/drop rules in there, so it’s unlikely to be the firewall.

It may be useful to tcpdump both br0 and the outside physical interface to see if the system is sending packets out at all.
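
Something like this, for example (the MAC is the container’s eth0 from your incus info output; adjust the outer interface to whatever carries VLAN 100):

# on the bridge: do the container's frames show up here at all?
tcpdump -eni br0 ether host 00:16:3e:4d:b4:de
# on the VLAN/physical side: do they make it out of the host?
tcpdump -eni enp42s0.100 ether host 00:16:3e:4d:b4:de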

So the container in question is an OpenSearch node, and I have two other data nodes up and running. As such, I took the easy way out and deleted the problematic node/container, then created a new node and container with the same config.

Everything is working as intended and my shards are balancing :man_shrugging:

Ghost in the machine, perhaps? Regardless, thank you for your help!