A recent system update may have broken something, as this was working fine a few days ago. From inside a container I can reach the host, but nothing beyond it. Running Rocky 9.6, Incus 6.8.
I created a fresh container to confirm the behavior:
incus launch images:rockylinux/9 nettest
incus exec nettest -- /bin/bash
[root@nettest ~]# ping 10.31.120.1
PING 10.31.120.1 (10.31.120.1) 56(84) bytes of data.
64 bytes from 10.31.120.1: icmp_seq=1 ttl=64 time=0.072 ms
[root@nettest ~]# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6134ms
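Since pings to the bridge gateway (10.31.120.1) succeed but external pings fail, the break is likely in forwarding or NAT on the host rather than in the container. A few host-side checks I'd run first (a sketch; eth0 below is a placeholder for the host's actual uplink interface):

```shell
# Is IPv4 forwarding enabled on the host?
sysctl net.ipv4.ip_forward        # expect: net.ipv4.ip_forward = 1

# What is the filter-table FORWARD chain policy? Docker sets it to DROP,
# which silently blocks forwarded traffic from other bridges.
sudo iptables -L FORWARD -n -v | head -n 3

# Do the container's packets leave the uplink with the host's source
# address (NAT working) or the 10.31.120.x address (NAT broken)?
sudo tcpdump -ni eth0 icmp and host 1.1.1.1
```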
Profile config (output of incus profile show default):
config: {}
description: Default Incus profile
devices:
eth0:
name: eth0
network: incusbr0
type: nic
root:
path: /
pool: default
type: disk
name: default
used_by:
- /1.0/instances/qbit-web
- /1.0/instances/nettest
project: default
Network config (output of incus network show incusbr0):
config:
ipv4.address: 10.31.120.1/24
ipv4.nat: "true"
ipv6.address: fd42:eef4:9961:ab01::1/64
ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/instances/nettest
- /1.0/instances/qbit-web
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default
And ping on the host works:
[rackpc:06:04:09:~]> ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=15.4 ms
I would really appreciate help debugging this! Let me know what other info I can provide.
It looks like NAT for incusbr0 might be broken? I only see Docker's MASQUERADE rules in the nat table, nothing for 10.31.120.0/24:
[rackpc:06:27:11:~]> sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 169K packets, 10M bytes)
pkts bytes target prot opt in out source destination
167K 10M DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 522 packets, 42620 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 542 packets, 43808 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
4 240 MASQUERADE all -- * !br-c5ffa6f5afed 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- br-c5ffa6f5afed * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5006 to:172.17.0.2:5006
0 0 DNAT tcp -- !br-c5ffa6f5afed * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8555 to:172.18.0.2:8555
0 0 DNAT udp -- !br-c5ffa6f5afed * 0.0.0.0/0 0.0.0.0/0 udp dpt:8555 to:172.18.0.2:8555
23 1380 DNAT tcp -- !br-c5ffa6f5afed * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8971 to:172.18.0.2:8971
[rackpc:06:27:14:~]> sudo nft list table nat
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
chain DOCKER {
iifname "docker0" counter packets 0 bytes 0 return
iifname "br-c5ffa6f5afed" counter packets 0 bytes 0 return
iifname != "docker0" tcp dport 5006 counter packets 0 bytes 0 dnat to 172.17.0.2:5006
iifname != "br-c5ffa6f5afed" tcp dport 8555 counter packets 0 bytes 0 dnat to 172.18.0.2:8555
iifname != "br-c5ffa6f5afed" udp dport 8555 counter packets 0 bytes 0 dnat to 172.18.0.2:8555
iifname != "br-c5ffa6f5afed" tcp dport 8971 counter packets 23 bytes 1380 dnat to 172.18.0.2:8971
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
fib daddr type local counter packets 166910 bytes 10021983 jump DOCKER
}
chain OUTPUT {
type nat hook output priority dstnat; policy accept;
ip daddr != 127.0.0.0/8 fib daddr type local counter packets 0 bytes 0 jump DOCKER
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 172.17.0.0/16 oifname != "docker0" counter packets 0 bytes 0 masquerade
ip saddr 172.18.0.0/16 oifname != "br-c5ffa6f5afed" counter packets 4 bytes 240 masquerade
}
}
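One caveat about the dumps above: they only show the ip nat table that iptables-nft manages, which is where Docker puts its rules. Incus programs its firewall rules into its own separate nftables table, so they won't appear in that output even when NAT is working. To see everything, list the full ruleset (the table name incus is my understanding of the default; grepping the whole ruleset avoids guessing it):

```shell
# Dump every nftables table, then narrow to the Incus-managed one.
# If nothing matches, Incus may be using the xtables/iptables backend instead.
sudo nft list ruleset | grep -A 20 'incus'
```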
It ended up being Docker: it sets the filter-table FORWARD chain policy to DROP, which blocked forwarded traffic from incusbr0. I fixed it by following this: Packet filtering and firewalls | Docker Docs
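For anyone hitting the same thing: the linked Docker doc suggests using the DOCKER-USER chain, which Docker evaluates before its own rules and never flushes, rather than changing the FORWARD policy itself. A minimal sketch of that approach, assuming the Incus bridge is named incusbr0:

```shell
# Exempt traffic to/from the Incus bridge from Docker's FORWARD DROP policy.
# DOCKER-USER is the chain Docker reserves for user rules and leaves intact
# across daemon restarts.
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -j ACCEPT
```

Note these rules are not persistent across reboots on their own; they'd need to be saved with your distribution's iptables persistence mechanism.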
candlerb (Brian Candler), August 10, 2025, 9:10am
njvander12:
Ended up being docker
Yes, I recommend you don't run Incus and Docker on the same host; if you do, you'll have to sort out the networking breakage yourself.