Incus container bridge issues

Hello,

I am trying to use a bridge so my containers can be accessed from other computers.

I am using Ubuntu 24.04 on the host and containers. I created this netplan config on the host:

network:
    ethernets:
        eno1:
            dhcp4: no
    bridges:
        br0:
            dhcp4: yes
            interfaces: [eno1]
    version: 2
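
I applied it with the standard netplan workflow, roughly:

```shell
# Apply the config, with automatic rollback if connectivity is lost
sudo netplan try

# Check that br0 is up and received a DHCP lease
ip -br addr show br0

# Check that eno1 is enslaved to the bridge
ip link show master br0
```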

The bridge seems to work on the host; it gets an IP address from DHCP:


br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.224  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::583b:56ff:feba:914e  prefixlen 64  scopeid 0x20<link>
        ether 5a:3b:56:ba:91:4e  txqueuelen 1000  (Ethernet)
        RX packets 69950  bytes 5451247 (5.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19904  bytes 2267768 (2.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:23:24:b4:d2:f9  txqueuelen 1000  (Ethernet)
        RX packets 84713  bytes 11508797 (11.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20039  bytes 2393210 (2.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdf100000-df120000

I created this profile and attached it to the container.

config: {}
description: Bridged networking LXD profile
devices:
  eno1:
    name: eno1
    nictype: bridged
    parent: br0
    type: nic
name: bridgeprofile
used_by:
- /1.0/instances/git

My networks:

+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |      IPV4       |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| br0      | bridge   | NO      |                 |                           |             | 2       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| docker0  | bridge   | NO      |                 |                           |             | 0       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| eno1     | physical | NO      |                 |                           |             | 0       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| incusbr0 | bridge   | YES     | 10.220.171.1/24 | fd42:2bba:ec6d:fc07::1/64 |             | 2       | CREATED |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+

But when I start the container, it gets an IP address from the Incus bridge incusbr0.

+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| git  | RUNNING | 10.220.171.24 (eth0) | fd42:2bba:ec6d:fc07:216:3eff:fee2:7175 (eth0) | CONTAINER | 0         |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+

Am I doing something wrong here? I also tried using an eth0 device in the bridge profile.
That didn't work either: the container couldn't get an IP address and couldn't connect to the network. :cry:

Relevant bits from container config:


  volatile.eno1.host_name: veth9f4c86d5
  volatile.eno1.hwaddr: 00:16:3e:fd:76:18
  volatile.eth0.host_name: veth63c3713d
  volatile.eth0.hwaddr: 00:16:3e:e2:71:75
...
profiles:
- default
- bridgeprofile

Those virtual interfaces appear in the host's ifconfig output:

veth63c3713d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether c6:cf:98:35:e3:f0  txqueuelen 1000  (Ethernet)
        RX packets 40  bytes 3866 (3.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28  bytes 3707 (3.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9f4c86d5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether aa:b8:ec:f7:1d:91  txqueuelen 1000  (Ethernet)
        RX packets 15  bytes 1146 (1.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2366  bytes 163318 (163.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Can someone help me, please?

Welcome!

You are using the interface name eno1 for the container in the Incus profile, but the default profile uses the interface name eth0. This means that when you launch the container as follows, you get a container with two network interfaces, an eth0 and an eno1, and the eth0 gets its IP address from incusbr0.

incus launch images:ubuntu/24.04/cloud mycontainer --profile default --profile bridgeprofile

You would need to use eth0 in the bridgeprofile as well. By doing so, the eth0 from the default profile is replaced by the eth0 from your bridgeprofile.
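
One way to verify what an instance actually ends up with is to look at its expanded configuration, which shows the devices merged in from all attached profiles (using the git instance from the post above):

```shell
# Show the effective config of the instance, with all profile
# devices and options merged in; look at the "devices" section
incus config show git --expanded
```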


I created another profile with the correct device name:

incus profile show bridgeprofile2
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: bridgeprofile2
used_by: []
➜  ~ incus -p default -p bridgeprofile2 launch images:alpine/3.20 alpine
Launching alpine
➜  ~ incus -p default -p bridgeprofile2 launch images:ubuntu/noble ubuntu
Launching ubuntu
➜  ~ incus list
+--------+---------+------+------+-----------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+------+------+-----------+-----------+
| alpine | RUNNING |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+
| ubuntu | RUNNING |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+

They can't get an IP address and they can't access the network.

root@ubuntu:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:00:d3:70 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe00:d370/64 scope link
       valid_lft forever preferred_lft forever

Do you have an active firewall on the host?


ufw is not active

sudo ufw status verbose
Status: inactive

I installed Incus on my second server (HP) and the Incus bridge works just fine there. It uses the same host OS, the same netplan config file, and the same Incus profile. It is baffling, isn't it? :man_shrugging:

network:
    ethernets:
        eno1:
            dhcp4: no
            dhcp6: no
    version: 2
    bridges:
        br0:
            dhcp4: yes
            interfaces: [eno1]

I previously had LXD installed on my Lenovo server, and the Docker engine is running there as well.
Maybe the bridge doesn't work because of Docker?

Server 1 ( Lenovo ):

+--------+---------+------+------+-----------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+------+------+-----------+-----------+
| alpine | RUNNING |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+
| ubuntu | RUNNING |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+

Server 2 ( HP ):

+--------+---------+----------------------+------+-----------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+-----------+-----------+
| alpine | RUNNING | 192.168.1.157 (eth0) |      | CONTAINER | 0         |
+--------+---------+----------------------+------+-----------+-----------+
| ubuntu | RUNNING | 192.168.1.182 (eth0) |      | CONTAINER | 0         |
+--------+---------+----------------------+------+-----------+-----------+

I may have to reinstall the host OS on the Lenovo if nothing else helps. :cry:

Yes, the issue is that Docker sets the default policy of the FORWARD chain in the filter table to DROP.

You can confirm this by checking the output of iptables-save:

*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0] # <-- Root cause of the issue
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
...

The workaround is to add two rules to the DOCKER-USER chain that explicitly ACCEPT packets to and from your bridge[1][2], for example:

sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

To make the rules persistent, install the iptables-persistent package and add the following rules to /etc/iptables/rules.v4 and /etc/iptables/rules.v6:

*filter
:DOCKER-USER - [0:0]
-A DOCKER-USER -i incusbr0 -j ACCEPT
-A DOCKER-USER -o incusbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -j RETURN
COMMIT

NOTE: in all of the examples above, you will need to replace incusbr0 with the name of the bridge you are using (in your case, br0).
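
So for the setup in this thread, the one-off commands would look like:

```shell
# Allow traffic forwarded to/from the br0 bridge despite Docker's
# FORWARD DROP policy
sudo iptables -I DOCKER-USER -i br0 -j ACCEPT
sudo iptables -I DOCKER-USER -o br0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```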

[1] https://docs.docker.com/engine/network/packet-filtering-firewalls/#docker-and-iptables-chains
[2] https://linuxcontainers.org/incus/docs/main/howto/network_bridge_firewalld/#prevent-connectivity-issues-with-incus-and-docker


Thanks, it didn't occur to me to check iptables. :man_facepalming: