Incus containers in bridged mode don't get DHCP leases from the local network directly

Hello community,

I plan to switch from LXC to Incus, but right now I have big network issues in my fresh Incus installations.
Both are fresh Debian 13 systems (one headless server + one laptop as a test system).
My main issue shows up when I try to use new containers with a bridged network.
My general goal is to use a bridge so that the containers get a DHCP lease directly from the local network.

I've installed Incus with the following commands:

apt install incus incus-client bridge-utils
adduser $user incus-admin
reboot now
incus admin init

ufw rules

sudo ufw allow in on incusbr0
sudo ufw route allow in on incusbr0
sudo ufw route allow out on incusbr0
# set incus firewall to false
incus network set incusbr-001 ipv6.firewall false
incus network set incusbr-001 ipv4.firewall false
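
To double-check that these settings stuck, the keys can be read back and the ufw rules listed (just a verification step, assuming the network name incusbr-001 from above):

incus network get incusbr-001 ipv4.firewall
incus network get incusbr-001 ipv6.firewall
sudo ufw status verbose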

config host

ip a:

lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff
altname enp3s0
altname enx0cc47a869ff6
inet 192.168.28.10/24 brd 192.168.28.255 scope global dynamic noprefixroute eno1
valid_lft 5483sec preferred_lft 4583sec
inet6 alias_for_addr scope link
valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff
altname enp4s0
altname enx0cc47a869ff7
4: incusbr-001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff
inet 10.140.64.1/24 scope global incusbr-001
valid_lft forever preferred_lft forever
inet6 alias_for_addr/64 scope global
valid_lft forever preferred_lft forever
inet6 alias_for_addr/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
20: vethb52e2c7c@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master incusbr-001 state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff link-netnsid 0
23: tap091611b5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master incusbr-001 state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff
25: veth9d773da3@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master incusbr-001 state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff link-netnsid 2

incus network show incusbr-001

config:
  ipv4.address: 10.140.64.1/24
  ipv4.firewall: "false"
  ipv4.nat: "true"
  ipv6.address: alias_for_addr::1/64
  ipv6.firewall: "false"
  ipv6.nat: "true"
description: ""
name: incusbr-001
type: bridge
used_by:
- /1.0/instances/001-smb
- /1.0/instances/002-test
- /1.0/instances/003-test
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default

incus profile show default

config: {}
description: Default Incus profile
devices:
  eno1:
    name: eno1
    network: incusbr-001
    type: nic
  root:
    path: /
    pool: 001_sys-rel
    type: disk
name: default
used_by:
- /1.0/instances/001-smb
- /1.0/instances/002-test
- /1.0/instances/003-test

create two test containers

incus launch images:debian/13 001-smb # debian
incus launch images:ubuntu/noble 002-test # ubuntu

incus console 002-test --type=console
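
From the host side, the addresses Incus assigns (or fails to assign) to each instance can be checked at any time (a verification step, not part of the original setup):

incus list
incus config show 002-test --expanded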

On the container side

1st try - systemd-networkd:

ip a gives:

lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
24: eno1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether alias_for_macaddr brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 alias_for_addr:6aff:fee5:3c2f/64 scope global dynamic mngtmpaddr
valid_lft forever preferred_lft forever
inet6 falias_for_addr/64 scope link
valid_lft forever preferred_lft forever

In both containers a veth interface is created like the one above (e.g. eno1@if22),
but ip link commands only accept the plain interface name eno1.

By default only systemd-networkd is installed; the only shipped config is /etc/systemd/network/eth0.network:

[Match]
Name=eth0

[Network]
DHCP=true

[DHCPv4]
UseDomains=true
UseMTU=true

[DHCP]
ClientIdentifier=mac

I tried to create

    /etc/systemd/network/eno1.network

or

    /etc/systemd/network/eno1@if22.network

with the same content (I also tried it with just Name=eno1):

[Match]
Name=eno1@if25

[Network]
DHCP=true

[DHCPv4]
UseDomains=true
UseMTU=true

[DHCP]
ClientIdentifier=mac

restart

systemctl restart systemd-networkd
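
Whether systemd-networkd considers the interface managed at all can be checked with networkctl inside the container (a diagnostic sketch, assuming the interface is named eno1 as above):

networkctl list
networkctl status eno1
journalctl -u systemd-networkd -b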

config dns

nano /etc/resolv.conf
nano /etc/systemd/resolved.conf

set DNS IPs from the local network + the bridged Incus network

restart

systemctl restart systemd-resolved

but I don't get a DHCP lease / IP.
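
To narrow down whether any DHCP requests leave the container at all, the DHCP traffic can be watched on the host bridge (a diagnostic sketch, assuming tcpdump is installed on the host):

sudo tcpdump -ni incusbr-001 port 67 or port 68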

2nd try - setup manually with ip link:

with the local network:

ip link set eno1 down
ip addr add 192.168.28.12/24 dev eno1
ip link set eno1 up

with the bridged network:

ip link set eno1 down
ip addr add 10.140.64.12/24 dev eno1
ip link set eno1 up
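
Note that assigning a static address this way does not involve DHCP at all; as an additional test, a DHCP client could be run by hand to see its output (a sketch, assuming dhclient or busybox udhcpc is available in the container):

dhclient -v eno1
# or with busybox:
udhcpc -i eno1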

But none of my attempts resulted in an IP address.
However, a freshly installed VM from Incus gets an IP address right away.

Can anyone explain where my mistakes are and how I can fix this issue?

3rd try - manual package install with NetworkManager + ifupdown

systemctl disable systemd-networkd
systemctl disable systemd-networkd.socket
systemctl stop systemd-networkd
systemctl stop systemd-networkd.socket
dpkg -i networkmanager.deb # + dependencies
dpkg -i networkmanager.deb # after some tries I figured out that NetworkManager needs ifupdown to work => see the NetworkManager config

make sure everything is enabled and started

systemctl enable NetworkManager
systemctl enable ifupdown
systemctl start NetworkManager
systemctl start ifupdown
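
Whether NetworkManager actually took over the interface can be verified with nmcli (a verification step, assuming eno1 is the interface name inside the container):

nmcli device status
nmcli device show eno1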

config

nano /etc/network/interfaces

auto eno1
allow-hotplug eno1
iface eno1 inet dhcp

setup

ifup eno1

tada => connection is established

I am really confused…

But the containers only get their IPs from the virtual incusbr-001 bridged network.
Is it only possible to get an address from the local network if I use the “physical” mode?
Generally it's not a problem for me, but it would be good to know whether I can use one Ethernet port for both: the bridge with a local address plus the connection (like SSH) to the server [DHCP leases for the containers and one lease for the server on the same physical port].
Is there a way to do this?

Is there a good book or PDF available for understanding the possible configurations with Incus?
I can only find general virtualization books that cover a bunch of virtualization technologies like libvirt, QEMU and so on, but none of them goes really deep into Incus configuration.

Thank you for your help.

Edit 1-2: typo corrections

You went wrong there: you should create the bridge on the host yourself, without Incus involved. Read this post:

More specifically:

  • if you create an incus managed bridge, then it will have its own subnet with routing and NAT, and its own DHCP service managed by incus
  • if you use an existing bridge (“unmanaged”) then the instances will just connect directly to that bridge, and any router and DHCP will be whatever is upstream on that bridge

I think the second option is what you want. In that case, it should look like this:

brian@lxd1:~$ incus network list
+-------+----------+---------+------+------+-------------+---------+-------+
| NAME  |   TYPE   | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+-------+----------+---------+------+------+-------------+---------+-------+
| br0   | bridge   | NO      |      |      |             | 14      |       |
+-------+----------+---------+------+------+-------------+---------+-------+

brian@lxd1:~$ incus profile show default
description: Default Incus profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: zfs
    type: disk
name: default
used_by:
... etc

Or you can create a different named profile, say “br0”, and assign this to instances you want to be attached directly to the bridge.
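
For example, such a profile could be created roughly like this (a sketch, not from the original reply; the profile name br0, pool name zfs and instance name 004-test are just placeholders):

incus profile create br0
incus profile device add br0 eth0 nic nictype=bridged parent=br0 name=eth0
incus profile device add br0 root disk pool=zfs path=/
incus launch images:debian/13 004-test --profile br0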

Either way, br0 is a bridge you already created, e.g. using netplan or ifupdown configs.
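
On Debian with ifupdown, such a host-side bridge could look roughly like this in /etc/network/interfaces (a sketch, assuming eno1 is the uplink and the host itself should get its address via DHCP on the bridge; bridge-utils was already installed above):

auto eno1
iface eno1 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

The host's own IP then lives on br0, so the same physical port can serve both SSH to the host and DHCP leases for the containers.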