I’m new to Incus, and I’m trying to narrow the incredibly wide range of available network types down to those that would actually work for my use case, and from among those pick the simplest (best) choice. There’s a lot to go through: tons of reference documentation, but not a lot of outcome-based documentation… so I’m hoping someone can point me in the right direction. I have found several examples of similar setups, but nothing yet that covers my exact use case, and so far my attempts to combine examples haven’t panned out.
My requirements and assumptions:
- I need to put a few containers on publicly routed addresses (e.g. mail server, application proxies for other containers, etc.). If I’m to avoid routing entirely new address blocks, that implies to me that I need to put these containers on the same L2 as the host.
- We do static v4 and v6 configuration for our bare-metal hosts and VMs. I would like to either continue that for our containers (using the instance config), or give Incus a /28 or /29 out of the existing routed /24 (as well as a subset of our v6 block) and let it take care of address assignment and DNS updates using Incus network zones (a rough sketch of what I mean follows this list). That implies to me that I need a “Managed Network” type.
- I’m trying to keep everything as simple as possible, and not engineer for every possible future, so I have avoided looking at things like OVN. If I need something like that I’m not opposed, it just seems like overkill at the moment.
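For the DNS side of the second point, my understanding is that a managed network gets a forward zone attached along these lines (this uses the incusbr1 network shown further down; the zone name is just a placeholder, and I haven’t wired this part up yet):
% incus network zone create example.internal
% incus network set incusbr1 dns.zone.forward=example.internal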
This has led me to try the Bridge network type, but I’m not getting any packets moving even between the container and the host, let alone between the container and the rest of the network. It’s entirely possible this does not work the way I think it does, but I’m including config details below in case the problem is with my config rather than my base assumptions. I note there doesn’t seem to be any way in the Bridge network options to indicate what the parent interface should be, so it seems likely this is either not the right choice or I’m missing another piece. However, the How to configure your firewall link in the Bridge network documentation seems to imply a certain amount of “just works,” provided the right config is in place.
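If the missing piece is that the parent interface is specified on the NIC device rather than on the network, I assume it would be something like the line below (untested on my side). My understanding, though, is that this attaches the container directly to the unmanaged br0, so I’d lose the managed DHCP/DNS behaviour I’m after:
% incus config device add testbr1 eth0 nic nictype=bridged parent=br0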
I have lots of v6 address space available I can be flexible with, but limited v4. It’s best if I can just reserve a smaller block from our currently routed /24, but if I absolutely have to I can route a new /24 just for the container infrastructure. That would open up some additional flexibility such as putting containers on their own VLAN, but increases the total complexity and potentially “wastes” a ton of addresses, since I only need a small number with direct access to the wider Internet… so I haven’t looked much at the configs that would enable that type of setup.
Does this seem like I’m going down the right path? If so, configs below. If not, I’m interested in advice and the rest of this post can be ignored.
Thanks in advance for any assistance!
This is all on Debian 12, using images:debian/12 containers from the default repository and Incus 6.0.1 from Debian bookworm-backports/main.
For historical reasons, the host has bridge interfaces configured. This was for libvirt and kvm, which won’t be required after the move to Incus, so we can unconfigure it if necessary. From /etc/network/interfaces:
auto br0
iface br0 inet static
    bridge_ports eno1
    bridge_stp on
    bridge_waitport 30
    bridge_fd 15
    bridge_maxwait 60
    address 192.0.2.65/24
    gateway 192.0.2.1

iface br0 inet6 static
    address 2001:db8::65/64
    gateway 2001:db8::1
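If the current membership of br0 matters, I can confirm it from the host with either of:
% bridge link show
% ip -br link show master br0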
And the Incus network. Ignore the odd interface numbering… this was created after much testing and experimentation. It’ll be cleaned up for production.
% incus network show incusbr1
config:
  ipv4.address: 192.0.2.161/24
  ipv4.dhcp.gateway: 192.0.2.1
  ipv4.dhcp.ranges: 192.0.2.170-192.0.2.190
  ipv4.firewall: "true"
  ipv6.address: 2001:db8::161/64
  ipv6.dhcp.ranges: 2001:db8::170-2001:db8::190
  ipv6.dhcp.stateful: "true"
  ipv6.firewall: "true"
description: ""
name: incusbr1
type: bridge
used_by:
- /1.0/instances/testbr1
- /1.0/profiles/managedbridge
managed: true
status: Created
locations:
- none
project: default
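For reference, the equivalent one-shot creation command for that network would be roughly:
% incus network create incusbr1 \
    ipv4.address=192.0.2.161/24 \
    ipv4.dhcp.gateway=192.0.2.1 \
    ipv4.dhcp.ranges=192.0.2.170-192.0.2.190 \
    ipv4.firewall=true \
    ipv6.address=2001:db8::161/64 \
    ipv6.dhcp.ranges=2001:db8::170-2001:db8::190 \
    ipv6.dhcp.stateful=true \
    ipv6.firewall=true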
And a simple profile from which each instance should inherit its network config.
% incus profile show managedbridge
config: {}
description: ""
devices:
  eth0:
    network: incusbr1
    type: nic
name: managedbridge
used_by:
- /1.0/instances/testbr1
project: default
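The test instance was launched with both the default profile and this one, i.e. something like:
% incus launch images:debian/12 testbr1 --profile default --profile managedbridge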
% incus config show testbr1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20240625_21:51)
  image.os: Debian
  image.release: bookworm
  image.serial: "20240625_21:51"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 829fdd871b9800499b796be23199e7def47108cc9ab04a401c9465c5a5ac08d8
  volatile.cloud-init.instance-id: 16ac5316-f21f-45c6-b554-c6b93e165108
  volatile.eth0.host_name: vethdb9c131a
  volatile.eth0.hwaddr: 00:16:3e:87:f0:c3
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: e887cc09-7665-4483-8cd4-718855b72b92
  volatile.uuid.generation: e887cc09-7665-4483-8cd4-718855b72b92
devices: {}
ephemeral: false
profiles:
- default
- managedbridge
stateful: false
description: ""
% incus list testbr1
+---------+---------+--------------------+----------------------+-----------+-----------+
|  NAME   |  STATE  |        IPV4        |         IPV6         |   TYPE    | SNAPSHOTS |
+---------+---------+--------------------+----------------------+-----------+-----------+
| testbr1 | RUNNING | 192.0.2.171 (eth0) | 2001:db8::18c (eth0) | CONTAINER | 0         |
+---------+---------+--------------------+----------------------+-----------+-----------+
As you can see above, I have explicitly set ipv4.firewall and ipv6.firewall just to make sure Incus is putting its rules in place.
% sudo nft list tables
table inet incus
% sudo nft list table inet incus
table inet incus {
    chain fwd.incusbr1 {
        type filter hook forward priority filter; policy accept;
        ip version 4 oifname "incusbr1" accept
        ip version 4 iifname "incusbr1" accept
        ip6 version 6 oifname "incusbr1" accept
        ip6 version 6 iifname "incusbr1" accept
    }
    chain in.incusbr1 {
        type filter hook input priority filter; policy accept;
        iifname "incusbr1" tcp dport 53 accept
        iifname "incusbr1" udp dport 53 accept
        iifname "incusbr1" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
        iifname "incusbr1" udp dport 67 accept
        iifname "incusbr1" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
        iifname "incusbr1" udp dport 547 accept
    }
    chain out.incusbr1 {
        type filter hook output priority filter; policy accept;
        oifname "incusbr1" tcp sport 53 accept
        oifname "incusbr1" udp sport 53 accept
        oifname "incusbr1" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
        oifname "incusbr1" udp sport 67 accept
        oifname "incusbr1" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
        oifname "incusbr1" udp sport 547 accept
    }
}
% sysctl net.ipv4.conf.all.forwarding net.ipv6.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
The container is not pingable from the host on v4 or v6.
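If a packet capture or more state from inside the container would help narrow this down, I’m happy to run something along these lines and post the output:
% sudo tcpdump -ni incusbr1 icmp or icmp6 or arp
% incus exec testbr1 -- ip -br addr show eth0
% incus exec testbr1 -- ip route
% incus exec testbr1 -- ip -6 route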