Selecting a network configuration approach

I’m new to Incus, and I’m trying to narrow the incredibly wide range of available network types down to those that would actually work for my use case, and then pick the simplest (best) choice from among those. There’s a lot to go through: tons of reference documentation, but not much outcome-based documentation. So I’m hoping someone can point me in the right direction. I’ve found several examples of similar setups, but nothing yet that covers my exact use case, and so far my attempts to combine examples haven’t panned out.

My requirements and assumptions:

  1. I need to put a few containers on publicly routed addresses (e.g. mail server, application proxies for other containers, etc.). If I’m to avoid routing entirely new address blocks, that implies to me that I need to put these containers on the same L2 as the host.
  2. We do static v4 and v6 configuration on the host for our bare metal and VMs. I would like to either continue that for our containers (using the instance config), or give Incus a /28 or /29 out of the existing routed /24 (as well as a subset of our v6 block) and let it take care of address assignment and DNS updates using Incus Zones. That implies to me that I need a “Managed Network” type.
  3. I’m trying to keep everything as simple as possible, and not engineer for every possible future, so I have avoided looking at things like OVN. If I need something like that I’m not opposed, it just seems like overkill at the moment.
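For concreteness, here is roughly what I have in mind for requirement 2 as commands. This is only a placeholder sketch (the bridge name, addresses, and zone name are all hypothetical), not something I have working:

```shell
# Hypothetical sketch: give Incus a /28 carved out of the routed /24
# (placeholder addresses), disable NAT, and let an Incus zone handle DNS.
incus network create incusbr1 \
    ipv4.address=192.0.2.17/28 ipv4.nat=false \
    ipv6.address=2001:db8::161/64 ipv6.nat=false
incus network zone create example.internal
incus network set incusbr1 dns.zone.forward=example.internal
```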

This has led me to try the Bridge network type, but I’m not getting any packets moving even between the container and the host, let alone between the container and the rest of the network. It’s entirely possible this doesn’t work the way I think it does, but I’m including config details below in case the problem is my config rather than my base assumptions. I note that the Bridge network options don’t seem to offer any way to indicate a parent interface, so either this isn’t the right choice or I’m missing another piece. However, the “How to configure your firewall” link in the Bridge network documentation seems to imply a certain amount of “just works,” provided the right config is in place.

I have lots of v6 address space available I can be flexible with, but limited v4. It’s best if I can just reserve a smaller block from our currently routed /24, but if I absolutely have to I can route a new /24 just for the container infrastructure. That would open up some additional flexibility such as putting containers on their own VLAN, but increases the total complexity and potentially “wastes” a ton of addresses, since I only need a small number with direct access to the wider Internet… so I haven’t looked much at the configs that would enable that type of setup.

Does this seem like I’m going down the right path? If so, configs below. If not, I’m interested in advice and the rest of this post can be ignored.

Thanks in advance for any assistance!

This is all on Debian 12 using images:debian/12 containers from the default repository, and incus 6.0.1 from Debian bookworm-backports/main.

For historical reasons, the host has bridge interfaces configured. These were for libvirt/KVM, which won’t be required after the move to Incus, so we can remove them if necessary.

From /etc/network/interfaces:

auto br0
iface br0 inet static
	bridge_ports eno1
	bridge_stp on
	bridge_waitport 30
	bridge_fd 15
	bridge_maxwait 60

iface br0 inet6 static
	address 2001:db8::65/64
	gateway 2001:db8::1

And the Incus network. Ignore the odd interface numbering… this was created after much testing and experimentation. It’ll be cleaned up for production.

% incus network show incusbr1
config:
  ipv4.firewall: "true"
  ipv6.address: 2001:db8::161/64
  ipv6.dhcp.ranges: 2001:db8::170-2001:db8::190
  ipv6.dhcp.stateful: "true"
  ipv6.firewall: "true"
description: ""
name: incusbr1
type: bridge
used_by:
- /1.0/instances/testbr1
- /1.0/profiles/managedbridge
managed: true
status: Created
locations:
- none
project: default

And a simple profile from which each instance should inherit its network config.

% incus profile show managedbridge
config: {}
description: ""
devices:
  eth0:
    network: incusbr1
    type: nic
name: managedbridge
used_by:
- /1.0/instances/testbr1
project: default

% incus config show testbr1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20240625_21:51)
  image.os: Debian
  image.release: bookworm
  image.serial: "20240625_21:51"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 829fdd871b9800499b796be23199e7def47108cc9ab04a401c9465c5a5ac08d8
  volatile.cloud-init.instance-id: 16ac5316-f21f-45c6-b554-c6b93e165108
  volatile.eth0.host_name: vethdb9c131a
  volatile.eth0.hwaddr: 00:16:3e:87:f0:c3
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: e887cc09-7665-4483-8cd4-718855b72b92
  volatile.uuid.generation: e887cc09-7665-4483-8cd4-718855b72b92
devices: {}
ephemeral: false
profiles:
- default
- managedbridge
stateful: false
description: ""

% incus list testbr1
+---------+---------+--------+----------------------+-----------+-----------+
|  NAME   |  STATE  |  IPV4  |         IPV6         |   TYPE    | SNAPSHOTS |
+---------+---------+--------+----------------------+-----------+-----------+
| testbr1 | RUNNING | (eth0) | 2001:db8::18c (eth0) | CONTAINER | 0         |
+---------+---------+--------+----------------------+-----------+-----------+

As you can see above, I have explicitly set ipv4.firewall and ipv6.firewall just to make sure Incus is putting its rules in place.

% sudo nft list tables
table inet incus

% sudo nft list table inet incus
table inet incus {
	chain fwd.incusbr1 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "incusbr1" accept
		ip version 4 iifname "incusbr1" accept
		ip6 version 6 oifname "incusbr1" accept
		ip6 version 6 iifname "incusbr1" accept
	}

	chain in.incusbr1 {
		type filter hook input priority filter; policy accept;
		iifname "incusbr1" tcp dport 53 accept
		iifname "incusbr1" udp dport 53 accept
		iifname "incusbr1" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "incusbr1" udp dport 67 accept
		iifname "incusbr1" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "incusbr1" udp dport 547 accept
	}

	chain out.incusbr1 {
		type filter hook output priority filter; policy accept;
		oifname "incusbr1" tcp sport 53 accept
		oifname "incusbr1" udp sport 53 accept
		oifname "incusbr1" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		oifname "incusbr1" udp sport 67 accept
		oifname "incusbr1" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		oifname "incusbr1" udp sport 547 accept
	}
}
% sysctl net.ipv4.conf.all.forwarding net.ipv6.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1

The container is not pingable from the host on v4 or v6.

The easiest choice would be to use proxy devices. That is, the containers with the services would still sit on a private bridge, but you would expose the corresponding ports through proxy devices.

Mail server example. This spawns a separate Incus process that listens on the host at port 587 and connects to port 587 in the container (over its loopback).

incus config device add mysmtpcontainer myport587 proxy listen=tcp: connect=tcp:
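For reference, a filled-in version might look like this; the listen/connect addresses were elided above, so these are placeholders:

```shell
# Hypothetical complete invocation: listen on the host's wildcard
# address, forward to port 587 on the container's loopback.
incus config device add mysmtpcontainer myport587 proxy \
    listen=tcp:0.0.0.0:587 connect=tcp:127.0.0.1:587
```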

The downside with that setup is that the service in the container will not be able to figure out the real IP address of the client. While it greatly simplifies GDPR, it’s not desirable in most cases.

The solution is to use the Incus proxy device along with the PROXY protocol. This adds a bit of complexity, but once you set it up, it works fine.

To enable the PROXY protocol on the Incus proxy device, add proxy_protocol=true to the command line. For this to work, the service in the container must itself support the PROXY protocol; Postfix, for example, does.

incus config device add mysmtpcontainer myport587 proxy proxy_protocol=true listen=tcp: connect=tcp:
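On the Postfix side, PROXY protocol support is typically enabled per service in master.cf. A sketch (the smtpd_upstream_proxy_protocol parameter exists in Postfix 2.10 and later; adapt the service list to your setup):

```
# /etc/postfix/master.cf inside the container (sketch):
# expect a PROXY protocol header on the submission service.
submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_upstream_proxy_protocol=haproxy
```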

Such a setup with the PROXY protocol is common when you want to host many websites in separate containers. You use an additional container that serves as the proxy and install nginx (or another web server) in it as a reverse proxy.
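Inside such a proxy container, an nginx server block could accept the PROXY protocol and recover the client address roughly like this (a sketch; backend-container is a hypothetical upstream name):

```
# Hypothetical nginx sketch: accept the PROXY protocol from the Incus
# proxy device and restore the real client address via the realip module.
server {
    listen 80 proxy_protocol;
    set_real_ip_from 127.0.0.1;   # address the proxy device connects from
    real_ip_header proxy_protocol;

    location / {
        proxy_pass http://backend-container;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```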

Does incus network list show br0 as managed: false?

If so, you can attach containers directly to it. For example, create a profile called br0 (incus profile copy default br0; incus profile edit br0) and tweak it to look like this:

config: {}
description: Bridge to backbone
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br0
project: default

Then set your container’s profile to br0 (when launching or afterwards).
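Alternatively, the same NIC can be attached to a single container directly, without a profile (mycontainer is a placeholder name):

```shell
# One-off equivalent of the profile's eth0 device:
# attach the container directly to the host bridge br0.
incus config device add mycontainer eth0 nic \
    nictype=bridged parent=br0 name=eth0
```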

> The downside with that setup is that the service in the container will not be able to figure out the real IP address of the client. While it greatly simplifies GDPR, it’s not desirable in most cases.

I can look at this, but it seems like it would be a problem for a mail server, which needs to know the remote address in order to do anti-spam filtering.

> If so, you can attach containers directly to it. For example, create a profile called br0 (incus profile copy default br0; incus profile edit br0) and tweak it to look like this:

That leaves the container on an unmanaged network, though… so no Incus zones, or address assignment. The device configuration for nictype: bridged doesn’t have any options for setting IP addresses, and the container images don’t have the usual Debian network setup in /etc/network/.

Correct. You asked for simple :slight_smile: - and you did say you currently work this way.

Addresses are assigned statically inside the guest container, or if you want to use DHCP then on your upstream DHCP server on that subnet.
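For example, with the Debian images a static assignment inside the guest could look like this (placeholder addresses, using the same ifupdown style as the host config above):

```shell
# Inside the container: create a minimal static ifupdown config.
cat > /etc/network/interfaces.d/eth0 <<'EOF'
auto eth0
iface eth0 inet static
    address 192.0.2.20/24
    gateway 192.0.2.1

iface eth0 inet6 static
    address 2001:db8::20/64
    gateway 2001:db8::1
EOF
ifup eth0
```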

Probably because without those files it defaults to DHCP on eth0.

You can avoid creating those configs manually by using cloud-init, which is very amenable to scripting.

# incus launch -c user.network-config="version: 2
ethernets:
  eth0:
    dhcp4: false
    accept-ra: false
    addresses:
      - $ADDRESS4
      - $ADDRESS6
    gateway4: $GATEWAY4
    gateway6: $GATEWAY6
    nameservers:
      search: [$DOMAIN]
      addresses: [$NAMESERVERS]
" -c user.user-data="#cloud-config
disable_root: false
users: []
" -p br0 images:ubuntu/24.04/cloud foo

But if you want incus to manage the networking, then yes, the container or VM needs to sit on an incus bridge. In that case, you need to arrange for routing of inbound traffic.

Regular static routing works fine for that (e.g. give the incus bridge a /28 of real public IPs, and then add a static route on the upstream router) (*). However, your container is then tied to that particular host, unless you start messing with overlay networks and the like, which I’ve not attempted.
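As a sketch with placeholder addresses: if the incus bridge holds 192.0.2.16/28 and the host’s own address on the backbone is 192.0.2.65, the upstream router (Linux syntax here) would need something like:

```shell
# On the upstream router: send the container /28 via the incus host.
ip route add 192.0.2.16/28 via 192.0.2.65
```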

You could assign a public loopback address to each container, and then announce that into your IGP (or use static routes). That’s a good way to use individual IP addresses efficiently.

As has already been suggested, you can use one of the incus proxy modes, so external clients connect to the container host’s IP instead of the container itself. Again, you’re pretty much tied to the incus host.

Or depending on the use case, you can run your own upstream HTTP or SNI reverse proxy.

(*) You don’t want to take a /28 which overlaps with the /24 if it is already live on a subnet, or you’re going to have to start doing proxy ARP nonsense.

As an example, Postfix supports the PROXY protocol, so there is a way for the mail server to get the remote address.

Or if you use the NAT mode of proxying, then the source IP address is preserved.
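A sketch of NAT mode, with placeholder addresses; note that nat=true requires the instance to have a static IP configured on an Incus-managed bridge:

```shell
# NAT-mode proxy: traffic is DNAT-ed rather than terminated, so the
# container sees the client's real source address.
incus config device add mysmtpcontainer myport587 proxy nat=true \
    listen=tcp:198.51.100.10:587 connect=tcp:10.10.10.5:587
```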
