Managed bridge default gateway

I created a bridged interface using Incus with the following configuration:

config:
  bridge.external_interfaces: enp7s0
  ipv4.address: 192.168.70.2/24
  ipv4.dhcp: "false"
  ipv4.nat: "true"
  ipv4.routes: 0.0.0.0/0
description: ""
name: incusbr0
type: bridge
used_by:
managed: true
status: Created
locations:
- none
project: default

This configuration yields the following output from the ip route command:

default dev incusbr0 proto static scope link 
169.254.0.0/16 dev incusbr0 scope link metric 1000 
192.168.70.0/24 dev incusbr0 proto kernel scope link src 192.168.70.2 

How can I set the default route to a specific IP using Incus? I couldn’t find any related option.
I would like the output of ip route to be:

default via 192.168.70.1 dev incusbr0  proto static scope link

Thanks

Oh, I see, you’re trying to configure your host’s own networking through that bridge.

I wouldn’t really recommend doing that, as anything going wrong with Incus will leave you without networking on your machine.

Instead, for such environments, it’s usually simpler to configure your OS networking to set up a bridge for you, then tell Incus about it so it can place its instances on it.

I don’t know what distro you’re using, but on Ubuntu you’d do that through Netplan (/etc/netplan/): basically, define a bridge in there with a static address and the correct gateway. Then, once you’ve confirmed that your machine can boot and has functional connectivity that way, you’d add the network to Incus with:

incus network create my-bridge parent=my-bridge --type=physical
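
For reference, the Netplan definition for such a bridge could look something like the sketch below (the interface name enp7s0, the file name and the addresses are just examples to adapt to your network):

# /etc/netplan/99-my-bridge.yaml (example file name)
network:
  version: 2
  ethernets:
    enp7s0:                  # physical interface that joins the bridge
      dhcp4: false
  bridges:
    my-bridge:               # name that Incus will later use as "parent"
      interfaces: [enp7s0]
      addresses: [192.168.70.2/24]
      routes:
        - to: default
          via: 192.168.70.1
      nameservers:
        addresses: [192.168.70.1]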

Creating that physical network then lets you configure your instances with a normal managed NIC:

eth0:
  type: nic
  name: eth0
  network: my-bridge
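
If you prefer doing it from the CLI, attaching such a NIC to an existing instance would look something like this (the instance name c1 is a placeholder):

# "c1" is a placeholder instance name
incus config device add c1 eth0 nic network=my-bridge name=eth0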

Hi Stéphane,

I’m using Debian 12 with networking configured by NetworkManager.
I implemented the suggested steps, but I ran into a problem. When I apply that Incus configuration, the host network gets reconfigured: it resets the routing configuration. For instance, I had configured a default gateway, and applying the Incus change cleared that configuration. It’s very weird.
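
For context, creating such a bridge with NetworkManager looks roughly like the commands below (the interface name enp7s0 and the addresses are only examples, not exactly what I have here):

# bridge with a static address and default gateway
nmcli connection add type bridge ifname my-bridge con-name my-bridge
nmcli connection modify my-bridge ipv4.method manual \
    ipv4.addresses 192.168.70.2/24 ipv4.gateway 192.168.70.1 ipv4.dns 192.168.70.1
# attach the physical interface to the bridge
nmcli connection add type ethernet ifname enp7s0 master my-bridge
nmcli connection up my-bridge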

Hmm, that is a bit weird.

An alternative would be to skip the incus network create entirely and instead make your NIC:

eth0:
  type: nic
  name: eth0
  nictype: bridged
  parent: my-bridge
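
The CLI equivalent for an existing instance would be something along these lines (the instance name c1 is again a placeholder):

# "c1" is a placeholder instance name
incus config device add c1 eth0 nic nictype=bridged parent=my-bridge name=eth0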

I have a similar requirement. Incus is running inside a VM (on VMware ESX). We are not allowed to use promiscuous mode on the VM’s interface, and in addition the containers should use a different network for outbound traffic than the Incus host. So instead of giving every container a physical network interface, my idea was to use a single separate VM interface, keep the containers on the default bridge, add some proxy devices using the second VM interface as the listen address (e.g. for nginx), and define a rule so that outbound traffic from the Incus bridge also uses the second interface.
Is this a valid idea/scenario? Could someone help me define the SNAT rule that forces traffic from the containers connected to the Incus bridge to go through the second interface (eth1)?

Incus seems to create firewall rules that NAT the internal traffic using masquerade, but without a specific interface:

sudo nft list ruleset
table inet incus {
        chain pstrt.lxdbr0 {
                type nat hook postrouting priority srcnat; policy accept;
                ip saddr 10.37.30.0/24 ip daddr != 10.37.30.0/24 masquerade
        }

Shouldn’t it be possible to add the interface eth1 to that rule?

EDIT: I found ipv4.nat.address among the bridge config parameters. Maybe that’s what I’m looking for? I tried setting it to the second interface’s IP, but without success so far.
EDIT2: How is bridge.external_interfaces intended to work? The documentation says the interface needs to be unconfigured for it to work, but then I don’t understand how the network gets configured at all …
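
What I tried was roughly the following (the bridge name lxdbr0 is taken from the nft chain name above, and 192.0.2.10 stands in for the eth1 address):

# lxdbr0 and 192.0.2.10 are placeholders for the actual bridge name and eth1 address
incus network set lxdbr0 ipv4.nat.address=192.0.2.10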

I manually modified the nft postrouting rule for testing so that it uses eth1

        chain pstrt.lxdbr0 {
                type nat hook postrouting priority srcnat; policy accept;
                oifname "eth1" ip saddr 10.37.30.0/24 ip daddr != 10.37.30.0/24 masquerade
        }

and the traffic now correctly leaves through the eth1 interface with the correct NAT IP, but for some reason the packets do not return to the container.

Does anybody have a setup with more than one network interface on the host, and a hint on how to configure the Incus bridge to use a specific interface?

That’s how I run containers on my own (physical) servers, and it is my favorite config. On (VMware) VMs there is the restriction of not allowing more than one MAC address per interface (promiscuous mode has several issues). For Incus running in a VM, binding an Incus bridge to a host interface / gateway would help to connect containers to different networks (without routing).