Incus networking for network engineers

Is there any way to build out Incus networking the way a network engineer builds out switches, switchports, VLANs, and trunks - without the automatic setup of dnsmasq and NAT?

I’m setting up Incus on Debian 12 in a VMware vCenter virtual machine connected to a distributed port group (VLAN). I want to interconnect Incus containers through the distributed switch into an NSX underlay/overlay infrastructure, and I do not want to wrestle with NAT, dnsmasq, or any other service that Incus configures automatically.

Is there any way to build up Incus networking like one builds a Cisco infrastructure?

You can have an unmanaged bridge (created by the OS) with vlan_filtering enabled on it, at which point you can attach Incus instances to whatever VLANs you want.

Looks like:

incus config device add MY-INSTANCE eth0 nic nictype=bridged name=eth0 parent=br0 vlan=1000 vlan.tagged=2000,2001

Which will then get you a native VLAN of 1000 and tagged VLANs for ID 2000 and 2001 in the instance.
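Inside the instance, the native VLAN (1000) then arrives untagged on eth0, while VLANs 2000 and 2001 arrive tagged, so the instance needs its own VLAN subinterfaces for those. A minimal sketch with iproute2, using the interface and VLAN IDs from the example above:

```shell
# Run inside the instance. Untagged traffic (VLAN 1000 on the bridge)
# is already usable on eth0; the tagged VLANs need subinterfaces.
ip link add link eth0 name eth0.2000 type vlan id 2000
ip link add link eth0 name eth0.2001 type vlan id 2001
ip link set eth0.2000 up
ip link set eth0.2001 up
```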

In this case, you do need to configure br0 on your system through your OS’ network management tool (systemd-networkd, NetworkManager, netplan, …) and need to ensure that the vlan_filtering flag is properly set and that the uplink device for your bridge (usually an enpXs0 device) has all the correct tags set on it too.

On Ubuntu with netplan it looks something like:

  bridges:
    # Main bridge
    br0:
      interfaces:
        - enp5s0

cat /etc/systemd/network/10-netplan-enp5s0.network.d/vlan.conf
[BridgeVLAN]
VLAN=1000

[BridgeVLAN]
VLAN=2000

[BridgeVLAN]
VLAN=2001

cat /etc/systemd/network/10-netplan-br0.netdev.d/vlan.conf
[NetDev]
Name=br0
Kind=bridge

[Bridge]
MulticastSnooping=false
VLANFiltering=true
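For comparison, here is roughly what those netplan/networkd fragments amount to in raw iproute2 commands - a sketch for understanding only (run as root, interface names from the example above), not something you would normally do by hand alongside netplan:

```shell
# Create a VLAN-aware bridge and enslave the uplink.
ip link add name br0 type bridge vlan_filtering 1 mcast_snooping 0
ip link set enp5s0 master br0
# Allow the same tags on the uplink that the instances will use.
bridge vlan add dev enp5s0 vid 1000
bridge vlan add dev enp5s0 vid 2000
bridge vlan add dev enp5s0 vid 2001
ip link set enp5s0 up
ip link set br0 up
```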

Awesome, thank you Stéphane. I’ll give that a go.

Any plans to add a capability to the Incus CLI that can set up a managed bridge interconnect that trunks VLANs out through a host NIC? That is, could Incus create and control a switch that interconnects Incus containers with something similar to a Port Group (VLAN) on a VMware Distributed Switch, with the ability to trunk and aggregate uplinks through the host NICs?

VMware ESXi can set up multiple 10G NICs for a VM running an Incus host, and such an Incus bridge could punt the VLAN frames into an overlay network. Doing all of this within the Incus CLI would be next level and would mean no messing with Linux networking. (Cisco’s IOS CLI is, for the most part, very logical and intuitive - especially the contextual help. I’ve not come across any CLI help system that can match Cisco’s.)

Continuing this thought, could Incus interface directly with OVS APIs to extend this management into an OVN infrastructure, at least as far as an OVS? And vice versa from the OVN side, where OVN network engineers could set up Incus networking from an OVS CLI?

One thing that is very obvious about Linux networking (at least to me as a network-centric admin) is that the configuration is approached from the point of view of the host/access layer - looking from the host in towards the network. That perspective seems reinforced by the fact that a host NIC and its attached bridge interface are not switchports and cannot be configured like a true switchport.

Maybe the answers will become clearer as I work with Incus and NSX and with OVN infrastructure. Cheers.

This would be a great feature, especially for those wanting to move away from VMware and keep their very speedy existing switches. I love the Private Cloud feature in Incus, but it becomes cumbersome when I just want to set up Kubernetes cluster VMs and MetalLB/BGP.

Update: stgraber’s solution works great. Just make sure to reboot after creating the systemd files.

I would like to convert my home server config to using a VLAN-aware bridge, but I’m stuck at one point: how do I give the server itself its management IP address on a tagged VLAN?

This is a server with a single external NIC. My current netplan config looks like this (with a whole bunch of vlans and bridges elided):

network:
  version: 2
  ethernets:
    enp1s0:
      wakeonlan: true
      dhcp4: false
      accept-ra: false
      link-local: []
  vlans:
    #...
    vlan254:
      id: 254
      link: enp1s0
      accept-ra: false
      link-local: []
    vlan255:
      id: 255
      link: enp1s0
      accept-ra: false
      link-local: []
  bridges:
    #...
    br254:
      macaddress: aa:bb:cc:dd:ee:ff
      interfaces: [vlan254]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      link-local: []
    br255:
      # Use enp1s0's MAC address, see https://bugs.launchpad.net/netplan/+bug/1782221
      macaddress: aa:bb:cc:dd:ee:ff
      interfaces: [vlan255]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      addresses: [10.12.255.13/24]
      gateway4: 10.12.255.1
      nameservers:
        addresses: [10.12.255.1]
        search: [home.example.net]

You can see the management IP is on br255, which bridges to vlan255, which is tagged on the connection to the upstream router via enp1s0.

I was hoping to replace it with a single bridge that does VLAN filtering, something like this:

network:
  version: 2
  ethernets:
    enp1s0:
      wakeonlan: true
      dhcp4: false
      accept-ra: false
      link-local: []
  bridges:
    br0:
      macaddress: aa:bb:cc:dd:ee:ff
      interfaces: [enp1s0]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      link-local: []

==> /etc/systemd/network/10-netplan-br0.netdev <==
[NetDev]
Name=br0
Kind=bridge

[Bridge]
MulticastSnooping=false
VLANFiltering=true

==> /etc/systemd/network/10-netplan-enp1s0-vlan.network <==
[Match]
Name=enp1s0

[BridgeVLAN]
VLAN=249-255

But how do I configure an IP address on “br0.255”, i.e. management traffic is tagged 255? (And some incus containers will be using vlan 255 too).

If necessary, I could convert the upstream port to use native frames for vlan 255, then I guess I could put the management IP address directly on br0 - but I’d like to avoid that if possible.

Yeah, your best bet is to make the management VLAN the untagged VLAN.

The alternative would be to create a dummy ethernet device and have that be put into the bridge with the appropriate VLAN filter. But I’m not sure that this is possible with netplan…

After spending some time playing with systemd-networkd in containers, I solved this.

The answer is that you can assign a PVID to the bridge itself:

==> /etc/netplan/01-netcfg.yaml <==
network:
  version: 2
  ethernets:
    enp1s0:
      wakeonlan: true
      dhcp4: false
      accept-ra: false
      link-local: []
  bridges:
    br0:
      # See https://bugs.launchpad.net/netplan/+bug/1782221
      macaddress: aa:bb:cc:dd:ee:ff
      interfaces: [enp1s0]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      addresses: [10.12.255.13/24, "2001:db8::13/64"]
      routes:
        - to: default
          via: 10.12.255.1
        - to: default
          via: "2001:db8::1"
      nameservers:
        addresses: [10.12.255.1]
        search: [home.example.net]

==> /etc/systemd/network/10-netplan-br0.netdev.d/vlan.conf <==
[Bridge]
MulticastSnooping=false
VLANFiltering=true

==> /etc/systemd/network/10-netplan-br0.network.d/vlan.conf <==
[BridgeVLAN]
VLAN=255
PVID=255
EgressUntagged=255

==> /etc/systemd/network/10-netplan-enp1s0.network.d/vlan.conf <==
[BridgeVLAN]
VLAN=2-3
VLAN=248-256
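Once the config is applied, the resulting per-port VLAN table can be checked with iproute2 (a verification sketch, assuming the interface names above):

```shell
# Show per-port VLAN membership; br0 itself should appear with
# "255 PVID Egress Untagged" and enp1s0 with the trunked ranges.
bridge vlan show
# Confirm filtering is actually enabled (look for "vlan_filtering 1").
ip -d link show br0
```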

Then all I had to do was to change the profiles for the containers, e.g. the profile “br255” changed from:

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br255
    type: nic

to:

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
    vlan: "255"

And all is working!
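For anyone following along: the same profile change can presumably also be made from the CLI rather than by editing the YAML, something like this (profile and device names as above):

```shell
# Repoint the profile's NIC at the VLAN-aware bridge and set the
# untagged VLAN the instance should land on.
incus profile device set br255 eth0 parent=br0 vlan=255
```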

Ah, nice trick!

Unfortunate to see all those options missing on the netplan side, it’d be really nice not to have to mix networkd and netplan syntax…