Is there any way to build out Incus networking the way a network engineer builds out switches, switchports, VLANs, and trunks, without the automatic setup of dnsmasq and NAT?
I'm setting up Incus on Debian 12 in a VMware vCenter virtual machine connected to a distributed port group (VLAN). I want to interconnect Incus containers through the distributed switch into an NSX underlay/overlay infrastructure, and I don't want to wrestle with NAT, dnsmasq, or any other service that Incus configures automatically.
Is there any way to build out Incus networking the way one builds a Cisco infrastructure?
You can have an unmanaged bridge (created by the OS) with vlan_filtering enabled on it, at which point you can attach Incus instances to whatever VLANs you want.
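For example, something like this (c1 and br0 are placeholder instance and bridge names):

```
incus config device add c1 eth0 nic nictype=bridged parent=br0 vlan=1000 vlan.tagged=2000,2001
```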
Which will then get you a native VLAN of 1000 and tagged VLANs for IDs 2000 and 2001 in the instance.
In this case, you do need to configure br0 on your system through your OS's network management tool (systemd-networkd, NetworkManager, netplan, …), and you need to ensure that the vlan_filtering flag is properly set and that the uplink device for your bridge (usually an enpXs0 device) has all the correct tags set on it too.
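For illustration, the state the OS configuration needs to end up in corresponds roughly to these iproute2 commands (enp5s0 is a placeholder for the real uplink name; your network management tool should produce the equivalent persistently):

```
# Create the bridge with VLAN filtering enabled
ip link add br0 type bridge vlan_filtering 1

# Enslave the uplink NIC to the bridge
ip link set enp5s0 master br0

# Allow the relevant VLANs on the uplink port (tagged)
bridge vlan add dev enp5s0 vid 1000
bridge vlan add dev enp5s0 vid 2000
bridge vlan add dev enp5s0 vid 2001

# Bring everything up
ip link set br0 up
ip link set enp5s0 up
```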
Any plans to add a capability to the Incus CLI that can set up a managed bridge interconnect that trunks VLANs out through a host NIC? Such that Incus can create and control a switch that interconnects Incus containers to something similar to a port group (VLAN) on a VMware Distributed Switch, with the ability to trunk and aggregate uplinks through the host NICs?
VMware ESXi can set up multiple 10G NICs for a VM running an Incus host, and such an Incus bridge could punt the VLAN frames into an overlay network. Doing all of this within the Incus CLI would be next level and would mean no messing with Linux networking. (Cisco's IOS CLI is, for the most part, very logical and intuitive, especially the contextual help; I've not come across any CLI help system that can match Cisco's.)
Continuing this thought, could Incus interface directly with OVS APIs to extend this management into an OVN infrastructure, at least as far as an OVS? And vice versa from the OVN side, where OVN network engineers could set up Incus networking from an OVS CLI?
One thing that is very obvious about Linux networking (at least to me as a network-centric admin) is that network configuration is approached from the point of view of the host/access layer, looking from the host in towards the network. That perspective is reinforced by the fact that a host NIC and its attached bridge interface are not switchports and cannot be configured like true switchports.
Maybe the answers will become clearer as I work with Incus and NSX and with OVN infrastructure. Cheers.
This would be a great feature, especially for those wanting to move away from VMware and keep their very speedy existing switches. Love the Private Cloud feature in Incus, but it becomes cumbersome when I just want to set up Kubernetes cluster VMs and MetalLB/BGP.
Update: stgraber’s solution works great. Just make sure to reboot after creating the systemd files.
I would like to convert my home server config to using a VLAN-aware bridge, but I’m stuck at one point: how do I give the server itself its management IP address on a tagged VLAN?
This is a server with a single external NIC. My current netplan config looks roughly like this (a minimal sketch, with a whole bunch of VLANs and bridges elided and eno1 standing in for the real NIC name):
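```yaml
network:
  version: 2
  ethernets:
    eno1: {}            # single external NIC (placeholder name)
  bridges:
    br0:
      interfaces: [eno1]
  vlans:
    br0.100:
      id: 100
      link: br0
      # ... more VLANs and addresses elided ...
```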
But how do I configure an IP address on "br0.255", i.e. where management traffic is tagged with VLAN 255? (And some Incus containers will be using VLAN 255 too.)
If necessary, I could convert the upstream port to use native (untagged) frames for VLAN 255, and then I guess I could put the management IP address directly on br0, but I'd like to avoid that if possible.
Yeah, your best bet is to make the management VLAN the untagged VLAN.
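With the switchport's native VLAN set to 255, the netplan side reduces to putting the address on br0 itself, something like this (addresses are examples):

```yaml
network:
  version: 2
  ethernets:
    eno1: {}
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.0.2.10/24]
      routes:
        - to: default
          via: 192.0.2.1
```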
The alternative would be to create a dummy ethernet device and have it put into the bridge with the appropriate VLAN filter. But I'm not sure that's possible with netplan…
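If you go with systemd-networkd directly instead of netplan, one way this is commonly wired up is to make the bridge itself a member of the VLAN and hang a VLAN subinterface off it for the management address. A rough sketch of the general shape (all names and addresses below are placeholders, not a drop-in config):

```
# /etc/systemd/network/10-br0.netdev
[NetDev]
Name=br0
Kind=bridge

[Bridge]
VLANFiltering=yes

# /etc/systemd/network/10-br0.network
[Match]
Name=br0

[Network]
# Create the VLAN subinterface br0.255 on top of the bridge
VLAN=br0.255

[BridgeVLAN]
# Make the bridge itself ("self") a member of VLAN 255
VLAN=255

# /etc/systemd/network/11-eno1.network
[Match]
Name=eno1

[Network]
Bridge=br0

[BridgeVLAN]
# Carry VLAN 255 tagged on the uplink port
VLAN=255

# /etc/systemd/network/12-br0.255.netdev
[NetDev]
Name=br0.255
Kind=vlan

[VLAN]
Id=255

# /etc/systemd/network/12-br0.255.network
[Match]
Name=br0.255

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
```

After creating the files, restarting systemd-networkd (or rebooting, as noted above) applies them.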