Incus cluster: create simple VXLAN managed bridge type network on dedicated LAN segment

Hello Incus community,

I run an Incus (6.0.3.4) cluster on Debian 12. I have 4 Debian-based hosts in the cluster and am currently experimenting with the managed network features. For my tests I have connected all 4 hosts with a separate physical Ethernet card each, and I want to create a managed bridged Incus cluster network with an uplink to these dedicated physical network cards. The forum posts and docs I could find did not help me set it up.

I saw Stéphane Graber posting that this should all have been possible to set up already in 2019, but I could not find any usable howto or further information on

  • host system prerequisites
  • limitations
  • complete & working setup directives for the cluster and its members

To begin with, I want to create a simple VNI=100 VXLAN network in my cluster that runs over my 4 dedicated NICs, which should therefore be part of the per-host cluster bridges, one on each host. I want to add this and similar networks with different VXLAN IDs in managed mode to my Incus cluster and connect VMs/containers to them, so that they can talk on this VXLAN network in an isolated manner across my cluster. Later it would be nice to add more VXLAN segments, ideally to the same host bridge, if that is possible with Incus.
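For reference, this is roughly the plain iproute2 setup I am trying to reproduce as a managed Incus network. It is only a sketch of my lab; eth1, the underlay IPs 10.0.0.1-4 and the vx100/br-vx100 names are placeholders:

```
# on each host (shown for the host with underlay IP 10.0.0.1 on eth1)
# create the VNI 100 VXLAN interface on top of the dedicated NIC
ip link add vx100 type vxlan id 100 dev eth1 local 10.0.0.1 dstport 4789
# unicast mode: add the other three hosts as flood peers
bridge fdb append 00:00:00:00:00:00 dev vx100 dst 10.0.0.2
bridge fdb append 00:00:00:00:00:00 dev vx100 dst 10.0.0.3
bridge fdb append 00:00:00:00:00:00 dev vx100 dst 10.0.0.4
# bridge that the containers/VMs plug into, with vx100 as uplink
ip link add br-vx100 type bridge
ip link set vx100 master br-vx100
ip link set vx100 up
ip link set br-vx100 up
```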

I read information here

But it is not clear to me,

  1. What extra software prerequisites do I need for my VXLAN network, if any? Do I need to create the VXLAN-aware bridge on every host and the VXLAN uplink to the unicast VXLAN network manually, or will Incus do all of this for me? Do I need to have OVS/Open vSwitch deployed, or does the Linux iproute2 suite suffice?
  2. What would a simple Incus network setup command example look like, assuming all my hosts have eth1 available as the dedicated uplink network for my VXLAN? I especially wonder how to fill in the bridged network parameters that live under the “tunnel” namespace. As I have a cluster, I cannot see which of the bridged network parameters are member-specific and which are global… What if the uplink NICs do not all have the same name eth1? What other parameters are mandatory or required to set up my “simplevxlan100” network? A hedged guess at what I expected the commands to look like follows after this list.
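For question 2, this is only my guess at what the commands might look like, following the usual per-member-then-global creation pattern for cluster networks; the member names, the underlay IPs and the vx100 tunnel name are placeholders, and the tunnel.* keys are the ones listed in the bridge network docs:

```
# step 1: mark the network as pending on every cluster member
incus network create simplevxlan100 --target host1
incus network create simplevxlan100 --target host2
incus network create simplevxlan100 --target host3
incus network create simplevxlan100 --target host4

# step 2: instantiate it with the tunnel configuration
incus network create simplevxlan100 \
    tunnel.vx100.protocol=vxlan \
    tunnel.vx100.id=100 \
    tunnel.vx100.interface=eth1 \
    tunnel.vx100.local=10.0.0.1 \
    tunnel.vx100.remote=10.0.0.2
```

This is exactly where I get stuck: tunnel.vx100.local/remote would have to differ per member (and presumably there would have to be one tunnel.&lt;name&gt; entry per peer), but step 2 only takes global configuration.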

So any help or pointer to resources I did not find would be really appreciated :slight_smile:

I want to share my limited findings on this issue.

In my limited understanding, LXD and later Incus have good management support for simple virtual network infrastructure like fan, macvlan, or plain bridged and physical networks. On the other side, there is some management support for OVN SDNs, which allows management of (a limited subset of) complex virtual networking infrastructure.

But there seems to be a lack of management support for the complexity in between. That means, if you want to stick with VXLAN technology to virtualize your network segments without deploying a complex software stack like OVN, chances are good that you cannot manage it from inside Incus when you use a cluster setup. (A standalone server is a different story.)

Let’s take a look at the relevant network setup parameters for a bridged, cluster-managed VXLAN network. Example:

ipv4.address, tunnel.xxx.local, tunnel.xxx.remote, tunnel.xxx.interface …

These all seem to be global-scope managed network parameters. That is not a problem on a standalone server, but think of a cluster:

  • You surely want/need to provide the local/remote tunnel IPs for your unicast network on a per-member basis, as they have to differ on each member server.
  • Keeping the tunnel interface variable in the global namespace makes (at least to me) no sense either, as interface names may vary among cluster servers.
  • Setting the network bridge IP address globally with ipv4.address also actively prevents using this feature in clusters: in a shared virtual cluster network every member server needs its own separate IP address, and this global setting would give every member server the same one. I guess that for the same reason dhcp/dns management will not work with non-OVN virtual VXLAN clusters.
  • Another issue I had: using the iproute2 (native) bridge driver leads to managed bridges on the hosts that are all created with the same MAC address, so you have to manually change the bridge MACs on n-1 cluster servers to get a working network. As this “same interface definition → same MAC” behaviour is a feature of the native driver, Incus would have to work around it to make this usable in clusters.
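The manual workaround I ended up with for the MAC issue was to override the bridge MAC on all but one member, e.g. (the bridge name and the locally administered address are just examples):

```
# on n-1 members: give the managed bridge a unique, locally administered MAC
ip link set dev simplevxlan100 address 02:00:00:64:00:02
```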

Maybe I did not understand the scarce information provided in the networking variables documentation correctly; maybe Mr. Graber implicitly meant only non-clustered managed networks when he posted in 2019 that unicast VXLAN across multiple servers was possible with the LXD toolset… But I could not figure out how this could work when tunnel.xxx.remote is defined in the cluster-global namespace.

I write all this to help others not lose their time fiddling around with networking setups that do not seem to be consistently supported yet. It seems to me that you are better off setting up and managing these networks at the OS level as “Incus-unmanaged devices” and simply connecting your containers/VMs to these network bridges with bridged NICs/veths. If you are willing to handle the added complexity, stick with OVN.
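If you go that unmanaged route, connecting an instance is then just a plain bridged NIC device pointing at the OS-level bridge; a sketch with a hypothetical container c1 and the br-vx100 bridge from the earlier example:

```
# attach container c1 to the externally managed VXLAN bridge br-vx100
incus config device add c1 eth1 nic nictype=bridged parent=br-vx100
```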

As a side note: I managed to set up a multicast cluster VXLAN network (tunnel.xxx.group=239.1.1.1), which due to its nature does not require the tunnel.remote and tunnel.local parameters. I therefore used aligned tunnel.interface device names across the cluster. Multicast is not as efficient as unicast VXLAN, but it is great to have this technology managed from within the VM/cluster toolset.
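A sketch of how such a setup can look; the host names and the network/tunnel names are placeholders, and disabling the bridge addresses via ipv4/ipv6.address=none is just one way to sidestep the global address issue described above:

```
# step 1: mark the network as pending on every cluster member
incus network create mcvxlan100 --target host1
incus network create mcvxlan100 --target host2
incus network create mcvxlan100 --target host3
incus network create mcvxlan100 --target host4

# step 2: instantiate it with a multicast VXLAN tunnel (no local/remote needed)
incus network create mcvxlan100 \
    tunnel.vx100.protocol=vxlan \
    tunnel.vx100.id=100 \
    tunnel.vx100.group=239.1.1.1 \
    tunnel.vx100.interface=eth1 \
    ipv4.address=none \
    ipv6.address=none
```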