Secure public Multi-IP instances?

Hello everyone,

I’m fairly new to Incus (a couple of weeks of experience) and I have a networking issue. My Incus node runs Rocky Linux 9.4 with Incus 6.6; it also has openvswitch 3.3 installed, but usually not running. Incus is not clustered.

My primary interface is br0 (unmanaged by Incus) and typically I just chuck Instances that need to be publicly available onto br0:

devices:
  eth0:
    ipv4.address: 2XX.XXX.XXX.180 # public IP
    ipv6.address: 2001:db8::0d9a      # dummy IPv6 so that 'mac_filtering' can be used
    nictype: bridged
    parent: br0
    security.ipv4_filtering: "true"
    security.ipv6_filtering: "true"
    security.mac_filtering: "true"
    type: nic

I’m seeding the ARP table (via a self-written daemon) after instances have started, so even instances with IPs outside the network range assigned to br0 are reachable.
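For illustration, what the daemon injects per instance IP boils down to something like this (the MAC and IP below are placeholders, not my real values):

ip neigh replace 2XX.XXX.XXX.180 lladdr 00:16:3e:aa:bb:cc dev br0   # seed a static ARP entry on the node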

My question: how can I safely run instances that have more than one IPv4 and/or IPv6 address? The ‘ipv4.address’ option only takes a single IP.

Multiple network interfaces in the instance config? That makes routing a bit “interesting”, to say the least, but if someone has a working config for that, I’d love to try it.
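To be clear about what I mean, a two-NIC variant of my config above would presumably look like this (untested; IPs are placeholders):

devices:
  eth0:
    ipv4.address: 2XX.XXX.XXX.180   # first public IP
    nictype: bridged
    parent: br0
    type: nic
  eth1:
    ipv4.address: 2XX.XXX.XXX.181   # second public IP on a second interface
    nictype: bridged
    parent: br0
    type: nic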

I also tried MACVLAN and OVS after extensively studying the documentation, watching related YouTube videos, and checking the forum here and some blogs, but there always seems to be something missing, or some piece of need-to-know that’s taken for granted. I tortured ChatGPT-4o for two weeks straight on the issue, and while I learned a lot, I’ve run out of ideas, so I thought I’d ask here:

Does anyone have a guide or pointers on how to expose Incus instances with multiple IPv4/IPv6 addresses publicly via an unmanaged br0, in any way? Static IP assignment via cloud-init is fine. Extra routing on the Incus node? Manually seeding the ARP table? All totally acceptable. OVS? MACVLAN? If so, I’d really appreciate a few pointers on how to do this.
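For the cloud-init side, I’d expect to push a network-config like this into the instance (version 2 / netplan syntax; addresses and gateway are placeholders):

# cloud-init network-config, version 2; placeholder addresses
version: 2
ethernets:
  eth0:
    addresses:
      - 2XX.XXX.XXX.180/24
      - 2XX.XXX.XXX.181/24
    gateway4: 2XX.XXX.XXX.1
    nameservers:
      addresses: [8.8.8.8, 8.8.4.4]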

What I tried so far:

Scenario: in-house Incus node at 192.168.1.9 on a private /24 network. Intent: create an instance with IPs 192.168.1.34-36 in a NetworkManager-created VLAN and attach that VLAN to the instance:

nmcli connection add type vlan con-name vlan2 ifname br0.2 dev br0 id 2

nmcli connection modify vlan2 ipv4.method manual \
ipv4.addresses "192.168.1.34/32,192.168.1.35/32,192.168.1.36/32" \
ipv4.dns "8.8.8.8,8.8.4.4" \
ipv4.never-default true

nmcli connection modify vlan2 +ipv4.routes "192.168.1.34/32,192.168.1.35/32,192.168.1.36/32"

nmcli connection up vlan2
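For reference, the node side can be checked afterwards with:

ip -d link show br0.2   # VLAN interface exists with the expected VLAN ID
ip -4 addr show br0.2   # the .34-.36 addresses are assigned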

The idea is to then connect it to the Incus instance:

incus config device add <instance> eth0 nic \
    nictype=bridged \
    parent=br0.2 \
    name=eth0 \
    vlan=2 # Not sure if that's needed here; tried with and without

If this worked, it would be ideal, as NetworkManager would handle the static IP assignments and routes on the node and set up the VLAN. However, I struggle to connect this to the instance:

Incus sees br0.2 as “unmanaged”, and when starting the instance it complains: “Failed to connect to OVS”.

When I start openvswitch in an unconfigured state, I get this instead:

[root@home ~]# incus start second
Error: Failed to start device "eth0": object not found
Try `incus info --show-log second` for more info

[root@home ~]# incus info --show-log second
Name: second
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2024/10/18 21:33 -05
Last Used: 2024/10/20 14:33 -05

Log:

So far I have failed to come up with a working OVS setup. I’d appreciate any input or suggestions anyone might have.

Thank you!

For plain bridge, we may be able to make ipv4.routes.external and ipv6.routes.external mean something in this scenario, though it may make for slightly unpleasant iptables/ebtables rules.
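Presumably that would end up looking something like this on a bridged NIC (hypothetical syntax, not implemented today; IPs are placeholders):

devices:
  eth0:
    nictype: bridged
    parent: br0
    ipv4.routes.external: 2XX.XXX.XXX.181/32,2XX.XXX.XXX.182/32   # hypothetical key, per the suggestion above
    type: nic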