How to set the instance MTU for Incus with OVN

Now this sounds like it should be really easy, but … I need to be able to set the MTU of an instance independently of the Network MTU. Does anyone have a way of doing this?

I have a situation where my instances are running on an OVN network, which then uses OVN-IC to connect to a remote OVN cluster, over a VPN … so I have multiple layers of MTU, and of course OVS/OVN doesn't do fragmentation properly and silently drops anything oversized - so the MTU has to work. While it should be possible to calculate the correct MTU and assign it to the network, this doesn't seem to fly. The instance MTU seems to need to be consistently less than the network MTU to work; if I lower the network MTU to 1200, the instance MTU then needs to be around 1100.
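For context, the layering I'm trying to account for looks roughly like this (the 58-byte Geneve figure is my assumption for an IPv4 underlay with options; the VPN overhead depends on the VPN):

# rough layering arithmetic (assumptions, not measured):
#   1500  physical Ethernet MTU
#   - 58  Geneve encapsulation (outer IPv4 + UDP + Geneve header/options)
#   ----
#   1442  Incus default MTU for OVN networks
# whatever VPN carries the OVN-IC transit then subtracts its own
# per-packet overhead from that again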

My solution is to use the default network MTU of 1442, then set the instance MTU at 1200 with;

ifconfig eth0 mtu 1200  # in the instance
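(Where "ifconfig" isn't available, either of these should be equivalent - the second assumes a writable /sys inside the container:)

ip link set dev eth0 mtu 1200        # iproute2 equivalent
echo 1200 > /sys/class/net/eth0/mtu  # fallback when neither tool is present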

While this may not be 100% efficient, it does appear to be 100% reliable. My issue is how to set the MTU automatically after the instance is started. If I try to add an mtu key to the profile, it complains that "mtu" is not allowed with devices of type "network".

# incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    network: private
    type: nic
    # I want to put "mtu: 1200" here ...
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/npm
- /1.0/instances/demo
project: default

Unfortunately I'm using a mix of containers, including OCI, so cloud-init isn't going to work because it's not always available (I don't always have "ip" or "ifconfig", for that matter). I was hoping it could be set via DHCP, but I can't seem to find any reference to how to do it …
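For reference, what I had in mind was DHCP option 26 (interface MTU). As far as I can tell OVN's DHCPv4 options include an mtu key, so in theory something like this against the northbound DB might do it - untested, the UUID is a placeholder, and I'd expect Incus to overwrite it on the next config change:

ovn-nbctl set dhcp_options <dhcp-options-uuid> options:mtu="1200"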

Anybody know the answer or have anything I could try?
(does anyone know why having the instance MTU == the OVN network MTU doesn’t work?)

For me OVN-IC seems to handle PMTU:

Here is with just accessing the WAN (my WAN is at MTU 1440 for $reasons):

root@ic-test:~# ping -M do -s 1413 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1413(1441) bytes of data.
ping: local error: message too long, mtu=1440
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

root@ic-test:~# ping -M do -s 1412 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1412(1440) bytes of data.
1420 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=11.8 ms
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 11.810/11.810/11.810/0.000 ms
root@ic-test:~# 

And then accessing something over OVN-IC over that WAN:

root@ic-test:~# ping 10.170.69.2 -M do -s 1334
PING 10.170.69.2 (10.170.69.2) 1334(1362) bytes of data.
1342 bytes from 10.170.69.2: icmp_seq=1 ttl=62 time=4.80 ms
^C
--- 10.170.69.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.798/4.798/4.798/0.000 ms
root@ic-test:~# ping 10.170.69.2 -M do -s 1335
PING 10.170.69.2 (10.170.69.2) 1335(1363) bytes of data.
ping: local error: message too long, mtu=1362
^C
--- 10.170.69.2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

Ok, so on my setup I get;

# ping -c1 -M do -s 1413 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1413(1441) bytes of data.
ping: local error: message too long, mtu=1440

# ping -c1 -M do -s 1412 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1412(1440) bytes of data.
^C (hangs)
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

# ping -c1 -M do -s 1300 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1300(1328) bytes of data.
1308 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=11.8 ms

So it looks like I have some sort of local MTU issue. The largest packet that will produce a response is 1350 (1358). If I try 1359 while tcpdumping "br1" on the host, nothing hits the "br1" interface, so it's not making its way through the instance / OVN stack … given the instance MTU is 1440, it would imply there is something in-between that's lower … (?)
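For reference, the sort of thing I was watching for on the host was roughly this (filters from memory):

tcpdump -ni br1 'greater 1350'                                 # anything big actually leaving via br1
tcpdump -ni br1 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'  # any "fragmentation needed" coming back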

I'm thinking (?) something in the OVN setup, but I'm not sure what I'm looking for …

instance

# incus config show demo
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Debian bookworm arm64 (20250507_06:39)
  image.os: Debian
  image.release: bookworm
  image.serial: "20250507_06:39"
  image.type: squashfs
  image.variant: cloud
  volatile.base_image: c04e4334308f29b478dec0196c19694097eaaee221921cb100c39c5832d04bdd
  volatile.cloud-init.instance-id: 393277cf-0e13-44aa-a3c0-ebb2a3dc9c61
  volatile.eth0.host_name: veth55628552
  volatile.eth0.hwaddr: 10:66:6a:44:24:f7
  volatile.eth0.last_state.ip_addresses: 10.4.0.7,fd42:7c4a:d986:bae0:1266:6aff:fe44:24f7
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 31080aff-a8bb-42bd-aa31-4efa59b4bd90
  volatile.uuid.generation: 31080aff-a8bb-42bd-aa31-4efa59b4bd90
devices:
  eth0:
    ipv4.address: 10.4.0.7
    network: private
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: Demo Instance

Network

# incus network show private
config:
  bridge.mtu: "1442"
  ipv4.address: 10.4.0.1/22
  ipv4.dhcp.routes: 0.0.0.0/0,10.4.0.1
  ipv4.nat: "true"
  ipv6.address: fd42:7c4a:d986:bae0::1/64
  ipv6.nat: "true"
  network: UPLINK
  security.acls: protected
  volatile.network.ipv4.address: 192.168.1.16
description: ""
name: private
type: ovn
used_by:
- /1.0/instances/demo
- /1.0/instances/matrix-linux
- /1.0/instances/pypi
- /1.0/instances/taiga
- /1.0/instances/uptime-kuma
- /1.0/instances/vcheck
- /1.0/instances/verdaccio
- /1.0/instances/zerodocs
- /1.0/profiles/default
managed: true
status: Created
locations:
- core
- grok
- worf
project: default

UPLINK

# incus network show UPLINK
config:
  dns.nameservers: 8.8.8.8,1.1.1.1
  ipv4.gateway: 192.168.1.254/24
  ipv4.ovn.ranges: 192.168.1.16-192.168.1.63
  volatile.last_state.created: "false"
description: ""
name: UPLINK
type: physical
used_by:
- /1.0/networks/private
managed: true
status: Created
locations:
- core
- grok
- worf
project: default

Uplink is sat on “br1”

br1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.1  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fe80::6e6e:7ff:fe16:a2f7  prefixlen 64  scopeid 0x20<link>
        inet6 2a00:23c7:3c21:cc01:6e6e:7ff:fe16:a2f7  prefixlen 64  scopeid 0x0<global>
        ether 6c:6e:07:16:a2:f7  txqueuelen 1000  (Ethernet)
        RX packets 469308  bytes 42050771 (40.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12713483  bytes 3138466622 (2.9 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Mm, so by trial and error (probing with ping -M do, roughly as sketched below) I seem to have three MTU values that look to be significant.

  • Cluster → outgoing traffic, local instances: 1376
  • Cluster → outgoing traffic, remote instances (Hetzner, other side of the IC): 1442
  • Inter-cluster traffic (over the IC): 1262
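The probing boils down to something like this loop (the target address and size range are placeholders):

# walk the ICMP payload size down until a DF-flagged ping gets a reply;
# the working path MTU is then payload + 28 (20-byte IPv4 + 8-byte ICMP header)
TARGET=1.1.1.1
for size in $(seq 1414 -1 1200); do
    if ping -c1 -W1 -M do -s "$size" "$TARGET" >/dev/null 2>&1; then
        echo "largest working payload: $size (path MTU $((size + 28)))"
        break
    fi
done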

By setting the MTU for the OVN network "private" to 1376, outgoing traffic is fine. If I could set the MTU on the private network on the Hetzner end (other side of the IC) to 1262, I'm pretty sure it would all work … but Incus seems to limit the minimum MTU to 1280 … so I seem to be stuck again …
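For the record, the command that gets rejected (for being below the 1280 floor) is just:

incus network set private bridge.mtu=1262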

(I've just revisited all my instances; setting the MTU to 1262 rather than 1200 works for all of them.)

Is there a reason for the 1280 MTU limitation? 1262 does seem to work (and fix the issue) when set at the instance level.

Local Instance

# ifconfig eth0 | grep mtu
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
# ping -c1 1.1.1.1 -s 1349 -M do
PING 1.1.1.1 (1.1.1.1) 1349(1377) bytes of data.
ping: local error: message too long, mtu=1376
# ping -c1 1.1.1.1 -s 1348 -M do
PING 1.1.1.1 (1.1.1.1) 1348(1376) bytes of data.
1356 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=11.0 ms
# curl https://linux.uk >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  174k    0  174k    0     0   276k      0 --:--:-- --:--:-- --:--:--  275k
#
# Setting a higher MTU results in packets being swallowed silently
#

Remote Instance

# ifconfig eth0|grep mtu
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1442
# ping -c1 -M do 1.1.1.1 -s 1414
PING 1.1.1.1 (1.1.1.1) 1414(1442) bytes of data.
1422 bytes from 1.1.1.1: icmp_seq=1 ttl=54 time=15.2 ms
# ping -c1 -M do 1.1.1.1 -s 1415
PING 1.1.1.1 (1.1.1.1) 1415(1443) bytes of data.
ping: local error: message too long, mtu=1442
#
# Setting a higher MTU results in packets being swallowed silently
#

Same process from container to container yields 1262. With the MTU set to 1262 on either container, SSH between the two addresses works fine. At 1263 it just hangs.

At this point I'm not sure "why" this end runs at 1376 when the far end runs at 1442. The only difference I can see is that this end is behind a NAT router, but I don't see how that could add anything to the packet header in terms of size or encapsulation.

1280 is the minimum MTU allowed for a network to support IPv6, so you should really never go below that, or things may start to subtly break due to a variety of software and providers assuming that this is the de facto minimum MTU on the internet.

The first thing I’d verify is that communication between hosts on each side of the IC has adequate PMTU handling.

So basically ping from a host on site A to a host on site B with -M do and confirm that you can get right up to the MTU for that link and that exceeding it gets you a "please-fragment" back.
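A minimal version of that check, run from a host on site A (the MTU value and target address are placeholders):

LINK_MTU=1362                # whatever MTU the inter-site path is supposed to carry
REMOTE_HOST=192.0.2.1        # replace with a host on site B
ping -c1 -M do -s $((LINK_MTU - 28)) "$REMOTE_HOST"   # should get a reply (payload = MTU - 20 IPv4 - 8 ICMP)
ping -c1 -M do -s $((LINK_MTU - 27)) "$REMOTE_HOST"   # should come back with "message too long" / fragmentation needed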

If that’s the case, then I’d normally expect OVN to get that itself when it attempts to send a packet that’s too large over the IC and then issue a please-fragment of its own so the tunnelled traffic is forced to go with a lower MTU.

Mmm. Looks like I’ve done something nasty as the far end is no longer issuing IP addresses to instances. Although it seems like it’s pretty much working, I seem to have broken something fundamental.

Nah. I can no longer get any IPs issued at the far end. All I was changing was MTUs. Don't see any logging. Lost.

Ok, so for some reason, dnsmasq is no longer firing up on the OVN network. I created another bridge, which seems to work Ok. Instances get issued IPs happily from that … but the OVN network just seems to have stopped actually firing up a dnsmasq process.

There is no dnsmasq process for OVN networks. OVN northd handles the DHCP bits through OVN flow rules.
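If you want to see what OVN has configured, the DHCP settings live in the northbound database rather than in any process; something like this shows them (the port name is the incus-netX-instance-… one from the logs):

ovn-nbctl list dhcp_options
ovn-nbctl lsp-get-dhcpv4-options <logical-switch-port-name>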

Ok, that helps a lot, looking at the northd logs;

INFO|Assigned dynamic IPv4 address '10.103.0.2' to port 'incus-net9-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:1d04:2037:5682:1266:6aff:fec4:1aea' to port 'incus-net9-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:1d04:2037:5682:1266:6aff:fec4:1aea' to port 'incus-net9-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
WARN|6fe13295-4a28-4bb2-ae63-063de5ede8d5: Duplicate IP set: 10.103.0.2

At first glance this would "appear" to be the issue … again, my first thought is that something is out of sync, but I've rebooted many times now … I'm guessing northd must have a leases file somewhere that's unhappy … (?)

Hmm. Looking at the ovn controller log;

INFO|incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1: Claiming 10:66:6a:6c:04:e7 10.102.0.2 fd42:dbe2:feff:2249:1266:6aff:fe6c:4e7

Which looks Ok, then;

# incus config show new
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Debian bookworm arm64 (20250524_05:29)
  image.os: Debian
  image.release: bookworm
  image.serial: "20250524_05:29"
  image.type: squashfs
  image.variant: cloud
  volatile.base_image: 129fc4635d66c1515766e57fc8dfaf9be74b0ae8fe1e80c34cbe450d4822e1f7
  volatile.cloud-init.instance-id: ac5a1323-c601-4637-8747-1f71817df329
  volatile.eth-1.host_name: veth6641320c
  volatile.eth-1.hwaddr: 10:66:6a:6c:04:e7
  volatile.eth-1.last_state.ip_addresses: 10.102.0.2,fd42:dbe2:feff:2249:1266:6aff:fe6c:4e7
  volatile.eth-1.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 2864624a-b097-4c59-86a5-a8d10f645aa0
  volatile.uuid.generation: 2864624a-b097-4c59-86a5-a8d10f645aa0
devices:
  eth-1:
    network: private
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: New

Which looks Ok …
So it looks like the IPv6 assignment is working, but somehow it's losing the IPv4 assignment.

# incus ls new
+------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  | IPV4 |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+------+-----------------------------------------------+-----------+-----------+
| new  | RUNNING |      | fd42:dbe2:feff:2249:1266:6aff:fe6c:4e7 (eth0) | CONTAINER | 0         |
+------+---------+------+-----------------------------------------------+-----------+-----------+

Feels like there's something subtle going on that I'm not seeing. I previously set up static addresses for some instances, but this instance is dynamic, and I've changed the range for "private" to a new class C so as to avoid any potential historical conflict.

# incus network show private
config:
  bridge.mtu: "1442"
  dns.nameservers: 8.8.8.8
  ipv4.address: 10.102.0.16/24
  ipv4.dhcp.routes: 0.0.0.0/0,10.103.0.1
  ipv4.nat: "true"
  ipv6.address: fd42:dbe2:feff:2249::1/64
  ipv6.nat: "true"
  network: UPLINK
  volatile.network.ipv4.address: 10.0.0.240
description: ""
name: private
type: ovn
used_by:
- /1.0/instances/demo
- /1.0/instances/new
managed: true
status: Created
locations:
- none
project: default

So I've disabled IPv6 for the moment to make things clearer.

# incus start new

OVN logs;

==> /var/log/ovn/ovn-northd.log <==
Assigned dynamic IPv4 address '10.103.0.2' to port 'incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1'

==> /var/log/ovn/ovn-controller.log <==
Claiming lport incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1 for this chassis.
incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1: Claiming 10:66:6a:6c:04:e7 dynamic
Setting lport incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1 ovn-installed in OVS
Setting lport incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1 up in Southbound
# incus ls new
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| new  | RUNNING |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+

So OVN appears to be issuing the address Ok (10.103.0.2), but it doesn't appear to be making its way to Incus …
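To rule out the wire side, one check worth doing (assuming tcpdump on the host, since the instance itself may not have any tooling) is whether a DHCP exchange actually happens on the instance's veth, e.g.:

# host-side veth name taken from volatile.eth-1.host_name above
tcpdump -ni veth6641320c 'udp port 67 or udp port 68'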

# From northd show ...
#
switch ebfc72f5-ac5e-4486-bac7-62ac4a8033da (incus-net17-ls-int)
    port incus-net17-ls-int-lsp-router
        type: router
        router-port: incus-net17-lr-lrp-int
    port incus-net17-instance-2864624a-b097-4c59-86a5-a8d10f645aa0-eth-1
        addresses: ["10:66:6a:6c:04:e7 10.103.0.2"]

Switch seems to be getting the IP …
If I set 10.103.0.2 manually in the instance, I can’t ping 10.103.0.1, which I would expect to be able to do, so maybe whatever is setting up the network is failing?

Ok, this is a little worrying because it looks like it's "not me" … I nuked the remote node's OVN/OVS setup and redeployed. It's all automated and reproducible, so there is limited scope for human error. After the remote deploy it worked immediately … the logging from OVN is a little different.

In particular, OVN is now doing a DHCPOFFER/ACK, which it wasn't before … so it's now looking like something made its way into the OVN config that was breaking its DHCP. Not sure why it's complaining about a duplicate IP, or logging lots of duplicate messages re: IPv6, but it's now looking Ok and pings over the IC.

I suspect it was something to do with trying to set static IPs up on instances … which I was trying to do so I could proxy bind on the host … will revert back to playing with the MTU for now … but something tells me this is going to happen again …

==> /var/log/ovn/ovn-northd.log <==
INFO|Assigned dynamic IPv4 address '10.103.0.2' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
WARN|788e46b6-4b63-41d5-b6aa-2cad61040747: Duplicate IP set: 10.103.0.2
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'

==> /var/log/ovn/ovn-controller.log <==
INFO|Claiming lport incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1 for this chassis.
INFO|incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1: Claiming 10:66:6a:32:92:1b dynamic

==> /var/log/ovn/ovn-northd.log <==
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'

==> /var/log/ovn/ovn-controller.log <==
INFO|Setting lport incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1 ovn-installed in OVS
INFO|Setting lport incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1 up in Southbound

==> /var/log/ovn/ovn-northd.log <==
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'
INFO|Assigned dynamic IPv6 address 'fd42:52a2:4c:32c9:1266:6aff:fe32:921b' to port 'incus-net19-instance-387c60bc-239f-480f-9cb4-c9ddcfa078cd-eth-1'

==> /var/log/ovn/ovn-controller.log <==
INFO|DHCPOFFER 10:66:6a:32:92:1b 10.103.0.2
INFO|DHCPACK 10:66:6a:32:92:1b 10.103.0.2

So I seem to have solved the initial issue … many thanks for your help. It would seem that it's an exercise in MTU tuning, which isn't something I've had to do before. My issue looks to be very specific to the VPN software I'm using ("tinc").

The fix in this instance is as follows;

  • ensure each tinc hosts entry includes
    PMTUDiscovery=yes
    ClampMSS=yes
  • set the OVN network MTU to be 1370

Edit

This solution looks like it was a little premature. While it initially worked, after a little time tinc recalculated its MTU and went down to 1309, which broke everything … so whatever tinc is doing dynamically seems undesirable. As an alternative, this seems to work so far;

  • PMTUDiscovery=no
  • PMTU=1500
  • ClampMSS=yes
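For completeness, in my case that translates to something like this in each tinc hosts file (exact paths and node names obviously depend on the deployment), plus pinning the OVN network MTU:

# /etc/tinc/<netname>/hosts/<node>
PMTUDiscovery = no
PMTU = 1500
ClampMSS = yes

# and on the Incus side:
incus network set private bridge.mtu=1370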

The actual number is a little over 1370, and differs by 1 depending on which direction you ping, so 1370 seems like reasonable common ground. From what I can see the 1370 is derived from the OVN IC encapsulation + the tinc VPN encapsulation.

If anyone's interested, to get this far I've written a deployment script (Python) which I guess does a similar sort of thing to incus-deploy, but is probably rather more opinionated and designed specifically to cope with a cluster + edge node over IC, hence it includes tinc deployment etc. Trying to do all the SSL and IC setup by hand proved to be an impossible task for me, let alone the number of times I've had to scratch and redeploy for all sorts of reasons :slight_smile:

This is my example deployment model;

{
    "name": "live",
    "desc": "My Live Cluster",
    "pki": {
        "host": "pki",
        "cacert": "/var/lib/openvswitch/pki/switchca/cacert.pem",
        "cert": "/etc/ssl/pki/sc-cert.pem",
        "key": "/etc/ssl/pki/sc-privkey.pem"
    },
    "trunk": {
        "port": "660",
        "cidr": "192.168.234.0/24",
        "name": "uplink"      
    },
    "firewall": {
        "type": "firehol",
        "private": "br1",
        "public": "br0",
        "uplink": "uplink"
    },
    "nodes": {
        "grok": {
            "address": "192.168.2.4",
            "icaddress": "192.168.234.14",
            "bridge": "br1"
        },
        "core": {
            "address": "192.168.2.1",
            "icaddress": "192.168.234.10",
            "bridge": "br1"
        },
        "worf": {
            "address": "192.168.2.5",
            "icaddress": "192.168.234.15",
            "bridge": "br1"
        },
        "gw1": {
            "address": "10.0.0.2",
            "icaddress": "192.168.234.4",
            "core_ic": "False",
            "listen": "192.168.234.4",
            "bridge": "br1",
            "public": "(redacted public IP)"
        }        
    },
    "clusters": {
        "az_local": {
            "networks": [
                {
                    "name": "private",
                    "cidr": "10.4.0.1/22",
                    "mtu": "1370"
                }
            ],
            "range": "192.168.1.16-192.168.1.63",
            "gateway": "192.168.1.254/24",
            "nodes": ["core", "grok", "worf"]
        },
        "az_gw1": {
            "networks": [
                {
                    "name": "private",
                    "cidr": "10.103.0.1/24",
                    "mtu": "1370"
                }
            ],
            "range": "10.0.0.240-10.0.0.247",
            "gateway": "10.0.0.2/16",
            "nodes": ["gw1"],
            "nameserver": "8.8.8.8"
        }        
    },
    "interconnects": {
        "hetzner": {
            "src": "az_local",
            "dst": "az_gw1"
        }
    }
}

So assuming I have a working Incus cluster and a working edge cluster, and ssh access to both;

./cit.py --trunk hetzner

Will deploy a meshed VPN across all nodes (if needed); if connectivity is already in place, it's skipped.

./cit.py --deploy hetzner

Will hopefully layer all the required OVN and IC stuff on top: create the uplink and the OVN network, set the MTU, and handle all the other details needed to make it work. Well, so far, "works on my machine" :wink: