Siloing a host-level VRF to a project-level network bridge

I’ve been too stubborn to ask for help but I’m literally pulling my hair out at this point… so any help would be greatly appreciated here:

I was excited to see one of the recent releases add support for VRFs, but I quickly ran into the limitations of the routed NIC approach, since it requires specifying a static IP per instance.
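
For context, this is roughly the per-instance routed NIC setup I was bumping into (the instance name and address are just placeholder examples) — every container needs its own static IPv4 spelled out up front:

```bash
# Routed NIC: works, but each instance needs a hand-assigned static address
incus config device add c1 eth0 nic \
    nictype=routed \
    parent=eth0 \
    ipv4.address=192.0.2.10
```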

I have since put together a script, of which I've tried 50+ variations to no avail (posted below). It automatically creates a unique VRF on the host, a new Incus project and network bridge, and then attempts to tie that bridge to the host-side VRF with a veth pair.

Ideal outcome:

I'm hoping for a solution that lets me create multiple Incus projects, each tied to an isolated VRF on the host, with newly created instances in those projects behaving exactly like they would in the default Incus project (containers get auto-assigned IPv4 addresses, have internet access, and have working DNS).
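
Concretely, if this were working I'd expect something like the following to just work inside the new project (using the test project name from my run further down; the image alias is just an example):

```bash
incus launch images:debian/12 test1 --project plswrk
incus list --project plswrk                                    # should show an auto-assigned 10.x address
incus exec test1 --project plswrk -- ping -c 3 deb.debian.org  # internet access + DNS resolution
```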

Please let me know if you see any glaring issues in the script below. Earlier versions would give me errors about networks not being found for the project, even though the networks showed up in the project's config.

```bash
#!/bin/bash
# Creates a per-project VRF on the host, an Incus project + bridge,
# and tries to tie the two together with a veth pair.
set -e

# === Inputs ===
PROJECT="$1"
TABLE_ID="$2"
EXT_IFACE="eth0"

# === Derived vars ===
BRIDGE="br-${PROJECT}"
VRF_DEV="vrf-${PROJECT:0:8}"
VETH_VRF="v${TABLE_ID}-vrf"
VETH_HOST="v${TABLE_ID}-host"
OCTET1=$((TABLE_ID % 223 + 10))
OCTET2=$((TABLE_ID % 254 + 1))
BRIDGE_SUBNET="10.${OCTET1}.${OCTET2}.1/24"
BRIDGE_NET="10.${OCTET1}.${OCTET2}.0/24"

if [ -z "$PROJECT" ] || [ -z "$TABLE_ID" ]; then
    echo "Usage: $0 <project> <table_id>"
    exit 1
fi

echo "[+] Creating Incus project: $PROJECT"
incus project create "$PROJECT" 2>/dev/null || true

echo "[+] Configuring project network restrictions"
incus project set "$PROJECT" restricted.networks.access "$BRIDGE"
incus project set "$PROJECT" restricted.devices.nic managed
incus project set "$PROJECT" restricted true

echo "[+] Switching to project: $PROJECT"
incus project switch "$PROJECT"

echo "[+] Creating Incus bridge: $BRIDGE"
incus network create "$BRIDGE" \
    ipv4.address="$BRIDGE_SUBNET" \
    ipv4.nat=false \
    ipv6.address=none 2>/dev/null || true

echo "[+] Creating Linux VRF device: $VRF_DEV (table $TABLE_ID)"
ip link show "$VRF_DEV" &>/dev/null || ip link add "$VRF_DEV" type vrf table "$TABLE_ID"
ip link set "$VRF_DEV" up

echo "[+] Binding bridge $BRIDGE to VRF $VRF_DEV"
ip link set "$BRIDGE" master "$VRF_DEV" || true

echo "[+] Creating veth pair: $VETH_VRF ↔ $VETH_HOST"
ip link show "$VETH_VRF" &>/dev/null || ip link add "$VETH_VRF" type veth peer name "$VETH_HOST"

echo "[+] Configuring VRF interface"
ip link set "$VETH_VRF" master "$VRF_DEV" || true
ip link set "$VETH_VRF" up
ip addr add "10.${OCTET1}.254.1/30" dev "$VETH_VRF" 2>/dev/null || true

echo "[+] Configuring host interface"
ip link set "$VETH_HOST" up
ip addr add "10.${OCTET1}.254.2/30" dev "$VETH_HOST" 2>/dev/null || true

echo "[+] Setting default route in VRF table $TABLE_ID"
ip route add default via "10.${OCTET1}.254.2" dev "$VETH_VRF" table "$TABLE_ID" 2>/dev/null || true

echo "[+] Setting up NAT for outbound traffic"
iptables -t nat -C POSTROUTING -s "$BRIDGE_NET" -o "$EXT_IFACE" -j MASQUERADE 2>/dev/null || \
    iptables -t nat -A POSTROUTING -s "$BRIDGE_NET" -o "$EXT_IFACE" -j MASQUERADE

echo "[+] Setting up connection tracking for VRF"
iptables -t raw -C PREROUTING -i "$VETH_HOST" -j CT --zone "$TABLE_ID" 2>/dev/null || \
    iptables -t raw -A PREROUTING -i "$VETH_HOST" -j CT --zone "$TABLE_ID"

echo "[+] Attaching $BRIDGE to default profile"
incus profile device add default eth0 nic network="$BRIDGE"

echo "[✓] Project '$PROJECT' is set up with bridge '$BRIDGE', VRF '$VRF_DEV', and subnet $BRIDGE_NET"
```
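
For what it's worth, these are the commands I've been using to sanity-check whether the bridge actually ended up associated with the project:

```bash
incus network list --project plswrk
incus project show plswrk
incus profile show default --project plswrk
```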

Also… this is the output from running the above right now:

```
virginia:~# ./vrf2.sh plswrk 12345
[+] Creating Incus project: plswrk
Project plswrk created
[+] Configuring project network restrictions
[+] Switching to project: plswrk
[+] Creating Incus bridge: br-plswrk
Network br-plswrk created
[+] Creating Linux VRF device: vrf-plswrk (table 12345)
[+] Binding bridge br-plswrk to VRF vrf-plswrk
[+] Creating veth pair: v12345-vrf ↔ v12345-host
[+] Configuring VRF interface
[+] Configuring host interface
[+] Setting default route in VRF table 12345
[+] Setting up NAT for outbound traffic
[+] Setting up connection tracking for VRF
[+] Attaching br-plswrk to default profile
Device eth0 added to default
[✓] Project 'plswrk' is set up with bridge 'br-plswrk', VRF 'vrf-plswrk', and subnet 10.90.154.0/24
```

The issue from here seems to be associating the bridge with the newly created project (plswrk). Everything I've tried so far results in new containers launched in that project throwing eth0-related device errors.
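
For reference, the failure shows up as soon as I try to launch anything in the project, roughly like this (the image alias is just whatever I had on hand):

```bash
incus launch images:debian/12 c1 --project plswrk
# fails with an eth0 device error instead of starting the container
```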

So, rather than decoding the whole script, maybe someone could look at the output and clarify which commands would need to be run to get the bridge properly tied to the project.

Thanks in advance.