Hi there!
I am fairly new to Incus and have experimented with it quite a bit by now. I'd like some feedback on my setup. My ultimate goal was to have Incus and the Incus UI accessible from the internet at any time, similar to my other existing services (Teamspeak3, etc.), which I achieved. Still pending is the HAProxy VM inside Incus, so I can also reach the VMs via DNS from the internet.
Background information: This is a home setup. My ISP rotates my public IPv4 and IPv6 addresses randomly or on a router reboot.
Here is a high-level ingress view of my setup:
The cluster is made up of three Beelink EQR6 machines (6800U, 32GB RAM, single 1TB NVMe SSD). The docker host is a self-built machine of old parts (i7 4790k, 16GB RAM, 512GB SATA SSD).
The Incus cluster is installed with incus-deploy. If I want to start over, I just reboot: the machines boot via PXE and install a fresh Ubuntu 24.04.
Since I use the integrated ACME client of Incus, I needed to put a TCP proxy (HAProxy in this case) in front of my existing Traefik. This way, Incus can request its own certificates while Traefik takes care of the other services. HAProxy can also round-robin WAN connections across the nodes, so any single machine can go down. I also spun up dex, which acts as a simple, lightweight OIDC provider for Incus.
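For reference, the SNI-based TCP split in HAProxy looks roughly like this (a minimal sketch: it assumes the Incus nodes listen on the default port 8443 and that the Traefik address is a placeholder to adjust):

```haproxy
frontend tls_in
    bind :443
    mode tcp
    # Wait for the TLS ClientHello so the SNI can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend incus_cluster if { req_ssl_sni -i incus.myrealdomain.de }
    default_backend traefik

backend incus_cluster
    mode tcp
    balance roundrobin
    # Node addresses as in the inventory; 8443 is the Incus default
    server node1 192.168.30.11:8443 check
    server node2 192.168.30.12:8443 check
    server node3 192.168.30.13:8443 check

backend traefik
    mode tcp
    # Placeholder address for the existing Traefik instance
    server traefik 192.168.20.20:443 check
```

Since HAProxy only forwards the TLS stream, Incus still terminates its own certificate, which is what lets its ACME client work behind the proxy.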
Here is my technical configuration:
incus-deploy inventory
```yaml
all:
  vars:
    incus_name: "baremetal"
    incus_release: "stable"
    ovn_name: "baremetal"
    ovn_az_name: "zone1"
    ovn_release: "distro"
  children:
    baremetal:
      vars:
        ansible_user: root
        ansible_become: false
        incus_init:
          config:
            "core.https_trusted_proxy": "192.168.20.20,192.168.30.1"
            "oidc.audience": "incus-ui"
            "oidc.claim": "email"
            "oidc.client.id": "incus-ui"
            "oidc.issuer": "https://dex.myrealdomain.de/dex"
            "oidc.scopes": "openid email profile"
            "acme.domain": "incus.myrealdomain.de"
            "acme.email": "incus@myrealdomain.de"
          network:
            LOCAL:
              type: macvlan
              local_config:
                parent: eno1
              description: Directly attach to host networking
            UPLINK:
              type: physical
              config:
                ipv4.gateway: "192.168.30.1/24"
                ipv6.gateway: "fd30:30::1/64"
                ipv4.ovn.ranges: "192.168.30.20-192.168.30.99"
                dns.nameservers: "192.168.30.1"
              local_config:
                parent: enp2s0
              description: Physical network for OVN routers
            default:
              type: ovn
              config:
                network: UPLINK
              default: true
              description: Initial OVN network
          storage:
            local:
              driver: btrfs
              description: Local storage pool
        incus_roles:
          - cluster
          - ui
        ovn_roles:
          - host
      hosts:
        node1:
          ansible_host: 192.168.30.11
          ovn_roles:
            - central
            - host
        node2:
          ansible_host: 192.168.30.12
          ovn_roles:
            - central
            - host
        node3:
          ansible_host: 192.168.30.13
          ovn_roles:
            - central
            - host
```
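The oidc.* settings need a matching static client on the dex side. Roughly like this (a sketch only; the callback path and the public/PKCE flag are assumptions and depend on how the Incus UI authenticates):

```yaml
# dex config fragment (sketch; values must match the oidc.* keys above)
issuer: https://dex.myrealdomain.de/dex
staticClients:
  - id: incus-ui
    name: "Incus UI"
    public: true      # no client secret; assumes PKCE / device flow
    redirectURIs:
      - "https://incus.myrealdomain.de/oidc/callback"  # assumed callback path
```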
I picked an IPv6 address from an fd30:: ULA range to do NAT66, since I was unable to forward a public /64 network in a way that is resistant to my ISP's address rotation.
The DNS records are updated by a cron job that calls Terraform every hour. Since I use HAProxy for load balancing anyhow, it is enough to publish the docker host's public IPv6 address; I do not need the public IPs of the individual nodes.
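The hourly update is just a crontab entry re-applying the Terraform config (paths and log file are hypothetical; the Terraform config itself is what looks up the current public address and updates the records):

```shell
# /etc/cron.d/ddns-terraform (hypothetical paths)
0 * * * * root cd /opt/terraform/dns && terraform apply -auto-approve >> /var/log/ddns.log 2>&1
```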
The Incus nodes reside in VLAN 30; my docker host has two NICs (the primary NIC and default route are in VLAN 20, the second NIC is in VLAN 30), hence the two trusted proxies.
For the OVN ranges, I picked something in the same VLAN 30 as the nodes; I am not sure whether that is correct. I also made sure the range stops before the DHCP range of that VLAN, which may not be needed either, but at least instances I spawn on the "LOCAL" network get their IP from the router's DHCP server. I even added a DHCP script there to make them resolvable by hostname via DNS, e.g. debian1.incus.lan.
For the moment I am only using btrfs, because the SSDs in the nodes are on the lower end quality-wise. Still, I do not expect high workloads on them, as I plan to use the cluster only for testing. I will most likely try to add LINSTOR to experiment with redundancy when a node goes down; I still have to figure out how to integrate that with the existing LVM, as each node has only a single physical disk. Ceph is definitely not going on these machines; I want to keep the SSDs alive as long as possible.
So please go ahead: tell me what I got totally wrong, what I could improve, and what you recommend (e.g. a second disk in the machines is also a valid recommendation). Also feel free to ask questions about further details (configs of HAProxy, dex, whatever). Hope you enjoyed reading it.

