Incus homelab guidance

Hi everyone,
I’m getting started with Incus and would love some guidance from experienced users to set up my home lab correctly.

My hardware:

  • AMD 16-core CPU

  • 64 GB RAM

  • 256 GB SSD (boot)

  • 4 TB NVMe

  • 4 TB HDD

  • Wi-Fi + 1 wired Ethernet adapter

Goals:

  • Host all Incus VMs and containers on the 4 TB NVMe, using ZFS as the storage backend.

  • Use the Incus bridge for most workloads.

  • For certain VMs, attach them directly to my home LAN through the physical NIC so they’re reachable from outside the Incus host.

  • Run a DNS/DHCP server and possibly a reverse proxy to centrally manage these Incus VMs.

I’ve read multiple blogs and documentation, but the recommendations differ quite a bit—especially around ZFS setup and how to properly bridge VMs to the external network. It’s all a bit confusing.

Looking for advice on:

  1. Best method to create and configure a ZFS storage pool on the 4 TB NVMe for Incus

  2. Recommended way to use the Incus bridge for most containers/VMs

  3. How to correctly bridge specific VMs to the physical NIC for direct LAN access

  4. Suggestions for running DNS/DHCP and a reverse proxy (e.g., OPNsense/VyOS/OpenWrt container or VM) within an Incus environment

Any best practices, examples, or step-by-step guidance would be greatly appreciated.

Thanks!

Your requirements are very similar to mine, so I can share what I did.

My host is running Debian Trixie with the Zabbly repos for the kernel, ZFS, and Incus packages. Make sure you have ZFS installed and working before you initialize Incus.

  1. Let Incus create the storage pool on the NVMe drive during initialization.
  2. Let Incus create a bridge for you during initialization.
  3. Create a bridge with NetworkManager that bridges your physical network card (see the sketch right after this list).
  4. Let your router do the DNS and DHCP tasks.
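
A minimal sketch of step 3 with NetworkManager’s nmcli, assuming the wired NIC is called enp5s0 and your router hands out addresses via DHCP (both assumptions, adjust to your hardware):

# Create the bridge and enslave the physical NIC
sudo nmcli connection add type bridge ifname br0 con-name br0 ipv4.method auto ipv6.method auto
sudo nmcli connection add type bridge-slave ifname enp5s0 master br0 con-name br0-port

# Deactivate the old wired profile (its name will likely differ on your system)
sudo nmcli connection down "Wired connection 1"
sudo nmcli connection up br0

Rebooting afterwards and checking that the host still gets its usual address on br0 is exactly the sanity check described a couple of paragraphs below.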

This setup works well for me. If I want something to be reachable (and resolvable) across the whole network, I launch the instance using the bridge created in step 3. Otherwise, I use the bridge that Incus created during initialization; in that case the backend services are only exposed to the host and to other instances on the same bridge.

You can create different profiles for the two bridges so that it is easier to launch the instances into the correct network.
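
For example (a sketch; the profile name lan is arbitrary and br0 is assumed to be the NetworkManager bridge from step 3):

# Clone the default profile and swap its NIC for the external bridge
incus profile copy default lan
incus profile device remove lan eth0
incus profile device add lan eth0 nic nictype=bridged parent=br0 name=eth0

# Instances launched with this profile land directly on the home LAN
incus launch images:debian/13 webvm --vm --profile lan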

It might also be a good idea to do step 3 first and make sure you can reboot your host correctly before moving on. The host should be able to connect through the bridge to your router and get a DNS entry and an IP address in the same CIDR as before the bridge was created.


Thank you — that explanation really helped and clarified exactly what I was trying to understand.

This brings me to a follow-up question:

How do you handle DNS, DHCP, and reverse-proxying for VMs/containers that are connected to the default Incus bridge?

Do you run those services inside a dedicated VM/container on the Incus bridge, or do you rely on some other approach?

Incus provides DNS and DHCP for instances on the default bridge.
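
If you also want the host to resolve those instance names, you can point systemd-resolved at the bridge’s built-in dnsmasq. A sketch, assuming the default bridge is incusbr0 and the host runs systemd-resolved:

# Check the bridge address and DNS domain Incus is using
incus network get incusbr0 ipv4.address
incus network get incusbr0 dns.domain   # empty means the default domain "incus"

# Send *.incus queries to the bridge's dnsmasq
resolvectl dns incusbr0 10.190.65.1     # use your bridge's actual IPv4 address
resolvectl domain incusbr0 '~incus'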

Once you have an instance running on this bridge, you can expose it using Incus network forwards. I generally don’t do this, though. I usually use Caddy for all my reverse-proxy needs; it takes care of TLS, SSO, and logging for me. For this to work I also run Step CA and Authelia. Most of these services run on a different, always-on Incus host.
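
For completeness, exposing an instance on the default bridge with a network forward looks roughly like this (a sketch; the listen address must be an address on the host, and both addresses plus the ports are made up):

# Forward ports 80/443 arriving on the host's LAN address to an instance on incusbr0
incus network forward create incusbr0 192.168.1.10
incus network forward port add incusbr0 192.168.1.10 tcp 80,443 10.190.65.20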

If you like you can add a Caddy server to both networks. I just put things that need to be exposed on the second bridge.

I find this system easier to manage than the alternatives. At one point in the past, I was also running my own DNS servers and had them connected to Incus. When I rebuilt my setup for Trixie I abandoned that solution since it seemed like too much work.


Another option to consider is IncusOS, which reached stable a week ago. I have nothing but good things to say about it, based on experimenting with it since then on hardware similar to yours.

If you have a TPM and want to go all in on Incus, I’d recommend at least giving it a look.

If you do, setup wise I would recommend:

  1. Use the 256 GB SSD as your boot drive. The installer will set it up with the partitions needed for the base OS and Secure Boot.
  2. Provision the 4 TB NVMe as a separate ZFS pool and use it for all container/VM storage.
  3. Set up the 4 TB HDD as a third pool for anything non-latency-sensitive (custom volumes with media, backups, etc.). If and when you have the interest / need (or let’s be honest here, it’s a homelab, so more likely want) you could also expand that out with a mirror HDD and potentially an SSD (preferably mirrored) special vdev, depending on what your workloads look like.

All of this can be done through the Incus CLI.
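
Roughly like this (a sketch; the pool names are arbitrary and the device paths are assumptions, so check what your system reports first):

# ZFS pool on the NVMe for instances
incus storage create nvme zfs source=/dev/nvme1n1

# Optionally point new instance root disks at it
incus profile device set default root pool=nvme

# ZFS pool on the HDD for bulky, non-latency-sensitive custom volumes
incus storage create hdd zfs source=/dev/sda
incus storage volume create hdd media size=2TiB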

Network-wise, there are some good pointers in this thread too about recommended ways to bridge to external networks.


I have different bridges for different sets of services. I would also recommend attaching ACL rules (see Network Isolation by Project on Single Server Incus Host) so the bridges can’t talk to each other, which they can by default. For requesting a LAN IP directly, you can use the macvlan network type: sudo incus network create macvlan --type=macvlan parent=<host-interface>, and then use --network macvlan on any instance you create.

For the reverse proxy, I have nginx on the Incus host itself for maximum performance, and it then routes to any bridge I want. If you configure ACLs, make sure to also allow ingress from the bridge’s gateway IP so the host can communicate with the bridge (sketched below).
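
A rough sketch of such an ACL, assuming the bridge is incusbr0 with gateway 10.190.65.1 (both assumptions):

# Default-deny ACL that still lets the host (the bridge gateway) reach instances
incus network acl create isolated
incus network acl rule add isolated ingress source=10.190.65.1/32 action=allow
incus network acl rule add isolated egress destination=10.190.65.1/32 action=allow

# Apply it to the bridge and reject anything the ACL doesn't match
incus network set incusbr0 security.acls=isolated
incus network set incusbr0 security.acls.default.ingress.action=reject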


There is a caveat with macvlans. The host and instances will not be able to communicate with each other.

I’ve heard good things about Caddy, so I’ll definitely explore it. If it can handle DNS resolution, TLS, reverse proxying, and SSO, that would be a great option.

Regarding Incus’ native DNS and DHCP features, I wasn’t able to find clear documentation. Do you have any reference links or guides that explain how they work?

Slightly off-topic, but I’m currently using Ubuntu 24 as my Incus host — is there any particular advantage to running Debian Trixie instead of Ubuntu?

Thanks! I’ll definitely look into IncusOS. For now though, I’d prefer to stick with a traditional base OS like Ubuntu, Debian, or Rocky.

You can use Ubuntu if you like. I stopped using it after Canonical took over LXD.

Ubuntu is notorious for making bad changes all the time. I have been using Arch as a base and it works well; no problems faced yet.

I’m currently in the process of migrating the n-th iteration of my home lab/system to IncusOS, so please take all of the below with a grain of salt (I transitioned through Ubuntu => Debian => NixOS and am now finally planning IncusOS as the host system, previously with a wild mixture of KVM/virsh, Docker and LXD).

  • Best method to create and configure a ZFS storage pool on the 4 TB NVMe for Incus

A single NVMe can only work as a single-disk stripe (think RAID0), so it is not going to give you redundancy. If you have enough RAM, consider ZFS deduplication; for certain workloads, compression can work wonders. Since you lack redundancy, a rock-solid backup process (!) is key: manually send/receive ZFS snapshots to remote devices, use syncoid as the tool, and automate it.
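
A minimal sketch of what that can look like, assuming the Incus pool’s dataset is tank/incus and there is a reachable machine called backup-host with a pool named backup (all of these names are assumptions):

# One-off: snapshot the dataset and send it to the remote pool over SSH
zfs snapshot -r tank/incus@backup1
zfs send -R tank/incus@backup1 | ssh backup-host zfs receive -u backup/incus

# Ongoing: let syncoid handle incremental replication (pair with sanoid for snapshot schedules), then automate it
syncoid -r tank/incus root@backup-host:backup/incus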

IncusOS: I’m planning to let IncusOS handle my ZFS RAIDZ2, but I’m lacking a good backup process, which is one piece holding me back.

  • Recommended way to use the Incus bridge for most containers/VMs

I always had a “br0” created from outside Incus (as statically as the respective host systems allowed me) and attached my containers/VMs to it directly. This helped ensure I had at least a crystal-clear understanding of this part of the network.

IncusOS (I know you said Incus, not IncusOS, but this is what I’m working towards): Every NIC is created as a bridge by the OS, so my eno1 is a bridge and I just attach to it.

  • How to correctly bridge specific VMs to the physical NIC for direct LAN access

For me in IncusOS this is

config: {}
description: Incus bridged profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: eno1
    type: nic

For you this could be the same, with the exception of eno1 being br0 (or whatever you use).

  • Suggestions for running DNS/DHCP and a reverse proxy (e.g., OPNsense/VyOS/OpenWrt container or VM) within an Incus environment

This is probably my main point. I would strongly recommend a solution such as pfSense (CE?) or any comparable tool (there are plenty!) that allows you to conveniently manage your VLANs, DHCP servers, DNS, NTP, whatever… but most importantly your firewall rules.

I’m using pfSense in a VM running on my home server, with a single gigabit NIC attached via PCI passthrough. On this NIC I run a number of VLANs. Attached to this is a managed switch that lets me direct the VLANs to its different ports (internal, testing, DMZ, my uplinks, etc.).
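
In plain Incus terms, handing a dedicated NIC to such a firewall VM can be done with a physical NIC device; a sketch, assuming an existing VM named fwvm and a spare interface enp6s0 (passing the whole card through as a PCI device is the other option):

# Give the VM exclusive use of the second NIC while it runs
incus config device add fwvm wan nic nictype=physical parent=enp6s0
incus restart fwvm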

I use a second pfSense VM connected to the same switch, tied together with the first using XMLRPC to sync config, virtual IPs, etc.

In your case you said you have only one NIC, but prior to having a second NIC I used a USB NIC for passthrough, and it worked pretty decently as well!

Network configuration has the potential to grow ridiculously complex over time, so every little bit of help will make your life easier.

IncusOS: That’s another part I have not fully figured out, but I’d hope to get it working just the same (with PCI passthrough of one of my GbE NICs).

Sorry, might be slightly off topic, but maybe it helps…
