Looking for ideas on using Incus for a homelab

There’s a very nice video from @stgraber on how to expose LXD services. It would be nice to remake that for Incus and also give some extra information on setting up a bridge, which he describes as one of the best methods but did not show how to set up on at least one distro.

1 Like

Podman indeed has benefits over Docker when running on the host machine. Once you’re isolated inside an Incus container, those benefits are mostly irrelevant, and Docker, with Compose and its extensive community, far outweighs it.

1 Like

Also, consider that Incus may at some point get support for OCI images.
I’ll leave it on the list, it looks low priority, unless there is a compelling use-case.

I think I have updated the top post with all points, up to this point (pun not intended).

I use Podman at work. One nice thing about it is that you can easily wrap containers into a systemd service. You can also manage them the same way you would on Kubernetes, which makes it easy to migrate them to a production Kubernetes cluster. With that you don’t really need Docker Compose.
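As a minimal sketch of that systemd wrapping (the container name `web`, image, and port are made up here; newer Podman versions prefer Quadlet files, but `podman generate systemd` still works):

```shell
#!/bin/sh
# Hypothetical names throughout; needs podman and a user systemd session.
wrap_in_systemd() {
  # Create a container to wrap (name "web" is arbitrary)
  podman create --name web -p 8080:80 docker.io/library/nginx:alpine

  # Generate a unit that recreates the container on each start (--new)
  mkdir -p ~/.config/systemd/user
  podman generate systemd --new --name web \
    > ~/.config/systemd/user/container-web.service

  # Manage it like any other systemd service
  systemctl --user daemon-reload
  systemctl --user enable --now container-web.service
}
command -v podman >/dev/null && wrap_in_systemd
```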

But, yeah, that is all work related stuff. Maybe not so important for homelabs.

Is this the video you were thinking of?

There is another good one on network forwards.


All the options I gave are self-hosted DNS. They mostly leverage Unbound under the hood but “the people” have spoken and they like a nice GUI. I’m afraid - and I say this as someone who actually owned the grasshopper book on DNS and BIND - the days of BIND in homelabs are over except for the diehards. All three options also do DHCP, by the way, and adblocking through dynamic lists.

Ah Docker, my old nemesis. The problem is exactly that there doesn’t seem to be just one post that tells you what to do. The next point is that the changes in ZFS 2.2 allow the use of overlay2 in a more performant way, which has never been properly documented for Incus or LXD (that I have seen), e.g. in this Reddit thread.

Lots of folks say it can now be used, and there is general carousing, but no one explains how to enable it properly in one shot. There are so many confusing options and pitfalls:

And why not cover shiftfs too :

I see some evangelists here pushing Podman, but unfortunately the whole homelab crowd is basically not interested. It’s a shame, but there it is. I would also expect that anyone wanting Podman would be capable of deploying it themselves, whereas the typical Docker crowd will require some handholding - even me, post the ZFS changes. :frowning: For example, where should the persisted data for the Docker containers be stored after you’ve configured ZFS delegation? Is it different for volumes and bind mounts (presumably not)?
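For the handholding, a hedged sketch of the usual recipe as I understand it (the names `docker-host` and `docker-data` are invented, and the `zfs.delegate` volume option assumes ZFS 2.2+ on the host):

```shell
#!/bin/sh
# Hypothetical sketch: Docker inside an Incus container on a ZFS pool.
prep_docker_host() {
  # Nesting is required to run a container runtime inside the container
  incus launch images:debian/12 docker-host -c security.nesting=true

  # Dedicated volume for /var/lib/docker; ZFS delegation lets the
  # overlay2 driver work inside the container (needs ZFS 2.2+)
  incus storage volume create default docker-data zfs.delegate=true
  incus config device add docker-host docker-disk disk \
    pool=default source=docker-data path=/var/lib/docker
}
command -v incus >/dev/null && prep_docker_host
```

With that in place, both volumes and bind mounts end up on the delegated dataset under /var/lib/docker inside the container.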

1 Like

Since Ceph was mentioned, I would like to suggest Linstor.

In my opinion, its design is simpler and more performant than Ceph in the specific context of distributed, highly available storage for LXC containers and virtual machines.

Does this describe how LINSTOR would be used in Incus? LXD with LINSTOR storage?

I still think a native storage driver for Linstor would be interesting to get down the line.
There’s also been similar interest around Gluster.

I’m currently doing some work to support LVM on top of a shared block volume. That work cleans up a bunch of assumptions Incus makes around remote filesystems and should make it even easier for other drivers to be added down the line.

That said, for anything we add support to, we’ll want there to be a solid community around it, stable releases, good security handling, …


That’s the one. And a comprehensive video should mention the second one as well.

On the Podman side of things, it might not have been the correct place to even start that discussion. But both run inside Incus in the same way, and talking about both in the same light has its merits. With that discussion out of the way, though, focusing on Docker matches the higher crowd interest in general.

For homelab users, it would be interesting to know if the following are doable in Incus:

  • Home Assistant, Zigbee2MQTT, OpenHAB and the like
  • Apprise for notifications, Matrix (Synapse, Dendrite, Element), Jitsi Meet
  • Minecraft servers !?
  • Password managers and note taking apps
  • Guacamole for remote access
  • Wikis

Some emphasis on the Incus UI would also be helpful.

1 Like

All of these are doable. Anything that is served through a web browser is OK. With Incus, your instances

  1. can have any type of network access (protected on a private bridge, appearing on the LAN, exposed to the Internet).
  2. can have access to other instances (one instance is the service, another is the MySQL server, as if you have multiple servers)
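As a sketch of point 1, exposing a single instance’s service to the LAN can be done with a network forward (the addresses below are placeholders; 192.0.2.10 stands in for a spare host-side address, 10.158.0.20 for the instance’s address on the incusbr0 bridge):

```shell
#!/bin/sh
# Hypothetical addresses and bridge name.
forward_https() {
  incus network forward create incusbr0 192.0.2.10
  incus network forward port add incusbr0 192.0.2.10 tcp 443 10.158.0.20 443
}
command -v incus >/dev/null && forward_https
```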

The Incus UI currently replicates the command-line incus tool well.

At some point in the future I expect it will be possible to set up any of these through a UI like the Incus UI. But how would we get there? I think the first stage would be to figure out the steps to do these manually.

At home we are already running all services on Incus, and we even migrated the QNAP NAS to boot Ubuntu 22 with Incus to act as a backup sink for Incus and other ZFS datasets. The main 24/7 server is an ODROID-M1 with mirrored SSDs, consuming < 10 W, running Armbian and Incus on ZFS with the following containers:

  • samba4 active directory
  • samba4 fileservers
  • nextcloud
  • nginx as reverse proxy
  • tvheadend to stream and record from SAT using a DIGIBIT Twin

Another system at Hetzner with more memory and CPU power is running:

  • Joplin (sync shared notes and webclippings)
  • Zimbra (groupware)
  • jitsi-meet (web meetings)

In our office we recently managed to successfully migrate all VMs from ESXi to Incus. As containers, among others:

  • Grav CMS
  • Redmine (project management with integrated tickets, wiki, git repos)
  • Jenkins
  • Artifactory
  • Alfresco DMS
  • EspoCRM

as VMs:

  • pfSense (any box connected directly to the internet should have something like pfSense in front, so as not to expose containers directly)
  • Zulip chat
  • Win10 and Win11 test systems

On the company servers we also use custom, independent ZFS pools mounted into the containers, and we replicate the datasets via zrepl to other locations.
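A hedged sketch of what mounting such a pool into a container can look like (pool `tank`, dataset `shared`, and the `nextcloud` container are invented names; `shift=true` takes care of the idmap):

```shell
#!/bin/sh
# Hypothetical names: pool "tank", dataset "shared", container "nextcloud".
attach_dataset() {
  zfs create -o mountpoint=/tank/shared tank/shared
  incus config device add nextcloud data disk \
    source=/tank/shared path=/srv/data shift=true
}
command -v incus >/dev/null && attach_dataset
```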

The migration of the Windows VMs was an unexpectedly steep learning curve, but now it’s really great to just clone a snapshot in seconds and delete it after testing. This is much easier and faster than before in VMware! What I don’t want to miss anymore is the ZFS backend under the hood.

I could help creating and managing documentation on how to set these systems up, but we should focus on the Incus specifics, since we don’t want to replicate existing how-tos and maintain them.

What I am still looking for are tools and how-tos to create and manage images via simplestreams. If we could create such a howto, it might promote the exchange of Incus container images for specific use cases!


Stable Diffusion, PrivateGPT, and Ollama servers can be added.

1 Like

I am running Home Assistant Operating System as an incus VM and it works very well.

1 Like

How do you get ZFS? Do you use Ubuntu or some other distribution?
Docker benefits from ZFS 2.2

Do you do that (clone the snapshot and launch) through Incus UI or the CLI?
There’s opportunity here for some dedicated tool.

That’s right.

I’ve put together some of those topics in the Incus management section in the top post.

My understanding is that a simplestreams service is currently available as part of Incus. There’s no standalone service, though it should be easy to make one. Are you referring to tasks such as mirroring a remote? incus image copy would copy specific images.

On the ODROID I’m running Armbian 23 and had to compile the ZFS module, which is done implicitly when installing the DKMS package:

apt install -y zfsutils-linux zfs-dkms zfs-zed

which then builds and installs the kernel module.

On all the other systems I use Ubuntu 22.04.

I’m doing everything in the CLI, although I also installed the Incus UI.

As far as I know, Incus can only consume simplestreams when registered as a remote image repo. You could also expose your Incus server and register it as a remote on other systems, but that is a bit too wide open, not simplestreams, and not that easy to share.

I would like to prepare, configure, and test the images in Incus instead of creating the image outside, then export that image to be published on a simplestreams server (e.g. nginx). The missing bits in the somewhat broken workflow are how to automate:

  • splitting the exported image tarball,
  • generating the JSON files required by the simplestreams protocol, and
  • publishing all the artefacts to a public web server, which may be secured by basic auth at the artefact level
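A sketch of the first two bullets, under the assumption that `incus image export` produces the tarball(s); the image name and paths are placeholders, and everything after the export is plain coreutils, so it runs anywhere:

```shell
#!/bin/sh
# Hypothetical sketch; "my-image" and the paths are placeholders.
export_image() {
  # Writes <name>.tar.xz (plus a separate metadata tarball for split images)
  incus image export my-image ./streams/images/my-image
}
command -v incus >/dev/null && export_image

# The per-artefact entries in the simplestreams JSON boil down to
# ftype + sha256 + size; a stand-in file keeps this runnable anywhere:
printf 'demo' > /tmp/artefact.tar.xz
SHA=$(sha256sum /tmp/artefact.tar.xz | cut -d' ' -f1)
SIZE=$(wc -c < /tmp/artefact.tar.xz)
printf '{"ftype": "incus.tar.xz", "sha256": "%s", "size": %s}\n' "$SHA" "$SIZE"
```

The remaining work is assembling those entries into the index and products JSON files and rsyncing the tree to the web server.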

If we could configure Incus so that a user with access to a specific project has only read access to the images, without being able to do anything else, that would also be fine. In that case we would have to automate getting external user certificates trusted without manual interaction.

The goal is to allow people to maintain specific images with configured use cases and export them as public images to be updated regularly. Maybe I’m on the wrong road and that kind of image should be built outside of Incus using tools like distrobuilder? I like the way of creating the “template” on a running system and automating cleanup and image build from that “master” system. That’s what we did in the past for OVAs, when there were only 10-20 consumers but a lot of work to test and maintain.

1 Like

Sometimes in a homelab we need to compile certain packages or patches; I had to do it for a package recently. Incus makes it easy to just spin up a container or VM, compile or cross-compile, and move on. This could be a use case to list.

One more plus is that it’s possible to spin up an aarch64 on an amd64 Incus. So if there is something aarch64-specific, it’s already covered by Incus.

And could you please consider adding under ?

  • firewall: OPNsense, VyOS and IPFire
  • collaboration: Overleaf
  • VPN: Headscale
  • DNS: bind9
  • web server: httpd (apache2)
  • Incus management: Terraform, OpenTofu
1 Like

I use that a lot; the main point is that you would not want to install development packages on the host but try to keep it as clean as possible. Put anything dev-related into instances.
Also, --ephemeral is convenient, so that the instance gets deleted as soon as you stop it.
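A sketch of that pattern for the compile use case (the image, instance name, and artefact path are placeholders):

```shell
#!/bin/sh
# Hypothetical names; --ephemeral makes Incus delete the instance on stop.
throwaway_build() {
  incus launch images:debian/12 build --ephemeral
  incus exec build -- sh -c 'apt-get update && apt-get install -y build-essential'
  # ... build inside the instance, then pull the artefact out ...
  incus file pull build/root/hello ./hello
  incus stop build   # the instance is deleted automatically
}
command -v incus >/dev/null && throwaway_build
```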

How do you do that?

$ incus launch images:alpine/edge/arm64
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host
$ incus launch images:alpine/edge/arm64 --vm
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host

Yes, --ephemeral is very convenient in such cases. I can imagine also using --ephemeral for throwaway instances, like sandboxed browser access to retrieve links or files, testing out patches, etc.

My bad, it’s supposed to be ‘or’ and not ‘on’. Cross-architecture support is not available yet, but it can be used in a cluster. stgraber explains both of these points in the latest live stream: https://youtu.be/SiWgwJ8ZH88?t=2235
With the 0.6 release, it’s now possible to copy containers to different architectures for backup.