Looking for ideas on using Incus for a homelab

Stable diffusion, privategpt, ollama servers can be added.


I am running Home Assistant Operating System as an incus VM and it works very well.


How do you get ZFS? Do you use Ubuntu or some other distribution?
Docker benefits from ZFS 2.2, which adds overlayfs support so Docker’s overlay2 storage driver can run directly on ZFS.

Do you do that (clone the snapshot and launch) through Incus UI or the CLI?
There’s opportunity here for some dedicated tool.

That’s right.

I’ve put together some of those topics in the Incus management section in the top post.

My understanding is that a simplestreams service is currently available as part of Incus. There’s no standalone service, though it should be easy to make one. Are you referring to tasks such as mirroring a remote? incus image copy would copy specific images.

On the ODROID I’m running Armbian 23 and had to compile the ZFS module, which happens implicitly when installing the DKMS package:

apt install -y zfsutils-linux zfs-dkms zfs-zed

which then builds and installs the kernel module via DKMS.

On all the other systems I use Ubuntu 22.04.
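Once the module is loaded, the pool can be handed to Incus as a storage pool. A minimal sketch (pool and dataset names here are assumptions):

```shell
# Back an Incus storage pool with an existing zpool dataset
incus storage create zfspool zfs source=tank/incus

# Or, for testing, let Incus create a loop-backed pool instead
incus storage create testpool zfs size=20GiB
```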

I’m doing everything in CLI although I also installed the incus-ui
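For reference, the clone-and-launch looks roughly like this in the CLI (instance and snapshot names are hypothetical):

```shell
incus snapshot create web pre-test   # take a snapshot of instance "web"
incus copy web/pre-test web-test     # clone the snapshot into a new instance
incus start web-test
```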

As far as I know, Incus can only consume simplestreams when registering a remote image repository. You could also expose your Incus server and register it as a remote on other systems, but that is a bit too wide open, isn’t simplestreams, and isn’t that easy to share.

I would like to prepare, configure, and test the images in Incus instead of creating the image outside, then export that image to be published on a simplestreams server (e.g. nginx). The missing bits in this workflow are how to automate:

  • splitting the exported image tarball,
  • generating the JSON files required by the simplestreams protocol, and
  • publishing all the artefacts to a public web server, which may be secured by basic auth at the artefact level.

If we could configure Incus so that a user with access to a specific project has only read access to the images and cannot do anything else, that would also be fine. In that case we have to automate getting external user certificates trusted without manual interaction.

The goal is to allow people to maintain specific images with configured use cases and export them as public images that are updated regularly. Maybe I’m on the wrong road and that kind of image should be built outside of Incus using tools like distrobuilder? I like being able to create the “template” on a running system and automate cleanup and image build from that “master” system. That’s what we did in the past for OVAs when there were only 10-20 consumers but a lot of work to test and maintain.
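To sketch the JSON-generation step, here is a minimal shell function that, given one exported image tarball, emits the per-item fragment that goes into streams/v1/images.json. The surrounding products/versions structure still has to be generated, and the file names and the incus.tar.xz ftype are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: emit a minimal simplestreams "items" fragment for one exported
# image tarball. sha256/size/path/ftype follow the streams/v1/images.json
# layout; the enclosing products/versions objects are not generated here.
make_items_entry() {
    file="$1"   # local tarball, e.g. from: incus image export <fingerprint> myimage
    path="$2"   # path relative to the web root, e.g. images/myimage.tar.xz
    sha="$(sha256sum "$file" | cut -d' ' -f1)"
    size="$(stat -c %s "$file")"
    cat <<EOF
"incus.tar.xz": {
    "ftype": "incus.tar.xz",
    "sha256": "$sha",
    "size": $size,
    "path": "$path"
}
EOF
}

# usage: make_items_entry myimage.tar.xz images/myimage.tar.xz
```

Once index.json and images.json are served under /streams/v1/ (e.g. by nginx), the mirror can be registered with: incus remote add myimages https://example.net --protocol simplestreams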


Sometimes in a homelab we need to compile certain packages or patches; I had to do it for a package recently. Incus makes it easy to just spin up a container or VM, compile or cross-compile, and move on. This could be a use case to list.

One more plus is that it’s possible to spin up an aarch64 instance on an amd64 Incus host. So, if there is something aarch64-specific, it’s already covered by Incus.

And could you please consider adding under ?

  • firewall: OPNsense, VyOS and IPFire
  • collaboration: Overleaf
  • VPN: Headscale
  • DNS: bind9
  • web server: httpd(apache2)
  • Incus management: Terraform, OpenTofu

I use that a lot; the main point is that you would not want to install development packages on the host but try to keep as clean as possible. Put anything dev-related into instances.
Also --ephemeral is convenient so that the instance gets deleted as soon as you stop it.
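As a sketch of that workflow (distribution, instance, and package names are just examples):

```shell
# Throwaway build container: deleted automatically on stop (--ephemeral)
incus launch images:debian/12 build --ephemeral
incus exec build -- apt-get update
incus exec build -- apt-get install -y build-essential

# ... compile inside the instance, then pull the result out:
incus file pull build/root/mypackage.deb .
incus stop build   # the ephemeral instance is removed here
```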

How do you do that?

$ incus launch images:alpine/edge/arm64
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host
$ incus launch images:alpine/edge/arm64 --vm
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host

Yes, --ephemeral is very convenient in such cases. I can also imagine using --ephemeral for throwaway instances: sandboxed browser access to retrieve links or files, testing out patches, etc.

My bad, it’s supposed to be ‘or’ and not ‘on’. Cross-architecture support is not available yet, but it can be used in a cluster. stgraber explains both of these points in the latest live stream: https://youtu.be/SiWgwJ8ZH88?t=2235
With the 0.6 release, it’s now possible to copy containers to hosts of a different architecture for backup.
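A sketch of that backup copy, assuming a remote named backup has already been added with incus remote add:

```shell
# Copy a container to a host of a different architecture
# (storable there for backup, though not startable)
incus copy mycontainer backup:mycontainer

# Subsequent runs can sync only the changes
incus copy mycontainer backup:mycontainer --refresh
```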

I’ll add a link to this part. It looks like a distrobuilder config file could automate the creation of custom images, but if the customizations are too extensive, it would be worth making images out of instances.

I found this instructional video to be super helpful and it also applies to Incus. However, I can see how newer Incus users might overlook the videos in this YouTube channel since they refer specifically to LXD and not Incus.

This one is a top priority. It has even been referenced earlier in this thread as well.
I’ll post my take on my blog, trying to bring up a different (if possible) point of view. Then, when someone consumes both sources, there will be zero chance of misunderstanding or unanswered questions.

I am happy with the style of this tutorial about Windows VMs, How to run a Windows virtual machine on Incus on Linux – Mi blog lah! Initially it was a simple remake of the old LXD post. Then new important material appeared and had to be added as bonus material. Some other material was squeezed into the Troubleshooting section. This means the material has matured, and as long as there is no new material to add, it is ripe for a final rewrite.

I watched the video in detail. I have written a few posts on these before. They need an eventual update towards Incus.


  1. Routed subnet to private network:
  2. macvlan: How to make your LXD containers get IP addresses from your LAN using macvlan – Mi blog lah!
  3. bridged: How to make your LXD containers get IP addresses from your LAN using a bridge – Mi blog lah!
  4. proxy device: How to use the LXD Proxy Device to map ports between the host and the containers – Mi blog lah!
  5. proxy device with Unix sockets: How to manage LXD from within one of its containers – Mi blog lah!
  6. proxy device with proxy_protocol: How to use the LXD Proxy Device to map ports between the host and the containers – Mi blog lah!
  7. proxy device with isolated container: A network-isolated container in LXD – Mi blog lah!

Bonus posts on

  1. ipvlan: How to get LXD containers obtain IP from the LAN with ipvlan networking – Mi blog lah!
  2. routed: How to get LXD containers get IP from the LAN with routed network – Mi blog lah!
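For item 4 above, the proxy device boils down to a single command; a sketch with hypothetical instance names and ports:

```shell
# Map host port 8080 to port 80 inside the container "web"
incus config device add web myport80 proxy \
    listen=tcp:0.0.0.0:8080 \
    connect=tcp:127.0.0.1:80
```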

Also, extended thematic tutorials

  1. How to Set Up Multiple WordPress Sites with LXD Containers | Linode Docs
    This one shows how to set up a VPS with LXD, then launch an instance with a reverse proxy and another one with the SQL server. Finally, for each customer (or group of websites), launch an instance and put their WordPress website(s) in it. With nginx as a reverse proxy, you install and forget, because the Let’s Encrypt certificates are renewed automatically. I think some members of this discussion forum are running such a setup.

I would like to work on these through GSoD, failing that I would try with Linode/DO or whoever is still interested in new documentation.


I think it would also be worth being up front about the homelab-related caveats with Incus. The main one that occurs to me is that, to all intents and purposes, you can’t live-migrate a Linux container. It may also be worth talking about the pluses and minuses of clustering in a homelab. For me the main one was that you have to maintain a quorum, and over time I found the cluster became unhappy having members turned off for long periods (weeks, months, …) or having the members of that quorum change, which I do for power and cost-saving reasons. These problems are shared by Proxmox, so it’s not a showstopper, but it’s worth being aware of given the current need for many to save operational costs at home.

Other caveats/considerations/warnings - is Docker swarm still an issue with LXCs?

Are VMs necessary for k8s nodes as seems to be implied in the first discussion?

Live migration works for VMs, but not so much for containers. I have not used either. It’s easier to do the migration the manual way, especially when your instance runs an old version of a Linux distribution and you want to refresh with something newer. In addition, if you have a WordPress site, you can easily use a plugin that backs up the whole lot (DB + files) and then import it on the new site.
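The manual way is essentially export and import (instance and file names are examples):

```shell
incus export myinstance myinstance-backup.tar.gz   # on the old host

# copy the tarball over, then on the new host:
incus import myinstance-backup.tar.gz
incus start myinstance
```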

Are you referring to this, “Typically you put these services in VMs”, in the top post? I was trying to set the stage that for a homelab you either had all services installed on the same system (no VMs, no containers), or you would use ESXi with VMs.

Sorry for the lack of context. These topics (like my earlier posts) were suggestions for things to add to a general post/collection of posts that would support new homelab users who wanted to use Incus. For those coming from Proxmox, say, answers to such questions would be useful.


Related to this topic is the distribution or provisioning of special purpose containers. For example, Docker provides Docker Hub, a registry of application container images. Incus might also use this approach or instead distribute scripts that completely provision an application starting with a base operating system container. These scripts might be implemented in Python, Ansible, or Terraform (OpenTofu).

Did you install Caddy in an Incus or LXD container? Did you install it using a Linux distribution package manager or as a Docker or Podman image?

I use SFTPGo for home network “cloud” file storage. It serves a purpose similar to NextCloud, but is much simpler.


I generally run Debian 12 as my host operating system. Then I install Caddy from the stable Debian apt repository.

After I get Incus and networking set up, I use Caddy as a reverse proxy to any Incus instances that need to be exposed to the open Internet.

Running some services directly on the host makes a lot of sense for my use case. Of course, a lot of other solutions would also work.

For example you could run Caddy in an Incus container on the same network as your other services and use an Incus network forward.
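A sketch of that network forward, with addresses from the documentation range standing in for real ones (bridge name and container address are assumptions):

```shell
# Forward ports 80/443 on a host address to a Caddy container on incusbr0
incus network forward create incusbr0 192.0.2.10
incus network forward port add incusbr0 192.0.2.10 tcp 80 10.158.0.5
incus network forward port add incusbr0 192.0.2.10 tcp 443 10.158.0.5
```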

I would like to replace SWAG with Caddy because SWAG is available only as a Docker container image and is designed by default to work with Docker container images at LinuxServer.io. Before I read your comment about Caddy, I was considering installing Nginx Proxy Manager, but it, too, is available only as a Docker container image whereas Caddy is available in the Debian and Ubuntu package repositories and sounds like a well designed Web server that is easier to configure than Nginx and Apache.

I will likely run Caddy directly on one of my Raspberry Pi 4s unless I can think of a good reason to run it in a container. Presently, I run several Docker containers on this RPi4, including a SWAG container, but I wish to move these applications (except SWAG) from the RPi4 to a more powerful Minisforum UM690 mini PC (AMD Ryzen 9 6900HX) that runs Proxmox and Incus. I think the RPi4 will serve well as a Caddy reverse proxy server and it will give the RPi4 something to do. :slight_smile: