Looking for ideas on using Incus for a homelab

I still think a native storage driver for Linstor would be interesting to get down the line.
There’s also been similar interest around Gluster.

I’m currently doing some work to support LVM on top of a shared block volume. That work is cleaning up a bunch of assumptions Incus makes about remote filesystems, and it should make it even easier to add other drivers down the line.

That said, for anything we add support for, we’ll want there to be a solid community around it, stable releases, good security handling, …

2 Likes

That’s the one. And a comprehensive video should mention the second one as well.

On the Podman side of things, this might not have been the right place to even start that discussion. But both are run inside Incus in the same way, and talking about both in the same light has its merits. With that discussion out of the way, though, focusing on Docker has broader crowd interest in general.

For homelab users, it would be interesting to know whether the following are doable in Incus:

  • Home Assistant, Zigbee2Mqtt, OpenHAB and the likes
  • Apprise for notifications, Matrix (Synapse, Dendrite, Element), Jitsi Meet
  • Minecraft servers!?
  • Password managers and note taking apps
  • Guacamole for remote access
  • Wikis

An emphasis on IncusUI would also be helpful.

1 Like

Any of these are doable. Anything that is served through a web browser is OK. With Incus, your instances

  1. can have any type of network access (protected on a private bridge, appearing on the LAN, exposed to the Internet).
  2. can have access to other instances (one instance is the service, another is the MySQL server, as if you have multiple servers)
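As a minimal sketch of the second point (instance names and image choice are placeholders), you can launch a web instance and a separate database instance on the same private bridge, and expose only the web instance to the outside via a proxy device:

```shell
# Two instances on the default private bridge: one serves the app,
# the other runs the database, as if they were separate servers.
incus launch images:debian/12 web
incus launch images:debian/12 db

# Expose only the web instance: forward host port 80 to the
# instance's port 80 with a proxy device.
incus config device add web http proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
```

The db instance stays reachable only from other instances on the bridge, which gives you the “protected on a private bridge” isolation mentioned above.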

IncusUI currently replicates well the command-line incus tool.

At some point in the future I expect it will be possible to set up any of these through a UI like IncusUI. But how do we get there? I think the first stage would be to figure out the steps to do these manually.

At home we are already running all services on Incus, and we even migrated the QNAP NAS to boot Ubuntu 22 with Incus, acting as a backup sink for Incus and other ZFS datasets. The main 24/7 server is an ODROID-M1 with mirrored SSDs, consuming < 10 W, running Armbian and Incus on ZFS with the following containers:

  • samba4 active directory
  • samba4 fileservers
  • nextcloud
  • nginx as reverse proxy
  • tvheadend to stream and record from SAT using a DIGIBIT Twin

Another system at Hetzner, with more memory and CPU power, is running:

  • Joplin (sync shared notes and webclippings)
  • Zimbra (groupware)
  • jitsi-meet (web meetings)

In our office we recently managed to successfully migrate all VMs from ESXi to Incus containers, among others:

  • Grav CMS
  • Redmine (project management with integrated tickets, wiki, git repos)
  • Jenkins
  • Artifactory
  • Alfresco DMS
  • EspoCRM

as VMs:

  • pfSense (any box connected directly to the internet should have something like pfSense in front, so as not to expose containers directly)
  • zulip chat
  • Win10 and Win11 Testsystems

On the company servers we also use custom, independent ZFS pools mounted into the containers, and we replicate the datasets via zrepl to other locations.

The migration of the Windows VMs was an unexpectedly steep learning curve, but now it’s really great to just clone a snapshot in seconds and delete it after testing. This is much easier and faster than it was in VMware! What I don’t want to miss anymore is the ZFS backend under the hood.
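The clone-and-throw-away workflow described above can be sketched like this (instance and snapshot names are placeholders):

```shell
# Clone a clean snapshot of a Windows VM into a fresh test instance.
# On a ZFS pool this is a cheap copy-on-write clone, hence "seconds".
incus copy win10/clean win10-test
incus start win10-test

# ... run the tests ...

# Tear the clone down again; -f stops and deletes in one step.
incus delete -f win10-test
```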

I could help create and maintain documentation on how to set these systems up, but we should focus on the Incus specifics, since we don’t want to replicate existing how-tos and maintain them.

What I’m still looking for are tools and how-tos to create and manage images via simplestreams. If we could create such a howto, it might promote the exchange of Incus container images for specific use cases!

3 Likes

Hi,
Stable Diffusion, PrivateGPT and Ollama servers can be added.
Regards.

1 Like

I am running Home Assistant Operating System as an incus VM and it works very well.

1 Like

How do you get ZFS? Do you use Ubuntu or some other distribution?
Docker benefits from ZFS 2.2

Do you do that (clone the snapshot and launch) through Incus UI or the CLI?
There’s opportunity here for some dedicated tool.

That’s right.

I’ve put together some of those topics in the Incus management section in the top post.

My understanding is that a simplestreams service is currently available as part of Incus. There’s no standalone service, though it should be easy to make one. Are you referring to tasks such as mirroring a remote? incus image copy would copy specific images.

On the ODROID I’m running Armbian 23 and had to compile the ZFS module, which happens implicitly when installing the DKMS package:

apt install -y zfsutils-linux zfs-dkms zfs-zed

which then installs:
zfs-2.1.9-2ubuntu1
zfs-kmod-2.1.9-2ubuntu1
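A quick sanity check after the DKMS build (a sketch; exact version strings will vary by distribution) is to confirm that the kernel module and the userland tools match:

```shell
# Check the version of the DKMS-built kernel module:
modinfo zfs | grep -i '^version'

# 'zfs version' prints both userland (zfs-x.y.z) and kernel
# (zfs-kmod-x.y.z) versions; they should agree.
zfs version
```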

On all the other systems I use Ubuntu 22.04.

I’m doing everything via the CLI, although I also installed incus-ui.

As far as I know, Incus can only consume simplestreams servers registered as remote image repositories. You could also expose your Incus server and register it as a remote on other systems, but that is a bit too wide open, not simplestreams, and not that easy to share.

I would like to prepare, configure and test the images in Incus instead of creating the image outside, then export that image to be published on a simplestreams server (e.g. nginx). The missing bits of the somewhat broken workflow are how to automate:

  • splitting the exported image tarball
  • generating the JSON files required by the simplestreams protocol, and
  • publishing all the artefacts to a public web server, possibly secured by basic auth at the artefact level
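The Incus half of that workflow already works today; a sketch of what I mean (instance, snapshot and alias names are placeholders):

```shell
# Snapshot a prepared "master" instance and publish it as an image:
incus snapshot create master clean
incus publish master/clean --alias myimage

# Export writes the image tarball(s) to ./out:
incus image export myimage ./out/myimage
```

Generating the simplestreams JSON index around those exported files, and uploading them to the web server, is exactly the part that still needs tooling.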

If we could configure Incus in a way that a user with access to a specific project has only read access to the images, without being able to do anything else, that would also be fine. In that case we would have to automate getting external user certificates trusted without manual interaction.
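A rough sketch of the project-scoped access part, using restricted client certificates (project and client names are placeholders; whether the permissions can be narrowed all the way down to read-only image access would need to be checked against current Incus authorization features):

```shell
# A dedicated project to hold the shared images:
incus project create shared-images

# Add a trust entry restricted to that project; this prints a token
# the external user can redeem when adding the server as a remote.
incus config trust add colleague --projects shared-images --restricted
```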

The goal is to allow people to maintain specific images with configured use cases and export them as public images, updated regularly. Maybe I’m on the wrong road and that kind of image should be built outside of Incus using tools like distrobuilder? I like being able to create the “template” on a running system and automate cleanup and image build from that “master” system. That’s what we did in the past for OVAs: only 10–20 consumers, but a lot of work to test and maintain.

1 Like

Sometimes in a homelab we need to compile certain packages or patches; I had to do it for a package recently. Incus makes it easy to just spin up a container or VM, compile or cross-compile, and move on. This could be a use case to list.

One more plus is that it’s possible to spin up an aarch64 on an amd64 Incus. So, if there is something aarch64-specific, it’s already covered by Incus.

And could you please consider adding under ?

  • firewall: OPNsense, VyOS and IPFire
  • collaboration: Overleaf
  • VPN: Headscale
  • DNS: bind9
  • web server: httpd(apache2)
  • Incus management: Terraform, OpenTofu
1 Like

I use that a lot; the main point is that you don’t want to install development packages on the host, but keep it as clean as possible. Put anything dev-related into instances.
Also, --ephemeral is convenient, so that the instance gets deleted as soon as you stop it.
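A typical throwaway build container along those lines (image, instance and file names are placeholders):

```shell
# --ephemeral: the instance is deleted automatically when stopped.
incus launch images:debian/12 build --ephemeral

# Install the toolchain inside the instance, not on the host:
incus exec build -- apt-get install -y build-essential

# ... compile inside, then copy the artifact out to the host:
incus file pull build/root/myprog ./myprog

# Stopping removes the ephemeral instance and all the dev packages with it.
incus stop build
```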

How do you do that?

$ incus launch images:alpine/edge/arm64
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host
$ incus launch images:alpine/edge/arm64 --vm
Launching the instance
Error: Failed instance creation: Failed creating instance record: Requested architecture isn't supported by this host

Yes, --ephemeral is very convenient in such cases. I can imagine also using --ephemeral for throwaway instances: sandboxed browser access to retrieve links or files, testing out patches, etc.

My bad, it’s supposed to be ‘or’ and not ‘on’. Cross-architecture support is not available yet, but it can be used in a cluster. stgraber explains both of these points in the latest live stream: https://youtu.be/SiWgwJ8ZH88?t=2235
With the 0.6 release, it’s now possible to copy containers to different architectures for backup.

I’ll add a link to this part. It looks like a distrobuilder config file could automate the creation of custom images, but if the customizations are too extensive, it would be worth making images out of instances.
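The two routes differ only in their starting point (file and instance names below are placeholders):

```shell
# Route 1: build a fresh image from a distrobuilder config file:
distrobuilder build-incus debian.yaml

# Route 2: publish a hand-customized "master" instance as an image:
incus publish master --alias my-custom-image
```

Route 1 is reproducible from a text file; route 2 captures whatever state the running instance has accumulated, which suits heavier customizations.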

I found this instructional video to be super helpful and it also applies to Incus. However, I can see how newer Incus users might overlook the videos in this YouTube channel since they refer specifically to LXD and not Incus.

This one is a top priority. It has even been referenced earlier in this thread as well.
I’ll post my take on my blog, trying to bring up a different (if possible) point of view. Then, when someone consumes both sources, there will be no chance of misunderstandings or unanswered questions.

I am happy with the style of this tutorial about Windows VMs: How to run a Windows virtual machine on Incus on Linux – Mi blog lah! Initially it was a simple remake of the old LXD post. Then new important material appeared and had to be added as bonus material, and some other material was squeezed into the Troubleshooting section. This means the material has matured, and as long as there is no new material to add, it is ripe for a final rewrite.

I watched the video in detail. I have written a few posts on these before. They need an eventual update towards Incus.

Specifically,

  1. Routed subnet to private network:
  2. macvlan: How to make your LXD containers get IP addresses from your LAN using macvlan – Mi blog lah!
  3. bridged: How to make your LXD containers get IP addresses from your LAN using a bridge – Mi blog lah!
  4. proxy device: How to use the LXD Proxy Device to map ports between the host and the containers – Mi blog lah!
  5. proxy device with Unix sockets: How to manage LXD from within one of its containers – Mi blog lah!
  6. proxy device with proxy_protocol: How to use the LXD Proxy Device to map ports between the host and the containers – Mi blog lah!
  7. proxy device with isolated container: A network-isolated container in LXD – Mi blog lah!

Bonus posts on

  1. ipvlan: How to get LXD containers obtain IP from the LAN with ipvlan networking – Mi blog lah!
  2. routed: How to get LXD containers get IP from the LAN with routed network – Mi blog lah!
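The tutorials above all map to different NIC or proxy device types on the instance; as a sketch (instance name, parent interface and address are placeholders):

```shell
# macvlan: the container gets its own MAC and an IP from the LAN's DHCP.
incus config device add c1 eth0 nic nictype=macvlan parent=enp5s0

# routed: a specific LAN IP is routed to the container via the host.
incus config device add c1 eth1 nic nictype=routed parent=enp5s0 \
    ipv4.address=192.0.2.10
```

The content carries over to Incus almost unchanged: it is mainly the `lxc` command that becomes `incus`.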

Also, extended thematic tutorials:

  1. How to Set Up Multiple WordPress Sites with LXD Containers | Linode Docs
    This one shows how to set up a VPS with LXD, then launch an instance with a reverse proxy and another with the SQL server. Finally, for each customer (or group of websites) you launch an instance and put their WordPress website(s) in there. With nginx as a reverse proxy, you install and forget, because the Let’s Encrypt certificates are updated automatically. I think some members of this discussion forum are running such a setup.

I would like to work on these through GSoD; failing that, I would try with Linode/DO or whoever is still interested in new documentation.

1 Like

I think it would also be worth being up front about the homelab-related caveats of Incus. The main one that occurs to me is that, to all intents and purposes, you can’t live-migrate a Linux container. It may also be worth talking about the pluses and minuses of clustering in a homelab. For me the main one is that you have to maintain quorum, and over time I found the cluster became unhappy with members turned off for long periods (weeks, months, …) or with changing membership of that quorum, which I do for power and cost-saving reasons. These problems are shared by Proxmox, so it’s not a showstopper, but it’s worth being aware of given the current need for many to save on operational costs at home.

Other caveats/considerations/warnings: is Docker Swarm still an issue inside LXC containers?

Are VMs necessary for k8s nodes, as seems to be implied in the first discussion?

Live migration works for VMs, but not so much for containers. I have not used either; it’s easier to do the migration the manual way, especially when your instance runs on an old version of a Linux distribution and you want to refresh with something newer. In addition, if you have a WordPress site, you can easily use a plugin that backs up the whole lot (DB + files) and then import it on the new site.
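For reference, the non-live way of moving an instance between hosts looks roughly like this (remote and instance names are placeholders; for VMs, live migration additionally needs stateful migration to be enabled):

```shell
# Incremental copy to another Incus server; --refresh only transfers
# what changed since the last copy, so the final cutover is short.
incus copy myapp newserver:myapp --refresh

# Moving a (stopped) container, or a VM, to the other server:
incus move myapp newserver:myapp
```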

Are you referring to this, from the top post: “Typically you put these services in VMs”? I was trying to set the stage that for a homelab you either had all services installed on the same system (no VMs, no containers), or you would use ESXi with VMs.

Sorry for the lack of context. These topics (like my earlier posts) were suggestions for things to add to a general post/collection of posts that would support new homelab users who wanted to use Incus. For those coming from Proxmox, say, answers to such questions would be useful.

1 Like