Trying out Incus OS

As you may know, we’ve been busy building our own immutable Linux image to act as an ideal platform to run Incus.

You can find more details about it here: GitHub - lxc/incus-os: Immutable Linux OS to run Incus

We now have pretty reliable images that can be used to check out this work, though we strongly recommend not using them in production yet, as we don’t currently have a strict release cadence or a formal process for handling security updates.

To make it easy for folks to try out, we’ve now released a custom image generator.

It lets you create an Incus OS install or runtime image by answering a few questions and then downloading your customized image directly from our servers.

This can now be found at: https://incusos-customizer.linuxcontainers.org

If you want to try this out as an Incus VM, we’d recommend downloading an installation ISO with your Incus client certificate added to it.

You can then use that ISO with:

incus create incus-os -c limits.cpu=4 -c limits.memory=4GiB -c security.secureboot=false -d root,size=50GiB --empty --vm
incus config device add incus-os vtpm tpm
incus config device add incus-os install disk source=/PATH/TO/THE/DOWNLOAD.iso boot.priority=10
incus start incus-os
sleep 5s
incus console incus-os --type=vga

Once installed, run:

incus stop incus-os
incus config device remove incus-os install
incus start incus-os --console=vga

This will remove the install media and boot the installed system.

Once functional, you can add this server as a remote with incus remote add and start interacting with it.
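As a quick sketch of that last step (the name and address here are examples; use whatever your Incus OS instance actually got):

```shell
# Add the Incus OS machine as a remote. Since the install image already
# contains your client certificate, no extra trust setup should be needed.
incus remote add incus-os https://192.0.2.10:8443

# Make it the default remote and confirm the server responds.
incus remote switch incus-os
incus info
```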

If installing on real hardware, you’ll need to either switch Secure Boot to Setup Mode so we can perform an automatic key enrollment, or you can manually load our KEK and DB keys which are available in the EFI keys folder as DER files.

And in addition to that, all online demo sessions are now also running on Incus OS.

What distro is it based on? Any chance on having a shell? How is one supposed to troubleshoot, monitor and maintain it if there is no shell?

It’s based on Debian.
We have no plans to provide a shell on it, no.

Incus’ OpenMetrics endpoint is configured to capture all system metrics (node-exporter), so monitoring through Prometheus and Grafana works fine.
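For anyone wanting to check that endpoint by hand, a minimal sketch (the address and certificate paths are examples; the metrics endpoint normally requires a TLS client certificate trusted by the server):

```shell
# Fetch the OpenMetrics data directly from the Incus API.
curl --silent \
     --cert metrics.crt --key metrics.key \
     --insecure \
     https://192.0.2.10:8443/1.0/metrics | head -n 20
```

In a real setup you would point a Prometheus scrape job at the same `/1.0/metrics` path instead of using curl.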

We also have some REST APIs to fetch details about the OS itself and to configure it, which currently include an API to fetch the systemd journal. Similar to the metrics, though, the goal down the line is to have those logs sent automatically to an external log-gathering service like Loki, rather than have users hammer the API.

I vaguely recall from the development livestreams that this was using systemd’s mkosi. If so, then is it possible to use systemd-sysext to add in additional features without modifying the base image?

Well, no shell is certainly a noble goal, but I’ve been operating several dozen clusters and individual LXD server instances for years, and in my experience there is always something to do: an export fails and leaves a partial archive in the export dir, which then causes subsequent exports to fail because there is not enough space; every so often you have to rearrange ZFS pools and filesystems, swap failed disks, things like that.

Not to mention that it is often useful to look at the server as one big machine. For example, I run atop on the host to quickly identify who is using CPU or disk and then figure out the exact container later. I also have scripts that go through /proc looking for specific processes I care about and handle as a class (like JVMs, PHP processes, etc.), regardless of the container they are running in.

So, yes, it is certainly possible not to have a shell, but it is damn comfortable to have one :wink:

Yes, that’s how Incus itself is installed, and we have some more applications in the pipeline. However, sysexts in our case must be signed by the same key that signed the main image, so end users can’t just build and load their own extensions.

Yeah, it will certainly force us to fix a number of issues in Incus which folks are mostly silently working around right now :slight_smile:

We don’t expect it will be a good fit for everyone, traditional Incus installation as a package on a regular distro isn’t going anywhere!

I would actually like it to replace traditional installations, so that I don’t have to worry if my kernel/zfs and other tooling is (still) good enough to run everything. Or maybe too good, as I once upgraded zfs too early, that also didn’t end well.

On that note, how is Incus OS keeping itself up to date? Does it?

It does. It checks the image server every few hours for an update, automatically updates Incus itself, and for the OS it prepares the update and shows you a message telling you to reboot the system when convenient.

Would portable services then be a viable alternative for non-signed extensions?

No, we don’t want to allow any unsigned code to run in the environment.
It’s a very deliberate design decision, not a limitation of sysexts that we’re looking at bypassing.

Is it part of the long term goals to provide arm64/aarch64 images?

Yeah, it’s actually a medium term goal as I do have arm64 servers at home that I’d like to move to Incus OS.

We’ll probably soon add support for running on arm64 so long as it’s also booting under UEFI Secure Boot and with a TPM 2.0 module. That will work fine for the production servers I’m dealing with but won’t be terribly useful to folks wanting to run it on a Raspberry Pi.

Getting things working on consumer/embedded Arm hardware with the same security guarantees we strive for may be quite a bit trickier.

Then, as an example, would Tailscale network integration need to be packaged and signed by you?

Correct, we already have an issue open to add it as a system service.

Will there be support to add nvidia drivers?

I’m assuming zfs, btrfs, lvm, etc will still be supported as well?

I’m going to spin up a test VM to test out. Thanks for the effort. :slight_smile:

That one is tricky due to legal issues around including the binary NVIDIA driver.

But the good news is that there’s recently been a lot of progress on both NVIDIA and AMD GPU virtualization for the mainline kernel, so we’re hoping to be able to provide good support for GPU passthrough and GPU virtualization on those platforms without having to include binary drivers directly.

We technically have the tools and kernel support for all three in our current images.

But our recommendation is to rely on ZFS for all local storage, which is what we automatically set up for you and what will shortly gain easy management and expansion through our storage API (currently under review).

For external storage (we support Fibre Channel, iSCSI and NVMe/TCP), we have full support for LVM clustering on shared block devices.
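As a rough sketch of what that looks like from the client side, assuming a shared LUN is visible to all cluster members at the same device path (the device path and pool name here are examples):

```shell
# Create a clustered LVM storage pool on a shared block device.
# The "lvmcluster" driver coordinates LVM metadata across cluster members.
incus storage create shared-pool lvmcluster source=/dev/sdb

# Verify the pool was created.
incus storage list
```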

We’re also planning on including client support for both Ceph and Linstor in the near future (as well as remote TrueNAS SCALE support once that driver lands in Incus 6.16).

Thanks for the reply.

Yeah, that’s great for the newer model NVIDIA drivers, but not so much the older ones. I have a 1080 Ti that works great for Plex/Jellyfin. I guess self-building will be a better option, or going with Intel/AMD with the built-in kernel modules.

Do you have an iGPU as well?

If so, you may be able to just do GPU passthrough of your 1080ti to a VM and then use it from there with whatever driver you want.
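A minimal sketch of that passthrough setup (the VM name and PCI address are examples; find the actual address of the card with lspci):

```shell
# Locate the discrete GPU's PCI address on the host.
lspci -nn | grep -i nvidia

# Pass the whole physical GPU through to the VM named "media".
incus config device add media gtx1080 gpu gputype=physical pci=0000:01:00.0
```

Inside the VM you can then install whichever driver version the card needs, independent of the host.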