Hi!
Virtual machines and containers (either system containers or application containers - OCI) offer isolation from your host (your server, etc). This isolation is crucial and it’s what everyone desires. Don’t need a service anymore? Nuke the instance and it’s all gone. No messing with the host’s files and configuration.
A virtual machine requires hardware virtualization features of your CPU to work, and it’s a bit heavy when you try to run several of them on the same computer.
Is it possible to have most of the benefits of virtual machines but at lower cost in terms of resources? Sure it is. Containers do not require hardware virtualization features. Containers only require security features of the Linux kernel and this makes them very lightweight. You can have dozens of containers for each VM.
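To see the difference in practice, you can check whether your CPU advertises the hardware virtualization features that VMs rely on (a quick sketch, assuming a Linux host; `vmx` is Intel VT-x, `svm` is AMD-V):

```shell
# Containers do not need these CPU flags, but VMs (via KVM) do:
# vmx = Intel VT-x, svm = AMD-V.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: available"
else
    echo "hardware virtualization: not available"
fi
```

If the flags are missing (or virtualization is disabled in the firmware), containers will still run fine; only VMs are affected.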
As a side note, the security features of the Linux kernel that are used in Linux containers are namespaces, seccomp, capabilities and cgroups. Anyone can implement Linux containers on top of these, and many have created educational implementations; barco is one such educational implementation. Initially, users were asking kernel developers to add a Linux container subsystem to the Linux kernel. The kernel developers were not having that. They replied: sorry, you get security primitives instead, and based on those primitives you build your Linux container implementations. In fact, it was a very mature response.
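You can see these primitives applied to any process, containerized or not, straight from /proc (a sketch, assuming a Linux host):

```shell
# Each symlink under /proc/self/ns is one namespace this process lives in
# (mnt, pid, net, uts, ipc, user, cgroup, ...).
ls -l /proc/self/ns

# Capability sets and the seccomp mode (0 = off, 2 = filter) of this process.
grep -E '^(Cap|Seccomp)' /proc/self/status

# The cgroup this process belongs to (resource limits attach here).
cat /proc/self/cgroup
```

Run the same commands inside a container and on the host, and you will see different namespace inodes, a reduced capability set, and (usually) an active seccomp filter for the container.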
You use Incus because you want a single tool, backed by a single service, that manages all of them (VMs, system containers and application containers) in the same way. If you were to use a different tool for each, the learning curve would be too steep.
When you launch a VM or a container, you start off from an image. These are provided either by https://images.linuxcontainers.org (this project) or by the OCI repository you have enabled. The images from this project are generic images created directly from Linux distributions. You can look into distrobuilder, the tool that creates them, and even re-create them yourself with the same public configuration. In that sense, what you get is what comes from your chosen Linux distribution.
On the other hand, with OCI images you can either use official OCI images (e.g. those published by Docker) or user-uploaded images. Obviously, if you select a user-uploaded image, it is your call to judge whether what’s inside is good for you.
Running VMs and containers are collectively called instances. The rule of thumb is one service per instance, preferring containers over VMs unless there are strong reasons against containers for a given use case. It’s mostly OK if you run everything in a single instance, but it’s neater to separate the services into different instances. Before VMs, system administrators had to use separate physical servers, which were expensive. Back then, some had to put different services on a single physical server because anything else would get very expensive. Now, with all these virtual instances, it’s affordable enough to put different services in different instances.
Security is an overloaded term and may mean different things to different people. The standard view is that VMs are more secure because you get hardware virtualization/separation from the host. In contrast, containers are isolated in software only, through features of the Linux kernel, and there may be bugs in the Linux kernel that break this isolation.
However, the realist view is that at the moment there are no known lingering security vulnerabilities in the Linux kernel, and if one is found, it will be fixed. On the other hand, in recent years there have been hardware bugs in CPUs that defeat the isolation of VMs. Run the following to see which mitigations your CPU uses to avoid those vulnerabilities. Spectre is not just a movie.
grep '^bugs' /proc/cpuinfo
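On reasonably recent kernels (4.15 and later) you can also read the per-vulnerability mitigation status directly, which is more detailed than the flat `bugs` line:

```shell
# One file per known CPU hardware bug; the content says whether the
# running kernel considers the CPU vulnerable and which mitigation is active.
grep -r . /sys/devices/system/cpu/vulnerabilities/
```

Each line of output is a `file:status` pair, e.g. an entry for spectre_v1 or spectre_v2 followed by the active mitigation.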
There has been a discussion on how to write Incus documentation that is useful for Homelab users. It became obvious that writing documentation for each service would merely replicate what is available from the official documentation of said service. The solution should be similar to what the kernel developers did with container support in the Linux kernel. That is, discuss the primitives like networking, storage devices, how to get instances to communicate with each other, etc.