After a lot of searching, I still cannot figure out how to deal with this: I have an Incus host on which I create some testing VMs, backed by a ZFS pool. Inside an Incus VM I'd like to create Incus OCI containers and re-use the pool created on the host, in order to avoid a "pool inside a pool" scenario. I am aware of the zfs.delegate option, but am not quite sure how to use it.
Should I create an Incus storage volume of type block or filesystem?
Must I mount the volume inside the container, or just attach it?
And finally, is it beneficial/cleaner to have only one pool shared between VMs/instances?
Having the exact same storage pool used both by the host's Incus installation and by an Incus installation inside a VM sounds like a bad idea. Sharing the same storage pool between two Incus installations is generally unsafe, unless the storage driver explicitly supports that kind of sharing.
ZFS delegation isn't applicable to VM instances; it's only valid for OCI and system container instances.
You could add the host's Incus node as a remote inside the guest VM, but there would be little point in doing so.
If your testing doesn't need two separate kernels running, then it'd be simpler to just use a system container instance and run another Incus node within that. That would also make it possible to use ZFS delegation to share a pool (with separate datasets, of course).
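If it helps, a minimal sketch of that setup (the image and instance names here are just examples, and the Incus package source depends on the distro):

```
# System container allowed to host a nested Incus installation
incus launch images:debian/13 nested-incus -c security.nesting=true

# Install Incus inside it, then initialise it non-interactively
incus exec nested-incus -- incus admin init --minimal
```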
So as I understand it, with OCI containers within a VM there is little benefit in using a common pool (with different datasets, of course). No performance penalty in having a pool within a pool, then?
I might as well test your suggestion, SirGiggles, and start with a system container instead. But can I just create my system container, then create a custom storage volume with delegation enabled, and then mount it inside the container? If that's correct, should I mount it somewhere like /var/lib/incus, or can I simply pass it without mounting it? Sorry if these seem like dumb questions; I am struggling a bit to understand all this, as both Incus and ZFS are new to me.
My whole point was to launch OCI containers from within another container, without docker/podman, to have a full Incus solution without multiple zpools to manage, if that makes sense.
I wonder why you want to run OCI containers in a separate system container when Incus from the stable branch supports this out of the box?
The Incus stable release, version 6.15 at the time of writing, has full support for OCI containers. You launch them like you do system containers, and you can assign the same devices as you do for system containers, without worrying about dependencies as such. It just magically works.
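For example, using Docker Hub as the OCI remote (the remote and instance names are arbitrary):

```
# Register an OCI registry as a remote
incus remote add docker https://docker.io --protocol=oci

# Launch an OCI application container just like a system container
incus launch docker:caddy caddy
incus list caddy
```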
Of course you can do the same by creating a system container, attaching an additional storage volume with delegation enabled to it, installing Incus inside your container, and performing the same steps as you would on the host. However, this approach comes with some drawbacks, like a more complex networking configuration if you want to reach the containers from the host, and probably other items.
It would be nice if you could elaborate on why you want to run it in a system container and not on the host system.
My situation is the following: my Incus host is on untagged VLAN 20, and I'd like to have a Caddy reverse proxy on tagged VLAN 40 which would proxy traffic to private bridged containers (specifically caddy + Home Assistant + mosquitto + z2m). Besides that, I'd like to run a few other containers on tagged VLAN 40, like sftpgo etc.
I have a Linux bridge with VLAN filtering enabled so that I can connect an Incus container either to the untagged VLAN or to tagged VLAN 40, with external DHCP (thanks to Incus networking for network engineers).
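To illustrate, connecting a container to tagged VLAN 40 through such a bridge looks roughly like this (br0, eth0 and the instance name are examples from my setup, simplified):

```
# Linux bridge with VLAN filtering; eth0 is the trunk port towards the switch
ip link add br0 type bridge vlan_filtering 1
ip link set eth0 master br0
bridge vlan add dev eth0 vid 40

# Put the container's port on VLAN 40 (untagged towards the container)
incus config device add caddy eth1 nic nictype=bridged parent=br0 vlan=40
```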
I thought I could achieve that with one host and n containers, be they system containers or OCI ones. However, I can't figure out how to have an Incus bridge network with NAT enabled and, at the same time, a Caddy container connected to both tagged VLAN 40 and the Incus bridge network, routing traffic from the Incus bridge network to VLAN 40 ("Add second network interface to OCI instance" seems to suggest it is a bad idea to have multiple NICs, each with DHCP).
So I went with the two-layer virtualization path (the first layer to connect to the appropriate VLAN, and the second layer to start the application containers), hence the initial question.
Nothing is impossible, and Incus is flexible enough to get you there. It of course requires some special configuration, as in your case. Having more than one interface attached to an OCI container requires a manual approach, but it is now much simpler to achieve. "Add second network interface to OCI instance" was written around version 6.4 and a few things have changed since then; take a look at "Incus proxy bind to host address issue" for how this can be accomplished today.
It still requires writing a script which performs the required steps, but it no longer requires touching the OCI image. It mainly depends on how the OCI container starts and what kind of hooks are available to mount your script into it. You just have to make sure it runs before the OCI application is started.
From a quick look at the official Caddy OCI image, it is Alpine based and starts caddy directly. In this case a script is needed to configure the second NIC first and then start caddy. Shouldn't be that difficult…
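Something along these lines could serve as a starting point (an untested sketch: the interface name and DHCP choice are assumptions, and how the script gets invoked depends on the image's hooks, as mentioned above):

```sh
#!/bin/sh
# Wrapper for the Alpine-based caddy image: bring up the second NIC,
# then hand over to caddy.

# eth1 is assumed to be the extra bridged NIC on VLAN 40; use busybox
# udhcpc for DHCP here, or configure a static address instead.
ip link set eth1 up
udhcpc -i eth1 -b

# Start caddy with the image's default arguments
exec caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
```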
I might as well test your suggestion, SirGiggles, and start with a system container instead. But can I just create my system container, then create a custom storage volume with delegation enabled, and then mount it inside the container? If that's correct, should I mount it somewhere like /var/lib/incus, or can I simply pass it without mounting it? Sorry if these seem like dumb questions; I am struggling a bit to understand all this, as both Incus and ZFS are new to me.
Your questions have been solved, so I won't touch on that, but for this specific part, all you would need to do is set zfs.delegate=true on a volume on the host. Afterwards, from within the instance, you would install the required ZFS utilities, create the dataset, and set the mount point (zfs set mountpoint=/var/lib/incus your-dataset) accordingly.
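Roughly like this, assuming a storage pool named default, a custom volume named incus-data, and a container named c1 (all names are examples; adjust to your setup):

```
# On the host: create a custom volume with delegation enabled and attach it
incus storage volume create default incus-data zfs.delegate=true
incus storage volume attach default incus-data c1 /mnt/incus-data

# Inside the container: install the ZFS userspace tools (package name on
# Debian-based distros), then manage the delegated dataset from there
apt install zfsutils-linux
zfs list                                          # shows the delegated dataset
zfs create your-dataset/incus                     # child dataset under it
zfs set mountpoint=/var/lib/incus your-dataset/incus
```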