Is there a "design pattern" for incus / zfs?

Hello there,

In the coming months I will rebuild my homelab on new hardware.

On the current system I manage the ZFS pool and datasets myself, and for whatever reason they are passed to Incus using the dir driver.

I have one dataset for Incus to store all containers. I think it would be better to have a dataset for each container/VM.

Where possible or practical, I have separated the data into extra datasets, also passed to the containers using the dir driver, so that I can restore a container without touching its data.
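
For illustration, that dir-style passthrough looks roughly like this (a sketch with placeholder names; tank/appdata and c1 are not my actual names):

# dataset managed by hand on the host
zfs create -o mountpoint=/tank/appdata tank/appdata
# passed into the container as a plain disk device
incus config device add c1 appdata disk source=/tank/appdata path=/srv/appdata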

However, I sometimes run into an I/O bottleneck that slows down the whole system due to ZFS filesystem writes, while the CPU is nearly idle…

I have read that the dir driver is not the best choice performance-wise and that it would be better to use the Incus zfs driver.

While setting up the new homelab I want to do this in the best/correct way for performance and usability.

So my question is: is there something like a “design pattern” for setting up Incus storage with ZFS?

Looking forward to your feedback.

I would let Incus manage the datasets for you. When creating a new Incus storage pool, you can create it using the zfs storage driver. I recommend passing in a full disk or a full disk partition; this is important for performance.

incus storage create pool6 zfs source=/dev/sdX

See: How to manage storage pools - Incus documentation
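
If you cannot dedicate a whole disk, the zfs driver can also reuse an existing pool/dataset or create a loop-backed pool. A sketch, with placeholder names:

# reuse an existing ZFS dataset instead of a whole disk
incus storage create pool7 zfs source=tank/incus
# or let Incus create a loop-backed pool (convenient, but noticeably slower)
incus storage create pool8 zfs size=50GiB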

After you start using the storage pool, you can explore what it creates by using the standard zfs CLI commands.
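
For example, with the pool from above (the exact child datasets come from the zfs driver's layout, so treat these names as illustrative):

zfs list -r pool6
# typically shows children such as pool6/containers, pool6/virtual-machines,
# pool6/images, pool6/custom and pool6/deleted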

Once you have the pool set up, I recommend also creating two volumes, one for your backups and one for images.

You can configure Incus to use these volumes instead of using whatever filesystem you use for /var/lib/incus.

See: storage.backups_volume and storage.images_volume at Server configuration - Incus documentation
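
A minimal sketch of that, assuming the pool from above and placeholder volume names:

incus storage volume create pool6 backups
incus storage volume create pool6 images
incus config set storage.backups_volume pool6/backups
incus config set storage.images_volume pool6/images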

All of this works pretty well for me. I am running Debian 12 with the Zabbly kernels and zfs builds.

See: GitHub - zabbly/linux: Linux kernel builds and GitHub - zabbly/zfs: OpenZFS builds


I have now switched from the dir driver to zfs.

The first step was adding an external device as a ZFS pool directly in Incus. Then I moved the containers to that external drive with incus move container --storage ext-pool, added new custom volumes for storage, and moved the data to the external drive as well.

Then I deleted all the datasets in the old ZFS mirror pool, added it as a new pool in Incus, and moved the containers and data back.
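
In case it helps someone else, the round trip looked roughly like this (c1, data1 and tank are placeholders; I believe incus storage volume move is the command for moving custom volumes between pools):

# temporary pool on the external drive
incus storage create ext-pool zfs source=/dev/sdX
incus move c1 --storage ext-pool
incus storage volume create ext-pool data1    # then copy the data in
# after wiping the old mirror (tank) and re-adding it as an Incus pool:
incus storage create storage_raid zfs source=tank
incus move c1 --storage storage_raid
incus storage volume move ext-pool/data1 storage_raid/data1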

There is a huge improvement in read/write speed now.

However, for the new homelab I will maybe add an NVMe mirror for the containers/VMs and store only data volumes on the HDD mirror. I hope that increases performance even more.
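
Roughly what I have in mind, with placeholder names throughout:

# create the NVMe mirror on the zfs side first, then hand it to Incus
zpool create fast-tank mirror /dev/nvme0n1 /dev/nvme1n1
incus storage create fast zfs source=fast-tank
# HDD mirror pool for bulk data
incus storage create bulk zfs source=tank-hdd
# instances on the fast pool, data volumes on the slow one
incus launch images:debian/12 c1 --storage fast
incus storage volume create bulk data1
incus storage volume attach bulk data1 c1 /srv/data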

Thank you very much 🙂


Regarding these two volumes for images and backups…

I think I don't need a specific volume for images, because I mostly use the standard images provided by Incus.

For the backups I have added a volume “backup_storage”, which Incus created as the dataset “storage_raid/custom/default_backup_storage”.

I can't set this volume as the backup volume. I get this error message from Incus:
Config parsing error: Failed validation of “storage.images_volume”: Invalid syntax for volume, must be <pool>/<volume>

How should I add that backup volume to be able to configure it? Via a zfs command instead?

I looked in my bash history and found this.

incus storage create sda2 zfs source=/dev/sda2
incus storage volume create sda2 default-backups
incus config set storage.backups_volume sda2/default-backups

So first I created a new storage pool using the zfs storage driver. Then I created a volume in that pool called default-backups; at this point it does not matter that ZFS is being used underneath. Then I configured Incus to use the volume for backups.

At no point did I need to use zfs commands directly. In my case the pool is named sda2. Maybe you can use similar syntax.
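
If it helps, you can also verify the result afterwards (the dataset name below is what I would expect based on how Incus names custom volumes, e.g. your storage_raid/custom/default_backup_storage):

incus config get storage.backups_volume
# should print: sda2/default-backups
zfs list -r sda2
# the volume appears as a dataset like sda2/custom/default_default-backups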