In the coming months I will rebuild my homelab on new hardware.
On the current system I manage the ZFS pool and datasets myself, and for whatever reason… they are passed to Incus using the dir driver.
I have one dataset for Incus to store all containers. I think it's better to have a dataset for each container/VM.
Where possible or practical I have separated the data into extra datasets, also passed to the containers using the dir driver, so that I can restore a container without touching its data.
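For illustration, the current data passthrough looks roughly like this (container, dataset and path names are placeholders):

```
# Old setup (sketch): a ZFS dataset mounted on the host is handed into a
# container as a plain directory via a disk device.
zfs create -o mountpoint=/srv/appdata tank/appdata
incus config device add mycontainer appdata disk source=/srv/appdata path=/data
```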
However, I now sometimes run into an I/O bottleneck that slows down the whole system due to ZFS filesystem writes, while the CPU is nearly idle…
I have read that the dir driver is not the best choice in terms of performance and that it would be better to use the Incus zfs driver.
While setting up the new homelab I want to do it the best/correct way for performance and usability.
So my question is: is there something like a “design pattern” for setting up the storage for Incus with ZFS?
I would let Incus manage the datasets for you. When creating a new Incus storage pool you can create it using the zfs storage driver. I recommend passing in a full disk or a full disk partition. This is important for performance.
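A minimal sketch of that, assuming a dedicated, otherwise unused disk (the device name is just an example):

```
# Let Incus create and manage the zpool and all per-instance datasets
# on a whole disk. /dev/nvme0n1 is a placeholder for your device.
incus storage create tank zfs source=/dev/nvme0n1
```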
The first step was adding an external device as a ZFS pool directly in Incus. Then I moved the containers to that external drive with incus move container --storage ext-pool, added new custom volumes for the data, and moved the data to the external drive as well.
Then I deleted all the datasets in the old ZFS mirror pool, added it as a new pool in Incus, and moved the containers and data back.
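Roughly, the commands looked like this (container, volume and device names are examples; the old mirror is assumed to be an existing zpool that Incus takes over by name):

```
# 1. Temporary Incus-managed pool on the external drive.
incus storage create ext-pool zfs source=/dev/sdb

# 2. Move a container onto the external pool.
incus move mycontainer --storage ext-pool

# 3. Custom volume for the container's data, mounted into the container.
incus storage volume create ext-pool mydata
incus storage volume attach ext-pool mydata mycontainer /data

# 4. After wiping the old mirror's datasets, hand the existing zpool to
#    Incus and move everything back the same way.
incus storage create hdd-pool zfs source=storage_raid
incus move mycontainer --storage hdd-pool
```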
There is a huge improvement in read/write speed now.
However, for the new homelab I will maybe add an NVMe mirror for the containers/VMs and only store data volumes on the HDD mirror. I hope that increases the performance even more.
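As far as I know the Incus zfs driver only creates single-device pools on its own, so the mirrors would be created with zpool first and then handed to Incus by name, something like this (device and pool names are assumptions):

```
# Mirrored zpool for instances on NVMe, created manually...
zpool create nvme-mirror mirror /dev/nvme0n1 /dev/nvme1n1
# ...then handed to Incus, which manages the datasets inside it.
incus storage create fast zfs source=nvme-mirror

# HDD mirror kept for custom data volumes only.
zpool create hdd-mirror mirror /dev/sda /dev/sdb
incus storage create bulk zfs source=hdd-mirror
```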
Regarding these two volumes for images and backups…
I think I don't need a specific volume for images, because I mostly use the standard images provided by Incus.
For the backups I have added a volume “backup_storage”, which gets added by Incus as the dataset “storage_raid/custom/default_backup_storage”.
I can't add this volume as the backup volume. I get this error message from Incus:
Config parsing error: Failed validation of “storage.images_volume”: Invalid syntax for volume, must be <pool>/<volume>
How should I add that backup volume to be able to configure it? Via zfs commands instead?
So first I created a new storage pool using the zfs storage driver. Then I created a volume in the pool called default-backups. At this point, it does not matter that zfs is being used. Then I configured Incus to use the volume for backups.
At no point did I need to use zfs commands directly. In my case my pool is named sda2. Maybe you can use similar syntax to what I did.
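For reference, a sketch of those steps (the /dev/sda2 source is my assumption, matching the pool name):

```
# Pool backed by a partition, a volume for backups, then point the Incus
# server config at that volume.
incus storage create sda2 zfs source=/dev/sda2
incus storage volume create sda2 default-backups
incus config set storage.backups_volume sda2/default-backups
```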