ZFS pool sharing

Hi all,

When I installed Incus on Bookworm, I already had a Zpool, so I configured Incus to use that.
I have subsequently read that Incus should have its own zpool, since it assumes the pool is dedicated to it, and that sharing one could potentially lead to data loss if Incus overwrote other datasets already on the pool.

What is the best way for me to remedy this?

Some background:
I am a home-labber, but with many years' experience of IT support, including servers, and a general familiarity with Linux.
The device I am using has 2 NVMe drives in an mdadm mirror running Debian Bookworm (so Incus 6.0.3), and 2 SATA drives in a ZFS mirror hosting the pool.
The pool is 14TB and as mentioned above, already had some datasets before I installed Incus.

In total, there is currently 284GB in the pool.

Thanks for any advice.

If you gave Incus its own dataset then it will be fine, e.g. for a ZFS pool named tank:

zfs create tank/incuspool

And when you run incus admin init, you point it to tank/incuspool. This would be completely fine and it's what I do.
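As a non-interactive sketch of the same setup (the pool name tank, the dataset incuspool, and the storage pool name are all just example names):

```shell
# Create a dedicated dataset for Incus on the existing pool
zfs create tank/incuspool

# Register that dataset as an Incus storage pool
incus storage create incuspool zfs source=tank/incuspool

# Make new instances use it by default via the default profile
incus profile device add default root disk path=/ pool=incuspool
```

The key point is the source= option: Incus then confines itself to that dataset and its children, rather than taking over the whole pool.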

But if you store other files in the Incus pool/dataset in addition to Incus's own files, then that could be a problem. If so, I would advise you to either:

i. remove the offending files if you can and store them on another pool; OR

ii. back up your instances, create a new tank/incuspool-style dataset, reconfigure Incus to point its default storage at the new dataset, and re-import your backed-up instances.
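Option ii might be sketched like this (instance, dataset, and file names are placeholders; the --storage flag on import is available in recent Incus releases, so check your version and test with a non-critical instance first):

```shell
# Export each instance to a tarball (repeat per instance)
incus export myinstance /backups/myinstance.tar.gz

# Create the new dedicated dataset and register it as an Incus storage pool
zfs create tank/newincuspool
incus storage create newincuspool zfs source=tank/newincuspool

# Re-import the instance onto the new pool
incus import /backups/myinstance.tar.gz --storage newincuspool
```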

Good luck!

Andrew

Hi Andrew, thanks for the reply!

So I currently have tank/incus and tank/nasdisk (both are datasets); with tank being the name of the Zpool. So this should be OK as is?

All my Incus environments are set up like this, using their own dataset, and I haven't had or noticed any issues.

The recommendation to give Incus its own zpool is that Incus then owns and controls all the available space, compared to running in a dataset where it shares the available space with other applications (I guess). As long as you monitor your disk capacity/usage you should be fine. At least from my experience…
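That monitoring can be as simple as checking ZFS's own accounting; the pool and dataset names below match the earlier examples:

```shell
# Pool-level capacity and health
zpool list tank

# Per-dataset usage, including the space still available to each dataset
zfs list -o name,used,avail,refer tank/incus tank/nasdisk
```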

So if the tank/incus dataset grew too large, it could overwrite the tank/nasdisk dataset?
Surely ZFS wouldn’t allow that.

I can put a quota on the incus dataset too, I suppose, but I can't believe ZFS would let one dataset corrupt another.
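For the record, a quota (and, conversely, a reservation) can be set per dataset; the sizes here are only examples:

```shell
# Cap the Incus dataset so it can never consume the whole pool
zfs set quota=10T tank/incus

# Optionally guarantee the NAS dataset a minimum amount of space
zfs set reservation=2T tank/nasdisk
```

A quota turns "the pool filled up" into "the Incus dataset filled up", which fails writes inside Incus earlier but protects the NAS data's headroom.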

ZFS won’t let one dataset overwrite another. Writes will simply fail with an out-of-space error once the pool (or a quota) is exhausted. What you have should be fine.

Yep. This is exactly what I do.
