Protecting managed ZFS datasets

Hi!

I am trying to decide on the best way to use Incus with ZFS. Do not get me wrong, but I generally do not like “managed” storage :wink: I really mean no offense, but here is how I see it: let’s imagine I am building a large NAS server with Samba running in an Incus-managed container. It will host my big movie collection. The data is not really unique, but it is impractical to back up off-site, and losing it would be a major inconvenience.

If I create a custom storage volume for this file share, I understand that the volume is detached from the instance(s) that use it. Still, I would like to make sure that this data is never deleted, no matter what I do with Incus (except by explicitly calling “…storage volume delete”). I want to make sure that a bug in Incus, or someone’s design decision that, for example, uninstalling Incus should also delete all ZFS objects in the managed datasets, will not destroy my data. I also want to retain the option of completely getting rid of Incus while preserving my volume data and mounting it at the host level.

I hope I was able to explain the idea: managed objects are not evil, but I want a way to protect them from deletion with “two keys” or something similar, like an additional operation that has to be done from outside of Incus to “unlock” the deletion process.
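
For context, this is roughly the workflow I mean; the pool, volume, and container names here are made up:

```bash
# Hypothetical names: pool "default", custom volume "media", container "nas".
incus storage volume create default media
incus storage volume attach default media nas /srv/media

# The volume outlives the container, but once detached, a single command
# is enough to remove it for good:
incus storage volume delete default media
```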

Are there any controls in Incus or ZFS that could block Incus from deleting the ZFS objects used as managed volumes?

Thanks!

Hello and welcome,

First of all, backups are important, there is no question about it, and it is your call whether you need them or not. I think most of us have been bitten by not performing them….

Now back to your question: there are multiple ways to solve it. Besides using Incus-managed storage volumes, you can also mount a path from your host (a ZFS dataset or just a subfolder); see “Type: disk” in the Incus documentation (“Path on the host”). Set the read-only flag on ZFS and also add the read-only flag on the Incus disk device, which will prevent deleting by mistake, at least as far as Incus is concerned, since the host controls it. A sketch of this setup follows below.
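
A minimal sketch, assuming a host dataset tank/media mounted at /tank/media and a container named nas (all names illustrative):

```bash
# Make the dataset read-only at the ZFS level:
zfs set readonly=on tank/media

# Attach the host path to the container as a disk device,
# read-only on the Incus side as well:
incus config device add nas media disk source=/tank/media path=/srv/media readonly=true
```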

There are of course other options, like a Samba mount, NFS, etc., which can be used to mount the data into the container. It all depends on your architecture.

As mentioned in the beginning, having a backup is always recommended, just in case….

Not arguing about the importance of data durability, of course. But accepting an additional factor that may lead to data loss (Incus management) is a decision that has to be made consciously. Again, I am not claiming it is a problem, do not get me wrong, but it is an additional factor, strictly speaking.

I am currently using the disk device option. I wonder how much difference it makes (referring to the “Incus operations are not optimized for this driver” statement in the documentation). Does it only apply to Incus usage of volumes for images, container root fs, etc.? Or to general I/O?

The read-only ZFS flag on the dataset will prevent any write operations on it, which is too radical for a share I still need to write to :slight_smile:

Some recommended creating an additional snapshot and placing a “hold” on it, which will prevent the deletion of the parent dataset because the held snapshot cannot be destroyed. This seems like a possible option, although it goes against the Incus requirement not to manually mess with its pools. And that sort of recommendation does make sense for any management entity in general; silly to argue with it.
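
For reference, this is what the hold trick would look like; the dataset path is illustrative (Incus typically keeps custom volumes under <pool>/custom/) and would need to be looked up with “zfs list”:

```bash
# Snapshot the dataset backing the custom volume and place a hold on it:
zfs snapshot default/custom/default_media@keep
zfs hold keep-me default/custom/default_media@keep

# A held snapshot cannot be destroyed, so recursively destroying the parent
# dataset fails until the hold is explicitly released:
zfs release keep-me default/custom/default_media@keep
```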

Personally, I like the idea implemented by some cloud providers where a managed storage service can be locked against deletion. This requires an explicit unlock operation for a delete to succeed. I think this is an excellent pattern: you can safely code your automation without fear that an accidental delete will destroy your data, since you will never code the automated unlocking operation. I wish Incus had a similar option for its custom volumes, just a simple API operation.
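
Until something like that exists, the hold from the previous post can serve as the external “second key”. A rough sketch of what the automation side might look like (all names hypothetical):

```bash
#!/bin/sh
# Refuse to delete the custom volume while the out-of-band hold is in place.
POOL=default
VOLUME=media
SNAPSHOT=default/custom/default_media@keep   # illustrative dataset path

if zfs holds "$SNAPSHOT" 2>/dev/null | grep -q keep-me; then
    echo "volume is locked; run: zfs release keep-me $SNAPSHOT" >&2
    exit 1
fi
incus storage volume delete "$POOL" "$VOLUME"
```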