I have a storage pool that is backed by a single NVMe, and on one of the servers in the cluster that NVMe has died. I’m trying to edit the instance config so that the custom storage volume points at another pool (or to remove it temporarily), but incus config edit just hangs when saving, so I cannot restore services.
Questions:
How can one update an instance’s config to use a restored custom storage volume on another available pool whilst we wait for the faulty storage to be replaced?
What would be the process of bringing the replacement device on that server back into the storage pool in the cluster, e.g. incus storage edit {pool} source={device-path} --target {cluster-node}, and getting it to format with ZFS?
Phew, I must have been doing something silly because I cannot reproduce the hang for several instances. Sorry.
Could you advise on question 2 … it will be a blank NVMe with LUKS opened at /dev/mapper/nvme1n1, which is the same source used when creating the storage pool.
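(For reference, a rough sketch of how the replacement would be prepared, assuming the raw device appears as /dev/nvme1n1 and we reuse the same mapping name as before; key handling is omitted:)

```
# Set up LUKS on the blank replacement NVMe (destructive), then open it so the
# pool source is available again at /dev/mapper/nvme1n1
cryptsetup luksFormat /dev/nvme1n1
cryptsetup open /dev/nvme1n1 nvme1n1
```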
For ZFS pools, Incus tracks them by name; it doesn’t really care about the source path used.
So you can manually use zpool create to create a new zpool with the same name as the one you lost, and that should be fine.
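A minimal sketch of that step, assuming the original pool was named nvme-pool and the LUKS mapping is already open (the pool name and the ashift/mountpoint options here are illustrative assumptions, not anything Incus mandates):

```
# Recreate a zpool with the same name Incus already knows about,
# backed by the replacement device behind the LUKS mapping.
zpool create -o ashift=12 -m none nvme-pool /dev/mapper/nvme1n1
```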
Just make sure to create the basic dataset structure on it: