Unable to delete Ceph (rbd) storage if Ceph cluster prohibits OSD pool deletion

Current behavior
The Incus cluster needs to decommission a Ceph (RBD) storage pool. The Incus storage pool uses a pre-created OSD pool, and the Ceph user configured for this storage does not have permission to delete the OSD pool.
Unfortunately, Incus does not allow deleting this storage pool because the underlying command fails:
Error: Failed to run: ceph --name <client name> --cluster <ceph cluster name> osd pool delete <pool name> <pool name> --yes-i-really-really-mean-it: exit status 13 (Error EACCES: access denied)

Desired behavior:
An option such as --force or --unsafe, which allows deleting the references to the Ceph OSD pool without deleting the pool itself, would be useful in this case.

Simplified steps to reproduce

  1. Create a Ceph storage pool using incus storage create <storage_name> ceph [options]
  2. Disable OSD pool deletion: ceph config set mon mon_allow_pool_delete false
  3. Try to delete the created storage pool: incus storage rm <storage name> (see the command sketch below).
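
For illustration, a rough command sequence; the pool/storage names are placeholders and the exact creation options depend on the cluster:

    # 1. Create a Ceph-backed Incus storage pool (names here are placeholders)
    incus storage create my-ceph ceph ceph.osd.pool_name=incus-pool
    # 2. Forbid OSD pool deletion cluster-wide on the Ceph side
    ceph config set mon mon_allow_pool_delete false
    # 3. Try to remove the storage pool; Incus attempts to delete the OSD pool and fails with EACCES
    incus storage rm my-ceph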

P.S.
Of course, “removing” the Incus storage without deleting the data may introduce a security risk. However, the administrator needs to understand this risk and be able to remove the storage without deleting the OSD pool.

P.P.S.
I tried to create the GitHub issue, but GitHub always displays the pop-up “Unable to create issue”.

If Ceph is configured not to allow pool deletion, it’s perfectly normal for Incus to fail pool deletion. We’re not going to have Incus change a Ceph-wide setting to be able to delete a pool.

Stephane,

I’m not asking to manage Ceph settings from Incus. However, I’m asking for the ability to delete the logical Storage entity even when the storage backend does not allow deleting the backing pool. The current behavior requires deleting the Incus storage in one atomic transaction: Incus asks the backend to delete the “physical” data, and only if that succeeds does Incus delete its own “logical” storage.

One of the valid use-cases is:

  • The “Incus deployment” and “Ceph storage” could be managed by different organizations and have different policies.

  • The “Incus deployment” “rents” storage pool(s) on a “Ceph storage”.
    It requests a pool from the “Ceph storage”; the “Ceph storage” creates it and sets permissions on the pool according to its own policies. The “Incus deployment” then creates the Incus storage pool and uses it.

  • After a while the “Incus deployment” decides to decommission the storage rented from the “Ceph storage”. How could it do that if Incus requires removing the storage backend and the logical storage in one atomic transaction?

In such a scenario the “Incus deployment” admin needs the ability to delete the “logical” storage from Incus without deleting the Ceph pool, and then request that the “Ceph storage” admin remove the Ceph pool.

What happens if you run incus storage delete after the Ceph pool was deleted?

This workflow looks a bit counterintuitive, but it works. :+1:

Steps to delete a Ceph RBD storage pool if the Ceph client does not have permission for Ceph pool deletion:

  1. Delete the Ceph RBD pool using the command ceph osd pool delete <pool name> <pool name> --yes-i-really-really-mean-it. The Ceph user who runs this command must have permission to delete RBD pool(s), and the Ceph cluster must be configured to allow pool deletion.
  2. Delete the Incus storage pool using the command incus storage delete <storage name> (see the sketch below).
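
In concrete terms, with placeholder names, and assuming the ceph command is run by a user that is allowed to delete pools:

    # 1. Delete the OSD pool directly on the Ceph side with a sufficiently privileged user
    ceph osd pool delete incus-pool incus-pool --yes-i-really-really-mean-it
    # 2. Delete the Incus storage pool; the backing pool being gone already is tolerated
    incus storage delete my-ceph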

Much appreciated for the hint.

Yeah, in general Incus doesn’t mind you deleting something that’s already gone from the underlying storage; unlike the other way around, it doesn’t lead to a potential data conflict down the line.

Would it be possible to configure Incus to preserve the Ceph pool while deleting the storage object inside the Incus data?

Checked the sources; setting volatile.pool.pristine to false does what I need.
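
For anyone else landing here, a minimal sketch, assuming the key can be toggled with incus storage set (the storage name is a placeholder):

    # Mark the pool as not pristine so Incus does not try to delete the backing OSD pool
    incus storage set my-ceph volatile.pool.pristine false
    # The Incus storage pool can now be removed while the Ceph OSD pool stays in place
    incus storage delete my-ceph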