While trying to add a new cluster member, a failed init appeared to register the Ceph pool as an Incus storage pool. Since the join had failed, I reset the new member to try again. I then noticed the stray storage pool entry and ran incus storage delete remote.
Later, I noticed that the pool was missing from ceph osd pool ls, and the instances in my cluster had become storage zombies. Is there a safer way to clean up after a failed cluster join?
client incus: 1:7.0-ubuntu22.04-202605061506
server incus: 1:6.21-ubuntu22.04-202602110127
Thank you.