I’ve now got access to a Ceph cluster to show you what I meant.
So, first let’s set up a Ceph storage pool so we can break it and reproduce the error you’re experiencing.
- Create storage pool:
lxc storage create ceph ceph
- Check for the placeholder volume:
rbd list --pool ceph
lxd_ceph
- Delete the placeholder volume from Ceph:
rbd remove lxd_ceph --pool ceph
Removing image: 100% complete...done.
- Restart LXD and check the error logs and pool status:
DEBUG [2023-01-03T10:45:05Z] Initializing storage pool pool=ceph
DEBUG [2023-01-03T10:45:05Z] Mount started driver=ceph pool=ceph
DEBUG [2023-01-03T10:45:05Z] Mount finished driver=ceph pool=ceph
ERROR [2023-01-03T10:45:05Z] Failed mounting storage pool err="Placeholder volume does not exist" pool=ceph
lxc storage ls
+---------+--------+------------------------------------+-------------+---------+-------------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+------------------------------------+-------------+---------+-------------+
| ceph | ceph | ceph | | 0 | UNAVAILABLE |
+---------+--------+------------------------------------+-------------+---------+-------------+
- Restore the placeholder volume and wait for LXD to detect it:
rbd create lxd_ceph --pool ceph --size 0B
DEBUG [2023-01-03T10:47:05Z] Initializing storage pool pool=ceph
DEBUG [2023-01-03T10:47:05Z] Mount started driver=ceph pool=ceph
DEBUG [2023-01-03T10:47:05Z] Mount finished driver=ceph pool=ceph
INFO [2023-01-03T10:47:05Z] Initialized storage pool pool=ceph
INFO [2023-01-03T10:47:05Z] All storage pools initialized
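The recovery steps above can be wrapped into a small sketch. This is my own illustration, not part of the reproduction: the pool name defaults to `ceph` as in the walkthrough, and it runs in dry-run mode (printing the `rbd` commands instead of executing them) until you clear `RUN`, since recreating RBD images is something you want to eyeball first.

```shell
#!/bin/sh
# Hedged sketch: recreate the zero-sized placeholder volume for an LXD
# Ceph pool if it has gone missing, so LXD can mount the pool again.
# POOL and the dry-run default are assumptions; set RUN= (empty) to
# actually run the rbd commands against your cluster.
POOL="${POOL:-ceph}"
PLACEHOLDER="lxd_${POOL}"
RUN="${RUN:-echo}"   # dry-run by default: print commands instead of running them

# If the placeholder image is absent from the pool, recreate it at size 0,
# matching the "rbd create lxd_ceph --pool ceph --size 0B" step above.
if ! $RUN rbd list --pool "$POOL" | grep -qx "$PLACEHOLDER"; then
    $RUN rbd create "$PLACEHOLDER" --pool "$POOL" --size 0B
fi
```

After the placeholder is back, LXD notices it on its next retry and the pool flips from UNAVAILABLE to initialized, as the final log lines show.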