Reusing existing Ceph OSD pools

I was wondering about the ceph.osd.force_reuse flag and why it exists. What's the issue with using the same Ceph OSD pool between multiple LXD instances?

It's there pretty much for a single reason: being able to re-add the Ceph pool prior to running data recovery with lxd import to get containers re-imported. The expectation is that all containers in the pool are imported that way before LXD makes full use of the pool, otherwise you may run into conflicts.
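
For reference, that recovery flow looks roughly like this (a sketch; the pool name, storage pool name, and container name are placeholders):

```
# Re-add the existing Ceph OSD pool to LXD, telling it the pool is
# intentionally non-empty so the "source must be clean" check is skipped.
lxc storage create recovered ceph \
    source=my-osd-pool \
    ceph.osd.force_reuse=true

# Re-import each container living on that pool before using the pool
# normally.
lxd import mycontainer
```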

In general LXD assumes that the source of a storage pool is clean and that it’s free to create whatever it wants.

Sorry, I asked the wrong question.

What are the practical implications of running multiple LXD nodes on the same Ceph OSD pools? Do I risk one node deleting a volume of another node?

I want to create various OSD pools with different underlying storage requirements (2-copy and 3-copy replicated pools, a 1+3 pool, a 2+4 pool, etc.). The idea is that I will have multiple LXD nodes serving a couple hundred containers, with various volumes in use.
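
On the Ceph side, creating pools like that could look something like the following (a sketch; the pool names, PG counts, and erasure-code profile name are made up):

```
# Replicated pools with 2 and 3 copies.
ceph osd pool create lxd-replica2 128 128 replicated
ceph osd pool set lxd-replica2 size 2
ceph osd pool create lxd-replica3 128 128 replicated
ceph osd pool set lxd-replica3 size 3

# An erasure-coded 2+4 pool (k=2 data chunks, m=4 coding chunks).
ceph osd erasure-code-profile set k2m4 k=2 m=4
ceph osd pool create lxd-ec24 128 128 erasure k2m4
# RBD on an erasure-coded pool needs overwrites enabled.
ceph osd pool set lxd-ec24 allow_ec_overwrites true
```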

If I can't re-use pools, it would mean that for each new LXD node I set up I would also have to create the various pools over and over again. Not sure if this is an issue per se, but it's something I should follow up on in that case.

Separate pools for each node are definitely recommended, as you will certainly hit conflicts if two nodes hold the same images.
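
If you do go the per-node route, a small loop keeps the setup repeatable (a sketch; the node names and naming scheme are made up):

```
# One OSD pool per LXD node, so image volumes can't collide.
for node in node1 node2 node3; do
    ceph osd pool create "lxd-${node}" 64 64 replicated
done

# Then, on each node, point LXD at its own pool:
#   lxc storage create ceph ceph source=lxd-<node>
```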

The other way to get what you want would be to use LXD clustering, as that would let you share OSD pools among all cluster nodes.
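
In a cluster, a shared pool is defined in two stages: once per member in a pending state, then a final call that creates it cluster-wide. Roughly (the pool and member names are placeholders):

```
# Stage one: define the pool on each cluster member (state: pending).
lxc storage create remote ceph source=lxd-pool --target node1
lxc storage create remote ceph source=lxd-pool --target node2
lxc storage create remote ceph source=lxd-pool --target node3

# Stage two: instantiate the pool across the whole cluster.
lxc storage create remote ceph
```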

Thanks, that makes sense. 🙂