Question regarding root disk size and Ceph storage backend

Hey, sorry if this question has already been asked and answered. I was not able to find a satisfying answer.

If I use Ceph or CephFS as a storage pool for LXD, will I be able to set the root disk size for an instance, or is this not supported with Ceph?

The documentation mentions that Ceph does not support storage quotas, but I am not sure whether "storage quotas" means the same thing as limiting the available disk space for an instance. Other posts in this forum about Ceph and setting the size of the root disk gave me the impression that it is possible.

I would test it myself before asking, but I need an answer to this question before I can get approval to set up an LXD cluster at my company.

Thanks for the help already!

Yeah, with Ceph, each instance is on a dedicated RBD volume. The default size is 10GiB, but it can be changed by any of the following (see the example commands after this list):

  • Changing the default volume.size on the entire pool
  • Setting the size property on the root device in a profile
  • Overriding the root device on the instance and setting the size property
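
For example, assuming a Ceph storage pool named my-ceph-pool and an instance named c1 (both placeholder names), the three options roughly map to the following commands; the exact syntax can vary a little between LXD versions:

    # 1. Change the default size for new volumes on the pool
    lxc storage set my-ceph-pool volume.size 20GiB

    # 2. Set the size on the root device of a profile (here the default profile)
    lxc profile device set default root size=20GiB

    # 3. Override the root device on one instance and set its size there
    lxc config device override c1 root size=20GiB

Note that changing volume.size on the pool only affects volumes created afterwards; existing instances keep their current size until it is changed explicitly.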

The storage doc is a bit confusing here, since Ceph does support quotas in the same way that LVM does (by providing a block device of the requested size). I’ll send a fix to the doc to clarify that.
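
If you want to double-check that the quota really ends up as a block device of that size, you can look at it from both sides; the instance, pool, and image names below are assumptions:

    # LXD side: the expanded config shows the effective root device and its size
    lxc config show c1 --expanded

    # Ceph side: the matching RBD image should report the same size
    # (the image name prefix can vary by LXD version, often container_<name>)
    rbd info my-osd-pool/container_c1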

As for CephFS, it can only be used for shared custom volumes; it can’t hold instances or images itself. CephFS volumes also support quotas, since CephFS has xattrs that let you set that up. Whether quotas are enforced may depend on the Ceph server and client versions, though (but anything from the past two years should be fine).
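
A minimal sketch for the CephFS side, assuming an existing CephFS file system named my-fs, a pool named my-cephfs, and an instance c1 (all placeholder names):

    # Create a cephfs-backed storage pool on an existing CephFS file system
    lxc storage create my-cephfs cephfs source=my-fs

    # Create a shared custom volume with a quota
    lxc storage volume create my-cephfs shared-data size=50GiB

    # Attach it to an instance at /mnt/shared (repeat for other instances to share it)
    lxc storage volume attach my-cephfs shared-data c1 /mnt/shared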

Thank you for the quick clarification! :blush: