@quartzeye Interesting reading about your setup. I have a few questions to make sure I understand how you've got it set up.
But first, to answer your question:
Since the default install and config of Ceph uses replication instead of erasure coding, can I reconfigure Ceph, post-Juju, to use erasure coding for the backend storage?
Yes, you can, if it was deployed using Juju. And, if you are using the latest Juju charms, the ~openstack-charmers ones from the charm store, you can make the changes using Juju actions: see https://jujucharms.com/ceph-osd/ and https://jujucharms.com/ceph-mon/ for more details on the actions. Note that ceph-mon now has a ton of actions to do with creating pools, erasure profiles, etc., but they're not yet documented in the README. Please take a look in the actions/ directory in the charm.
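As a rough sketch of what that looks like (the exact action and parameter names should be checked against the actions/ directory, since they aren't in the README yet; the profile and pool names below are just placeholders):

```shell
# Create an erasure-code profile via the ceph-mon charm.
# k=2 data chunks + m=1 coding chunk is only 50% raw-space overhead,
# versus 200% for the default 3-way replication -- but check that you
# have enough OSDs in separate failure domains to hold k+m chunks.
juju run-action ceph-mon/0 create-erasure-profile \
    name=my-ec-profile k=2 m=1 --wait

# Then create a pool backed by that profile (again, placeholder names;
# verify the parameter names in the charm's actions/ directory).
juju run-action ceph-mon/0 create-pool \
    name=my-ec-pool profile-name=my-ec-profile --wait
```

Note that existing replicated pools can't simply be flipped to erasure coding in place; you'd create new EC pools and migrate data into them.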
My goal is to have a single, large VM running OpenStack, in which I can run both VMs and LXD containers simultaneously. There are better designs, but my requirements are fixed: one VM, a common dashboard for management, distribution of LXD and KVM images via tarballs, and support for both VMs and containers.
-
I’m guessing you used the guide at: https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html ?
-
So you have a host, and then a single, large libvirt VM, and in that you've deployed OpenStack using Juju in LXD containers? i.e. in the VM you bootstrapped the Juju controller into an LXD container, and then Juju deployed a bundle that put the OpenStack control plane into LXD containers in the VM?
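If that's the shape of it, the steps inside the VM would have looked roughly like this (the bundle filename is a placeholder; the real one comes from whatever guide you followed):

```shell
# Inside the large VM: bootstrap a Juju controller onto the local
# LXD provider ("localhost" is Juju's built-in name for local LXD).
juju bootstrap localhost lxd-controller

# Deploy the OpenStack bundle; the control-plane applications then
# land in LXD containers on this same machine.
juju deploy ./openstack-on-lxd-bundle.yaml   # placeholder name

# Watch the units come up in their containers.
juju status
```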
-
Inside that VM, you've configured two nova-compute Juju applications, one for libvirt and one for LXD, and thus you are also using the lxd charm?
So this means that, in the VM, you are using both nested KVM (libvirt) and LXD?
So, how is Ceph configured? Are the OSDs external to the VM? How is redundancy going to work with respect to where the OSDs are? (Ceph's default CRUSH rule splits replicas across hosts, not disks, so if all the OSDs sit in one VM, a replicated pool can never place its copies on separate hosts.)
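A quick way to check that from a ceph-mon unit (a hedged sketch; the default rule is named replicated_rule on recent Ceph releases, older releases used replicated_ruleset):

```shell
# See which host bucket each OSD actually lives under.
juju ssh ceph-mon/0 "sudo ceph osd tree"

# Dump the default replicated rule; look at the "chooseleaf" step --
# if its type is "host" and all OSDs are on one host, replicas
# beyond the first can never be placed.
juju ssh ceph-mon/0 "sudo ceph osd crush rule dump replicated_rule"
```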