Erasure Coding and multi-hypervisor


(Greg) #1

My goal is to have a single, large VM running OpenStack in which I can run both VMs and LXD containers simultaneously. There are better designs, but my requirements are fixed: one VM, a common dashboard for management, distribution of LXD and KVM images via tarballs, and support for both VMs and containers.

Proxmox gives me this to some extent, but it is proprietary and its licensing is incompatible with the intended use and distribution of the environment.

So I have “successfully” deployed an OpenStack environment in a VM, using the openstack-on-lxd documentation as a guide. I have documented where I deviated; the primary changes are as follows:

  • lxd init set up using btrfs instead of zfs

  • two distinct compute nodes in LXD containers, one using virt-type=kvm and one using virt-type=lxd

  • six ceph-osd units using BlueStore, with ceph-osd-replication-count=1 set in all supporting charms
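For anyone trying to reproduce this, the deviations translate to commands roughly like the following. This is only a sketch: the application names nova-compute-kvm and nova-compute-lxd are my own labels, and the config keys (bluestore, ceph-osd-replication-count) come from the charms’ config.yaml files at the time, so double-check them against your charm versions.

```
# lxd init with btrfs instead of zfs (older LXD accepts this as a flag;
# newer releases ask for the storage backend interactively)
lxd init --auto --storage-backend btrfs

# two distinct nova-compute applications, one per hypervisor type
juju config nova-compute-kvm virt-type=kvm
juju config nova-compute-lxd virt-type=lxd

# six ceph-osd units on BlueStore
juju config ceph-osd bluestore=true
juju add-unit -n 5 ceph-osd   # assuming one unit is already deployed

# single-copy pools: replication count of 1 on every Ceph client charm
for app in glance cinder-ceph nova-compute-kvm nova-compute-lxd; do
    juju config "$app" ceph-osd-replication-count=1
done
```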

Since the default install and configuration of Ceph uses replication instead of erasure coding, can I reconfigure Ceph, after the Juju deployment, to use erasure coding for the backend storage?

After I configure the environment for erasure coding, if that is possible, the plan is to create availability zones and flavours so that I can spin up LXD instances on the LXD compute node and VMs on the KVM compute node.
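If the erasure-coding piece works out, I expect that step to look roughly like this; the aggregate, zone, flavour, host, and image names below are placeholders:

```
# one host aggregate per hypervisor, each doubling as an availability zone
openstack aggregate create --zone az-kvm agg-kvm
openstack aggregate create --zone az-lxd agg-lxd
openstack aggregate add host agg-kvm kvm-compute-host
openstack aggregate add host agg-lxd lxd-compute-host

# a flavour for each side
openstack flavor create --vcpus 2 --ram 2048 --disk 20 m1.kvm
openstack flavor create --vcpus 2 --ram 2048 --disk 20 m1.lxd

# boot an instance into the desired zone
openstack server create --flavor m1.lxd --image lxd-rootfs-image \
    --availability-zone az-lxd test-container
```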

I am looking for guidance on Ceph first and the multi-hypervisor config second.


(Stéphane Graber) #2

@ajkavanagh something you could help with?


(Alex Kavanagh) #3

@quartzeye Interesting reading about your setup. So, just seeking to understand how you’ve got this set up?

But first, to answer your question:

Since the default install and configuration of Ceph uses replication instead of erasure coding, can I reconfigure Ceph, after the Juju deployment, to use erasure coding for the backend storage?

Yes, you can, if it was deployed using Juju. And if you are using the latest Juju charms (the ~openstack-charmers ones from the charm store), you can make the changes using Juju actions: see https://jujucharms.com/ceph-osd/ and https://jujucharms.com/ceph-mon/ for more details. Note that ceph-mon now has a ton of actions for creating pools, erasure profiles, etc., but they are not yet documented in the README; please take a look in the actions/ directory of the charm.
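As a rough sketch of the action-driven route (the action and parameter names below are taken from the charms’ actions/ directories and may change, so verify them with juju actions ceph-mon first):

```
# define an erasure profile; with all six OSDs inside one VM the failure
# domain has to be osd rather than the default of host
juju run-action ceph-mon/0 create-erasure-profile \
    name=ec-profile plugin=jerasure data-chunks=4 coding-chunks=2 \
    failure-domain=osd --wait

# create a pool backed by that profile
juju run-action ceph-mon/0 create-pool \
    name=ec-pool pool-type=erasure erasure-profile-name=ec-profile --wait

# the native Ceph equivalent, if you prefer to drive it directly:
#   ceph osd erasure-code-profile set ec-profile k=4 m=2 crush-failure-domain=osd
#   ceph osd pool create ec-pool 64 64 erasure ec-profile
```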

My goal is to have a single, large, VM that is running openstack where I can run both VM’s and LXD containers simultaneously. There are better designs but my requirement is fixed, (1) vm, common dashboard for management, distributed images lxd & KVM via tarballs, support of VM’s & containers.

  • I’m guessing you used the guide at: https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html ?

  • So you have a host and then a single, large libvirt VM, and in that you’ve deployed OpenStack using Juju in LXD containers? i.e. in the VM you bootstrapped the Juju controller into an LXD container, and then used Juju to deploy a bundle that put the OpenStack control plane into LXD containers in the VM?

  • Inside that VM, you’ve configured two nova-compute Juju applications, one libvirt and one LXD, and thus you are also using the LXD charm?

So this means that, in the VM, you are using nested KVM (libvirt) and LXD?
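(As an aside, nested KVM only works if the virtualization extensions are passed through to the big VM; a quick check, assuming an Intel host:)

```
# on the physical host: is nesting enabled in the kvm_intel module?
cat /sys/module/kvm_intel/parameters/nested   # Y (or 1) means enabled

# inside the large VM: does the guest CPU expose vmx/svm?
egrep -c '(vmx|svm)' /proc/cpuinfo            # non-zero means KVM can run nested
```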

So, how is Ceph configured? Are the OSDs external to the VM? How is redundancy going to work with respect to where the OSDs are (Ceph defaults to splitting replicas by host, not by disk)?
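On that last point, here is roughly how you would inspect, and if necessary relax, the failure domain on a single-host cluster (rule and pool names are illustrative):

```
# where does CRUSH think the OSDs live?
ceph osd tree

# which CRUSH rules exist, and what failure domain do they use?
ceph osd crush rule dump

# for a one-host lab, split replicas across OSDs instead of hosts
ceph osd crush rule create-replicated replicated-osd default osd
ceph osd pool set <pool-name> crush_rule replicated-osd
```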